ChatGPT Usage Tips

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng
    Andrew Ng Andrew Ng is an Influencer

    DeepLearning.AI, AI Fund and AI Aspire

    2,414,643 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection. Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows: Here’s code intended for task X: [previously generated code] Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it. Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement. Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses. Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about reflection, I recommend: - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023) - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023) - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024) [Original text: https://lnkd.in/g4bTuWtU ]

  • View profile for Armand Ruiz
    Armand Ruiz Armand Ruiz is an Influencer

    building AI systems

    204,751 followers

    Anthropic’s “Prompting 101” is one of the best real world tutorials I’ve seen lately on how to actually build a great prompt. Not a toy example. They showcase a real task: analyzing handwritten Swedish car accident forms. Here’s the breakdown: 𝟭. 𝗣𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴 𝗶𝘀 𝗶𝘁𝗲𝗿𝗮𝘁𝗶𝘃𝗲. You don’t write the perfect prompt on the first try. You test, observe, refine. Just like any other product loop. 𝟮. 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗺𝗮𝘁𝘁𝗲𝗿𝘀. The best prompts follow a playbook: - Start with task + tone context - Load static knowledge into the system prompt - Give clear rules and step-by-step instructions - Show concrete examples - Ask the model to think step-by-step - Define structured output 𝟯. 𝗗𝗼𝗻’𝘁 𝘁𝗿𝘂𝘀𝘁 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹, 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱 𝘁𝗼 𝗱𝗲𝘀𝗶𝗴𝗻 𝗳𝗼𝗿 𝗶𝘁. In the first version, Claude hallucinated a skiing accident. Only after adding context, rules, and constraints did it produce reliable results. You wouldn’t let a junior analyst guess on regulatory filings. Don’t let your LLM do it either. 𝟰. 𝗧𝗵𝗲 𝗽𝗿𝗼𝗺𝗽𝘁 𝗶𝘀 𝘁𝗵𝗲 𝗶𝗻𝘁𝗲𝗿𝗳𝗮𝗰𝗲. In traditional software, interfaces are buttons and APIs. In GenAI, the interface is language. Your prompt is the program. Most teams still treat prompts like notes in a playground. High-performing teams treat them like production code. That's why in our IBM watsonx platform, prompts are assets just like code or data. 👉Access the video tutorial here: https://lnkd.in/gUdHc2uy ________________________ ♻️ Repost to share these insights! ➕ Follow Armand Ruiz for more

  • View profile for Ruben Hassid

    Master AI before it masters you.

    786,728 followers

    STOP asking ChatGPT to "make it better". Here's how to better prompt it instead: ☑ Clearly Identify the Issue Rather than a vague “make it better,” specify the exact element that needs change. For example: "Rewrite the second paragraph so it includes three concrete examples of our product’s benefits. The tone must be formal and persuasive. Remove any informal language or redundant phrases." ☑ Divide the Task into Discrete Steps Break the overall revision into a sequence of manageable tasks. For example: "Go through my instructions, step by step. – Step 1: Summarize it in one sentence. – Step 2: Identify two specific weaknesses. – Step 3: Rewrite the text to address these weaknesses, incorporating specific data or examples." ☑ Specify the Format and Level of Detail Define exactly how the final output should look. For example: "Provide the final revised text as a numbered list where each item contains 2–3 sentences. Each item must include at least one statistical fact or concrete example, and the overall response should not exceed 250 words." ☑ Request a Chain-of-Thought Explanation Ask the model to detail its reasoning process before giving the final output. For example: "Before providing the final revised text, explain your reasoning step-by-step. Identify which parts need improvement and how your changes will enhance clarity and professionalism. Then, present the final revised version." ☑ Conditional Instructions to Enforce Compliance Add if/then conditions to ensure all requirements are met. For example: "If the revised text does not include at least two concrete examples, then add a sentence with a real-world statistic. Otherwise, finalize the response as is." ☑ Consolidate All Instructions into One Prompt Integrate all the detailed instructions into a single, comprehensive prompt. For example: "First, identify the section of the text that needs improvement and explain why it is lacking. Next, summarize the current text in one sentence and list two specific weaknesses. Then, rewrite the text to address these weaknesses, ensuring the revised version includes three concrete examples, uses a formal and persuasive tone, and is structured as a numbered list with each item containing 2–3 sentences. Each list item must include at least one statistical fact or example, and the overall response must be no longer than 250 words. Before providing the final text, explain your reasoning step-by-step. If the revised text does not include at least two concrete examples, add an additional sentence with a real-world statistic." ___ Why This Works People never give enough context. And once ChatGPT answers, they never correct it enough. Think about it like an intern. Deep prompting is all about precision: give clear instructions, context & the right corrections. PS: Don't forget to use the new o3-mini model. It's crushing any other one. Yes – even DeepSeek.

  • View profile for Allie K. Miller
    Allie K. Miller Allie K. Miller is an Influencer

    #1 Most Followed Voice in AI Business (2M) | Former Amazon, IBM | Fortune 500 AI and Startup Advisor, Public Speaker | @alliekmiller on Instagram, X, TikTok | AI-First Course with 300K+ students - Link in Bio

    1,626,317 followers

    “What AI skill should my team and I actually learn right now?” I will scream this from the rooftops of NYC. ➡️ Learn agent delegation Target a dedicated workflow or task. Assign an AI agent said role, define the outcome, set constraints, and schedule review gates. Treat it like a junior teammate and give it work, while monitoring so you can review for accuracy. Here’s my do-this-now stack, and how I’d run it with a team ⏬ If you’re a beginner: Start with ChatGPT Agent Mode. Open a new ChatGPT chat and change the dropdown to ‘Agent Mode’. It can plan tasks, execute steps, and return cited outputs for market scans, vendor comparisons, executive briefs, and decision memos. Kick off the job, let it run, WATCH IT RUN, and then review the completion. If you’re more technical or ops-heavy: Use Claude Code when the work requires operating UIs or your computer - clicking through portals, filling forms, wrangling spreadsheets, saving down documents. Expect more upfront setup and ownership, so keep a step-by-step prompt checklist, add automatic reruns for failing steps, and update the checklist only when the site’s labels or paths change. If you’re living in Google Workspace: Turn on Google connectors (Drive, Gmail, Calendar) inside ChatGPT or Claude. Ask the model to find your team’s file, summarize threads, compare document versions, prepare for and schedule meetings, or draft from past emails. This lets your agent pull context and act on it without manual hunting. How to turn this into outcomes in 30 days ⏬ → Twice a week, use Agent Mode to produce a one-page brief with citations and a recommendation on a real business question. Track cycle time and data/citation quality, and, where relevant, use Claude Code to automate in parallel. At the end of the month, you should know where a few agents can tackle real work and have the data to support what to scale. #AIinWork

  • View profile for Dan Martell

    📘 Bestselling Author (Buy Back Your Time) 🚀 Building AI startups @Martell Ventures ⚙️ 3x Software Exits • $100M+ HoldCo 💬 DM "COACH" if you're looking to scale

    168,002 followers

    A few weeks ago I told my team that AI needs to do 92% of their work or they'll get left behind. Here’s how we're doing it (and why): Step 1: Get ChatGPT Plus/Pro Step 2: Create your master prompt • Tell AI: "I'm [your role] at [company type]. Create a master prompt for me. Ask me every question you need to give me the most context possible." • Spend 30-45 minutes answering everything it asks • Save the output as a PDF • Upload this to every new chat so AI knows your full context Step 3: Build system prompts Master prompts tell AI who you are. System prompts tell AI HOW to work. Here's the process: • Ask AI to create any output (email, ad, report) • Keep refining until it's perfect (3-6 iterations) • Then ask: "Write the system prompt that would have generated this output" • Save that prompt - it's now your intellectual property Now you have the exact formula to get that quality every time. Step 4: Use project folders  Think of these like rooms in your office with all context on the walls. • Create a project for each major area of your life/business • Upload your master prompt + all relevant documents • Every conversation builds on previous context • Share folders with your team for instant knowledge transfer I use this for investment decisions, business strategy, even family planning. Step 5: Set your custom instructions This makes AI remember how you like outputs formatted. Go to Settings → Personalization → Custom Instructions: • Tell it your communication style (short, bullet points, no fluff) • Remove AI language like "delve" and "moreover"  • Set your default tone and format preferences Never repeat formatting requests again. Step 6: Turn everything into custom GPTs These are your AI employees that do specific tasks consistently. • Take your best system prompts • Create custom GPTs for each repeatable task • Share them with your team • Update once, everyone gets the improvement I have custom GPTs for: emails, content creation, financial analysis, hiring, strategy docs. Step 7: Refine and improve Use AI to teach you AI. • Ask it to create your master prompt • Ask it to write your system prompts  • Ask it to suggest custom instructions • Ask it to help you build better prompts Here's what 92% actually looks like: - Content: AI does research, outlines, first drafts. You edit and add your voice. - Operations: AI creates SOPs, analyzes processes, suggests improvements. You decide. - Finance: AI analyzes reports, creates models, finds insights. You make decisions. - Strategy: AI processes information, suggests options. You choose direction. The 8% that stays human: Vision, taste, final decisions, and emotional intelligence. My team went from thinking AI was "kind of helpful" to saying it's their most valuable employee. It could be yours too. -DM P.S. If you want my complete prompting template and the 7 system prompts that save me 15+ hours per week, MESSAGE ME the word "AI" and I'll send it over. My gift to you 👊

  • View profile for Usman Sheikh

    I co-found companies with experts ready to own outcomes, not give advice.

    56,021 followers

    Prompt engineering is the new consulting superpower. Most haven't realized it yet. Over the last couple of days, I reviewed the latest guides by Google, Anthropic and OpenAI. Some of the key recommendations to improve output: → Being very specific about expertise levels requested → Using structured instructions or meta prompts → Explicitly referencing project documents in the prompt → Asking the model to "think step by step" Based on the guides, here are four ways to immediately level up your prompting skill set as a consultant: 1. Define the expert persona precisely "You're a specialist with 15 years in retail supply chain optimization who has worked with Target and Walmart." Why it matters: The model draws from deeper technical patterns, not just general concepts. 2. Structure the deliverable explicitly "Provide 3 key insights, their implications and then support each with data-driven evidence." Why it matters: This gives me structured material that needs minimal editing. 3. Set distinctive success parameters "Focus on operational inefficiencies that competitors typically overlook." Why it matters: You push the model beyond obvious answers to genuine competitive insights. 4. Establish the decision context "This is for a CEO with a risk-averse investor applying pressure to improve their gross margins." Why it matters: The recommendations align with stakeholder realities and urgency. The above were the main takeaways I took from the guides which I found helpful. When you run these prompts versus generic statements, you will see a massive difference in quality and relevance. Bonus tips which are working for me: → Create prompt templates using the four elements → Test different expert personas against the same problem (I regularly use "Senior McKinsey partner" to counter my position detecting gaps in my thinking.) → Ask the model to identify contradictions or gaps in the data before finalizing any recommendations. We’re only scratching the surface of what these “intelligence partners” can offer. Getting better at prompting may be one of the most asymmetric skill opportunities all of us have today. Share your favourite prompting tip below! P.S Was this post helpful? Should I share one post per week on how I’m improving my AI-related skills?

  • View profile for Chase Dimond

    Top Ecommerce Email Marketer | $200M+ Generated via Email

    446,992 followers

    10 Ways to Use ChatGPT to Improve Your Copy: (With Simple Copy-and-Paste Examples) 1) Trimming Down Goal: Condense your copy for clarity and impact. Focus on: Complex sentences Redundant phrases Long paragraphs Example prompt: "Trim down this [phrase/sentence/paragraph] of my copy." 2) Finding Word Alternatives Goal: Find better synonyms for certain words to enhance readability and engagement. Look to replace: Fillers Jargon Clichés Adverbs Buzzwords Example prompt: "Provide [adjective] alternatives for the word [word] in this copy." 3) Doing Research Goal: Gather detailed information about your target audience to tailor your copy. Consider: Likes Habits Values Dislikes Interests Behaviors Challenges Pain points Aspirations Demographics Example prompt: "Create an ideal customer profile for [target audience]." 4) Generating Ideas Goal: Brainstorm multiple copy elements to keep your content fresh and engaging. Do this for: CTAs Stories Leads Angles Headlines Example prompt: "Generate multiple [element] ideas for this copy." 5) Fixing Errors Goal: Identify and correct any errors in your copy to maintain professionalism. Check for: Spelling mistakes Grammatical errors Punctuation issues Example prompt: "Check this copy for any [type] errors and suggest corrections." 6) Improving CTAs Goal: Make your call-to-actions more compelling and click-worthy. Play around with: Benefits Urgency Scarcity Objections Power words Example prompt: "Give me [number] variations for this CTA: [original CTA]." 7) Studying Competitors Goal: Gain insights from your competitors' copy to improve your own. Analyze their: CTAs USPs Offers Leads Hooks Headlines Example prompt: "Provide a breakdown of [competitor]'s latest [ad/email/sales page]." 8) Nailing the Voice Goal: Refine the tone and voice of your copy to align with your brand and audience. Consider: Target audience Brand guidelines Advertising channel Example prompt: "Make this copy [adjectives] to suit [target audience]." 9) Addressing Objections Goal: Anticipate and address potential customer objections to increase conversion rates. These could be about: Price Quality Usability Durability Compatibility Example prompt: "Analyze this copy to find and address potential objections." 10) A/B Testing Goal: Create variations of your copy's elements to determine what works best. Try different: CTAs Hooks Angles Closings Headlines Headings Frameworks Example prompt: "Generate variations of this [element] for A/B testing: [original element]."

  • View profile for George Stern

    Entrepreneur, speaker, author. Ex-CEO, McKinsey, Harvard Law, elected official. Volunteer firefighter. ✅Follow for daily tips to thrive at work AND in life.

    372,521 followers

    You don't need more hours. You need more effective AI prompts: Use these 21 ChatGPT prompts to save yourself 10 hours next week - 1) Inbox Zero ↳Prompt: "Act as an executive assistant. Here are 10 emails I don't know how to respond to. Draft quick, professional replies I can send or edit." 2) Delegation Help ↳Prompt: "Here are 5 things on my plate. Act like a manager and help me decide what to delegate, and how to frame each task for handoff." 3) Shorter Meetings ↳Prompt: "Act as a meeting consultant. Here's an agenda. Help me trim it by 30% while keeping the outcomes strong and the flow efficient." 4) Daily Focus ↳Prompt: "Act as a productivity coach. Here's my to-do list for tomorrow. Help me pick the 3 highest-impact tasks and create a simple plan to protect time for them." 5) Smarter Scheduling ↳Prompt: "Here's my calendar for the week. Act as a time management expert and help me batch similar tasks, reduce context switching, and free up focus time." 6) Weekly Reset ↳Prompt: "Act like a performance coach. Give me a 15-minute Sunday reset ritual to review the past week, plan the next, and start Monday focused." 7) Automated Systems ↳Prompt: "Act as a workflow expert. Here's a process I repeat every week. Suggest simple ways to automate or streamline it using basic tools." 8) Decision Clarity ↳Prompt: "Here's a decision I'm stuck on. Act like a coach and walk me through a step-by-step framework to decide with more clarity and speed." 9) First Draft Faster ↳Prompt: "Act as a writing assistant. Here's the topic. Help me outline and rough-draft a clear, structured blog post in 15 minutes or less." 10) One-Touch Tasks ↳Prompt: "Here are 10 small tasks I've been putting off. Help me write a quick plan to knock them out in one focused 30-minute sprint." 11) Rapid Research ↳Prompt: "Act as a research assistant. I need to understand this topic fast. Give me 5 reliable sources, a 2-sentence summary, and what I should read first." 12) Pomodoro Plan ↳Prompt: "Act as a productivity coach. Here's a task I've been avoiding. Help me break it into 25-minute sprints with clear goals for each one." 13) Cleaner Docs ↳Prompt: "Act as an editor. Here's a messy doc or note. Clean it up, make it scannable, and pull out a bullet list of key action items." 14) Thinking Partner ↳Prompt: "Act like a thinking partner. Here's a challenge I'm facing. Help me explore 3 different angles or mental models to reframe it." 15) Info Compression ↳Prompt: "Here's a long article or transcript. Summarize the key ideas in 5 bullet points, and give me one actionable takeaway." [Check out the sheet for 6 more] If you're not using AI to help you be more productive, You're wasting hours you could be spending on the things that matter most. Put this sheet to work. You won't regret it. --- ♻️ Repost to help others be more productive. And follow me George Stern for more content like this.

  • View profile for Rahul Agarwal

    AI Agents | GenAI Insights | Agentic AI Strategist | Mentor | 10x Your Career with AI Tools | Simplifying AI | Future of Work | Helping You Upskill

    24,312 followers

    2 ways AI systems today generate smarter answers. I’ve explained both in simple steps below. 𝗥𝗔𝗚 (𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻) (𝘴𝘵𝘦𝘱-𝘣𝘺-𝘴𝘵𝘦𝘱) RAG lets AI fetch and use real-time external information to generate fact-based, updated answers. 1. 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗾𝘂𝗲𝗿𝘆 – User asks a question or gives input. 2. 𝗘𝗻𝗰𝗼𝗱𝗲 𝗶𝗻𝗽𝘂𝘁 – Convert it into a machine-readable format. 3. 𝗧𝗼𝗸𝗲𝗻𝗶𝘇𝗲 𝘁𝗲𝘅𝘁 – Break the query into small understandable pieces. 4. 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲 𝗲𝗺𝗯𝗲𝗱𝗱𝗶𝗻𝗴𝘀 – Turn text into numeric vectors that capture meaning. 5. 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗲 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 – Search a vector database for relevant information. 6. 𝗦𝗲𝗹𝗲𝗰𝘁 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 – Pick the most useful retrieved chunks. 7. 𝗙𝗶𝗹𝘁𝗲𝗿 𝗻𝗼𝗶𝘀𝗲 – Remove irrelevant or low-quality data. 8. 𝗙𝘂𝘀𝗲 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 – Combine external info with the model’s internal knowledge. 9. 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗲 – Create an answer using both retrieved data and reasoning. 10. 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗲 𝗼𝘂𝘁𝗽𝘂𝘁 – Check for factual accuracy and coherence. 11. 𝗥𝗲𝗺𝗼𝘃𝗲 𝗯𝗶𝗮𝘀 – Eliminate misleading or biased phrasing. 12. 𝗗𝗲𝗹𝗶𝘃𝗲𝗿 𝗳𝗶𝗻𝗮𝗹 𝗼𝘂𝘁𝗽𝘂𝘁 – Provide the user with a reliable, fact-backed response. __________________________________________________ 𝗖𝗔𝗚 (𝗖𝗼𝗻𝘁𝗲𝘅𝘁-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻) (𝘴𝘵𝘦𝘱-𝘣𝘺-𝘴𝘵𝘦𝘱) CAG lets AI remember past interactions to provide more relevant, personalized, and context-aware responses. 1. 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗾𝘂𝗲𝗿𝘆 – User provides input or a task request. 2. 𝗣𝗿𝗼𝗰𝗲𝘀𝘀 𝗶𝗻𝗽𝘂𝘁 – Convert it into a structured format for the model. 3. 𝗜𝗻𝗷𝗲𝗰𝘁 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 – Add relevant background (past chats, user data, goals). 4. 𝗥𝗲𝗰𝗮𝗹𝗹 𝗱𝗼𝗺𝗮𝗶𝗻 𝗺𝗲𝗺𝗼𝗿𝘆 – Bring in domain-specific knowledge or prior interactions. 5. 𝗔𝗰𝗰𝗲𝘀𝘀 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗯𝗮𝘀𝗲 – Fetch related internal or external references. 6. 𝗠𝗲𝗿𝗴𝗲 𝗱𝗮𝘁𝗮 – Combine all context and knowledge sources. 7. 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲 𝗼𝘂𝘁𝗽𝘂𝘁 – Create a response using this rich, aligned context. 8. 𝗩𝗲𝗿𝗶𝗳𝘆 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗲 – Check the result for logical and contextual accuracy. 9. 𝗘𝘅𝗽𝗮𝗻𝗱 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 – Enrich the response with more relevant details if needed. 10. 𝗔𝗹𝗶𝗴𝗻 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 – Ensure the output fits the user’s prior goals or conversation. 11. 𝗖𝗵𝗲𝗰𝗸 𝗰𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 – Confirm that everything stays coherent and connected. 12. 𝗗𝗲𝗹𝗶𝘃𝗲𝗿 𝗳𝗶𝗻𝗮𝗹 𝗼𝘂𝘁𝗽𝘂𝘁 – Provide a complete, context-aware, and consistent answer. In short: • 𝗥𝗔𝗚 gives models access to the 𝗿𝗶𝗴𝗵𝘁 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻. • 𝗖𝗔𝗚 helps them use it 𝗶𝗻 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗰𝗼𝗻𝘁𝗲𝘅𝘁. Together, they make AI systems: more accurate, more reliable, more personalized and more useful in real-world workflows. ✅ Repost for others in your network who can benefit from this.

  • View profile for Edward Frank Morris
    Edward Frank Morris Edward Frank Morris is an Influencer

    I build AI frameworks, lead strategy, and teach AI to anyone from Fortune 500s to universities. My face has been on NASDAQ, FT, and Forbes. My jokes have not. Yet.

    35,236 followers

    A few months ago, a colleague screamed at Microsoft Copilot like he was auditioning for Bring Me The Horizon. He typed, “Make this into a presentation.” Copilot spat out something. He yelled, “NO, I SAID PROFESSIONAL!” It revised it. Still wrong. “WHY ARE YOU SO STUPID?” And that, dear reader, is when it hit me. It’s not the AI. It’s you. Or rather, your prompts. So, if you've ever felt like ChatGPT, Copilot, Gemini, or any of those AI Agents are more "artificial" than "intelligent"? Then rethink how you’re talking to them. Here are 10 prompt engineering fundamentals that’ll stop you from sounding like you're yelling into the void. 1. Lead with Intent. Start with a clear command: “You are an expert…,” “Generate a monthly report…,” “Translate this to French…" This orients the model instantly. 2. Scope & Constraints First. Define boundaries up front. Length limits, style guides, data sources, even forbidden terms. 3. Format Your Output. Specify JSON schema, markdown headers, or table columns. Models love explicit structure over free form prose. 4. Provide Minimal, High Quality Examples. Two or three exemplar Q→A pairs beat a paragraph of explanation every time. 5. Isolate Subtasks. Break complex workflows into discrete prompts (chain of thought). One prompt per action: analyze, summarize, critique, then assemble. 6. Anchor with Delimiters. Use triple backticks or XML tags to fence inputs. Cuts hallucinations in half. 7. Inject Domain Signals. Name specific frameworks (“Use SWOT analysis,” “Apply the Eisenhower Matrix,” “Leverage Porter’s Five Forces”) to nudge depth. 8. Iterate Rapidly. Version your prompts like code. A/B test variations, track which phrasing yields the cleanest output. 9. Tune the “Why.” Always ask for reasoning steps. Always. 10. Template & Automate. Build parameterized prompt templates in your repo. Still with me? Good. Bonus tips. 1. Token Economy Awareness. Place critical context in the first 200 tokens. Anything beyond 1,500 risks context drift. 2. Temperature vs. Prompt Depth. Higher temperature amplifies creativity. Only if your prompt is concise. Otherwise you get noise. 3. Use “Chain of Questions.” Instead of one long prompt, fire sequential, linked questions. You’ll maintain context and sharpen focus. 4. Mirror the LLM’s Own Language. Scan model outputs for phrasing patterns and reflect those idioms back in your prompts. 5. Treat Prompts as Living Docs. Embed metrics in comments: note output quality, error rates, hallucination frequency. Keep iterating until ROI justifies the effort. And finally, the bit no one wants to hear. You get better at using AI by using AI. Practice like you’re training a dragon. Eventually, it listens. And when it does, it’s magic. You now know more about prompt engineering than 98% of LinkedIn. Which means you should probably repost this. Just saying. ♻️

Explore categories