Blog

  • Stop Hallucinations: The “Fact-Checker” Prompt for Professionals

    AI is brilliant at being creative. It’s terrible at being accurate. If you work in law, finance, or research, “creative” is a nightmare. You don’t want a hallucinated precedent or a made-up financial regulation.

    The problem is that LLMs (Large Language Models) are prediction engines, not truth engines. They predict the next likely word, which sometimes means they invent facts that sound plausible.

    The Fix: Force Citation

    To use AI safely in high-stakes fields, you need to constrain it. You need to force it to show its work.

    We’ve developed the Fact-Checker Protocol. This prompt stops the model from relying on its internal training data (which it can misremember or embellish) and forces it to answer only from the text you provide.

    # Role
    You are a strict forensic auditor. You have NO internal knowledge. You can only answer based on the provided SOURCE TEXT.
    
    # The Rules
    1.  **Citation Required:** Every single claim must be followed by a quote from the source text in [brackets].
    2.  **No Outside Info:** If the answer is not in the text, state "Not found in source."
    3.  **Zero Hallucination:** Do not infer, guess, or fill in gaps. Stick to the ink on the page.
    
    # Source Text
    [PASTE DOCUMENT / CONTRACT / REPORT HERE]
    
    # Question
    [INSERT YOUR QUESTION HERE]

    How to Use It

    1. Paste the document you need to analyze (a contract, a whitepaper, a transcript).
    2. Paste the prompt above.
    3. Ask your question (e.g., “What are the termination clauses?”).

    The model will return an answer where every sentence is backed by a direct quote. If it can’t find the quote, it won’t answer. This simple constraint turns a creative writing tool into a far more reliable analysis engine.
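    You can also verify the output mechanically. Here is a short Python sketch (the function name and sample strings are illustrative, not part of the protocol) that checks whether every [bracketed] quote in the model’s answer actually appears verbatim in the source text:

```python
import re

def verify_citations(answer: str, source: str) -> list[str]:
    """Return any [bracketed] quotes in the answer that are NOT
    found verbatim in the source text."""
    quotes = re.findall(r"\[([^\]]+)\]", answer)
    # Normalize whitespace so line wraps don't cause false mismatches.
    norm = lambda s: " ".join(s.split())
    src = norm(source)
    return [q for q in quotes if norm(q) not in src]

source = "Either party may terminate this agreement with 30 days written notice."
answer = (
    "The contract can be ended by either side "
    "[Either party may terminate this agreement with 30 days written notice]."
)
print(verify_citations(answer, source))  # An empty list means every quote checks out.
```

    Any quote the function returns is one the model did not copy faithfully, which is exactly the failure mode this prompt is designed to prevent.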


  • Reverse Prompting: The Secret to Cloning Your Best Content

    Ever written something perfect and thought, “I wish I could make AI write exactly like this every time”? You can. And the technique is called Reverse Prompting.

    What is Reverse Prompting?

    Most people try to guess the prompt. They type, delete, and type again, hoping to stumble upon the magic words that created a specific output. Reverse prompting flips this script. Instead of guessing, you feed the output back into the AI and ask it to write the prompt for you.

    The Protocol

    This technique works on ChatGPT, Claude, and Gemini. Use it whenever you have a “gold standard” example (an email, a landing page, a report) and you want to replicate its structure.

    # Role
    You are an expert Prompt Engineer. Your job is to reverse-engineer the prompt that would generate the text below.
    
    # Input Text
    [PASTE YOUR GOLD STANDARD TEXT HERE]
    
    # Instructions
    Analyze the tone, structure, vocabulary, and formatting of the input text. Then, write a reusable prompt template that would cause an LLM to generate new content in this EXACT style. Include placeholders like [TOPIC] or [AUDIENCE] where necessary.
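    If you run this through an API instead of a chat window, the template is easy to assemble with plain string formatting. A minimal sketch (function and variable names here are illustrative):

```python
REVERSE_PROMPT = """\
# Role
You are an expert Prompt Engineer. Your job is to reverse-engineer the prompt
that would generate the text below.

# Input Text
{gold_standard}

# Instructions
Analyze the tone, structure, vocabulary, and formatting of the input text.
Then, write a reusable prompt template that would cause an LLM to generate
new content in this EXACT style. Include placeholders like [TOPIC] or
[AUDIENCE] where necessary."""

def build_reverse_prompt(gold_standard: str) -> str:
    """Fill the protocol template with your gold-standard example."""
    return REVERSE_PROMPT.format(gold_standard=gold_standard.strip())

prompt = build_reverse_prompt("Subject: Quick favor\n\nHey Sam, loved your talk...")
```

    Keeping the template in one constant means every gold-standard example gets the exact same framing, so the outputs stay comparable.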

    Why It’s a Game Changer

    This eliminates the “blank page” problem for prompt engineering. You don’t need to know the technical terms for the tone you like (is it “authoritative” or “assertive”?). You just show the AI what winning looks like, and let it build the map to get there.


  • The “Humanizer” Prompt: How to Fix Robotic AI Copy

    Let’s be honest. Most AI writing sounds like it was written by… well, an AI. It’s stiff. It’s repetitive. It loves words like “delve” and “tapestry.” And worst of all? It’s boring.

    If you use tools like ChatGPT or Claude for marketing copy, emails, or blog posts, you know the struggle. You spend more time editing the “robot voice” out than you would have spent writing it from scratch.

    The Problem with Default Prompts

    The default setting for most LLMs is “helpful assistant.” That means polite, neutral, and incredibly verbose. It loves to summarize things with “In conclusion” and structure every paragraph exactly the same way.

    Readers can smell this a mile away. And when they do, they tune out.

    The Solution: The Humanizer Prompt

    We built a prompt specifically designed to strip away those AI habits. It forces the model to write like a veteran copywriter, not a chatbot. It focuses on rhythm, concrete details, and natural phrasing.

    # Role
    You are a veteran copywriter and editor. Your enemy is “slop”—generic, robotic, filler-heavy text. Your goal is to rewrite the provided content so it sounds like it was written by a smart, engaging human.
    
    # Constraints (The “Human” Rules)
    1.  **No Dashes for Pauses:** Do not use em-dashes (—) or hyphens (-) as pause punctuation. Use commas, periods, or start a new sentence.
    2.  **No “AI Tells”:** Ban these words/phrases: “In today’s digital landscape,” “delve,” “tapestry,” “it is important to note,” “leverage,” “foster,” “moreover,” “furthermore.”
    3.  **Rhythm:** Vary sentence length. Mix short, punchy sentences with longer, flowing ones. Never use the same structure twice in a row.
    4.  **Voice:** Use contractions (don’t, it’s, we’re). Be opinionated. Address the reader directly (“you”).
    5.  **Concrete Details:** If the input says “various solutions,” replace it with “tools like X and Y.” Kill abstraction.
    
    # Input Text
    [PASTE ROBOTIC TEXT HERE]
    
    # Instructions
    Rewrite the input text completely. Keep the core meaning but change the structure, tone, and vocabulary. Make it sound like a conversation, not a textbook.

    Why This Works

    This prompt works because it attacks the specific patterns that LLMs fall into. By banning the “tells” and forcing sentence variety, you steer the model away from its most statistically likely, and most generic, phrasings.
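    You can even lint the rewrite mechanically. A small sketch (the word list simply mirrors the banned phrases in the prompt above) that flags any surviving AI tells:

```python
AI_TELLS = [
    "in today's digital landscape", "delve", "tapestry",
    "it is important to note", "leverage", "foster",
    "moreover", "furthermore",
]

def find_tells(text: str) -> list[str]:
    """Return the banned words/phrases that appear in the text."""
    lowered = text.lower()
    return [t for t in AI_TELLS if t in lowered]

find_tells("Let's delve into the rich tapestry of ideas.")
```

    If the list comes back non-empty, run the offending passage through the Humanizer prompt again.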

    Give it a try on your next newsletter or social post. You might be surprised at how much better it sounds.