AI is brilliant at being creative. It’s terrible at being accurate. If you work in law, finance, or research, “creative” is a nightmare. You don’t want a hallucinated precedent or a made-up financial regulation.
The problem is that LLMs (Large Language Models) are prediction engines, not truth engines. They predict the next likely word, which sometimes means they invent facts that sound plausible.
## The Fix: Force Citation
To use AI safely in high-stakes fields, you need to constrain it. You need to force it to show its work.
We’ve developed the Fact-Checker Protocol. This prompt stops the model from leaning on its internal training knowledge (which it may misremember or fabricate) and forces it to rely only on the text you provide.
```
# Role
You are a strict forensic auditor. You have NO internal knowledge. You can only answer based on the provided SOURCE TEXT.

# The Rules
1. **Citation Required:** Every single claim must be followed by a quote from the source text in [brackets].
2. **No Outside Info:** If the answer is not in the text, state "Not found in source."
3. **Zero Hallucination:** Do not infer, guess, or fill in gaps. Stick to the ink on the page.

# Source Text
[PASTE DOCUMENT / CONTRACT / REPORT HERE]

# Question
[INSERT YOUR QUESTION HERE]
```
## How to Use It
- Paste the document you need to analyze (a contract, a whitepaper, a transcript).
- Paste the prompt above.
- Ask your question (e.g., “What are the termination clauses?”).
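The steps above can also be scripted if you call a model programmatically. A minimal sketch in Python: the function name is mine, and how you send the assembled `prompt` to your model depends on whichever client library you use.

```python
def build_fact_checker_prompt(source_text: str, question: str) -> str:
    """Assemble the Fact-Checker Protocol prompt around a document and a question."""
    return (
        "# Role\n"
        "You are a strict forensic auditor. You have NO internal knowledge. "
        "You can only answer based on the provided SOURCE TEXT.\n\n"
        "# The Rules\n"
        "1. **Citation Required:** Every single claim must be followed by a quote "
        "from the source text in [brackets].\n"
        "2. **No Outside Info:** If the answer is not in the text, "
        'state "Not found in source."\n'
        "3. **Zero Hallucination:** Do not infer, guess, or fill in gaps.\n\n"
        f"# Source Text\n{source_text}\n\n"
        f"# Question\n{question}\n"
    )

# Example: analyzing a (made-up) contract snippet
prompt = build_fact_checker_prompt(
    "Either party may terminate this agreement with 30 days written notice.",
    "What are the termination clauses?",
)
```

Pass `prompt` to your model as a single user message; keeping the rules and the source in the same message is what makes the constraint hard to ignore.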
The model will return an answer where every sentence is backed by a direct quote. If it can’t find the quote, it should decline to answer. This simple constraint turns a creative writing tool into a far more reliable analysis engine.
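Because every claim carries a bracketed quote, you can also spot-check the output mechanically: extract each `[quote]` and confirm it appears verbatim in the source. A minimal sketch (the function name is mine, and real answers may vary whitespace or punctuation, so treat this as a first-pass filter, not proof):

```python
import re

def unsupported_claims(answer: str, source_text: str) -> list[str]:
    """Return bracketed quotes from the answer that do not appear verbatim in the source."""
    quotes = re.findall(r"\[([^\]]+)\]", answer)
    return [q for q in quotes if q not in source_text]

source = "Either party may terminate this agreement with 30 days written notice."
answer = (
    "The contract allows termination on notice "
    "[terminate this agreement with 30 days written notice]."
)
unsupported_claims(answer, source)  # → [] (every quote checks out)
```

Any quote this returns is one the model paraphrased or invented, and is exactly where to focus a manual review.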