1.8 The Critic

Estimated time: 25 min · Estimated cost: ~$0.01 · Tool: Claude Haiku
After this drill, you can evaluate AI output for accuracy, completeness, and hallucination, and you have a systematic process for doing so.

Why this matters

AI produces confident-sounding output whether it's right or wrong. This is the most dangerous property of large language models for everyday users — not that they fail, but that they fail while seeming certain. The Critic drill builds the essential counterbalance: a habit of systematic verification that protects you from AI overconfidence. Every piece of AI output you act on should pass through a version of this check.

How to do it

  1. Choose a factual claim from any AI response you received in this module

    Pick a specific claim — not a general statement, but a concrete fact that could be checked. "Compound interest was invented in ancient Babylon" or "EU GDPR fines can reach 4% of global annual revenue."

  2. Run the claim through the Critic process: check, probe, and verify

    First check it yourself if you can. Then use the critic prompt below to ask the model to evaluate its own claim. Then verify with an external source.

  3. Document one hallucination, one confident-but-vague claim, and one accurate claim

    This forces you to look for all three categories, not just mistakes. (A small logging sketch follows this list.)

  4. Add the critic check as a step in your personal workflow

    For any AI output you will act on or share: ask "what claim in this output would cause the most damage if it were wrong?"
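
If you want to keep a running log of what you find, a small record type makes the three categories in step 3 concrete. This is a minimal sketch; the field names and the example entry are illustrative, not part of the drill:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class CriticFinding:
    """One claim you checked, logged per step 3 of the drill."""
    claim: str             # the specific claim, quoted verbatim
    category: str          # "hallucination", "confidently vague", or "accurate"
    source_checked: str    # the external source you used to verify it
    checked_on: date = field(default_factory=date.today)

# Example entry; the claim and source are placeholders.
log = [
    CriticFinding(
        claim="Compound interest was invented in ancient Babylon",
        category="confidently vague",
        source_checked="an encyclopedia entry on the history of interest",
    ),
]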

The prompt

PROMPT — The Critic Check (evaluate a specific claim) · Model: Claude Haiku · Est. cost: ~$0.01
I want you to evaluate a claim from your earlier response.

The claim was: [PASTE THE SPECIFIC CLAIM]

Please:
1. Rate your confidence in this claim (high / medium / low)
2. Explain what you would need to be fully confident
3. Identify the most likely way this claim could be wrong or outdated
4. Suggest one external source I could check to verify it
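
If you run this check often, you can script it. Below is a minimal sketch using the official anthropic Python SDK; the model ID is an assumption (substitute whichever Haiku model your account offers), and it expects ANTHROPIC_API_KEY to be set in your environment:

# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CRITIC_PROMPT = """I want you to evaluate a claim from your earlier response.

The claim was: {claim}

Please:
1. Rate your confidence in this claim (high / medium / low)
2. Explain what you would need to be fully confident
3. Identify the most likely way this claim could be wrong or outdated
4. Suggest one external source I could check to verify it"""

def critic_check(claim: str) -> str:
    """Send one specific claim through the critic prompt and return the evaluation."""
    message = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumption: any current Haiku model works here
        max_tokens=500,
        messages=[{"role": "user", "content": CRITIC_PROMPT.format(claim=claim)}],
    )
    return message.content[0].text

print(critic_check("EU GDPR fines can reach 4% of global annual revenue."))

One caveat: a standalone API call has no "earlier response" in its context, so the model evaluates the claim cold. For a true self-check, paste the claim back into the same conversation that produced it.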
PROMPT — Full Output Critic Review · Model: Claude Pro
Review the following AI-generated text for potential issues:

[PASTE THE AI OUTPUT TO REVIEW]

Check for:
1. Factual claims that could be verified (list them as bullet points)
2. Claims that are stated with more confidence than is warranted
3. Anything that sounds plausible but is suspiciously convenient or too neat
4. Any outdated information that might have changed

Rate overall reliability as: High / Medium / Low, with one sentence explanation.
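
The full review can be scripted the same way. Here is a sketch that runs the review and extracts the overall rating; note that Claude Pro is a chat subscription rather than an API model, so the model ID below is an assumption:

import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

REVIEW_PROMPT = """Review the following AI-generated text for potential issues:

{output}

Check for:
1. Factual claims that could be verified (list them as bullet points)
2. Claims that are stated with more confidence than is warranted
3. Anything that sounds plausible but is suspiciously convenient or too neat
4. Any outdated information that might have changed

Rate overall reliability as: High / Medium / Low, with one sentence explanation."""

def review_output(ai_output: str) -> tuple[str, str]:
    """Run the full critic review; return (overall rating, full review text)."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: a larger model than the quick check uses
        max_tokens=800,
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(output=ai_output)}],
    )
    review = message.content[0].text
    # Take the first High/Medium/Low the model states; "Unknown" if the format slipped.
    match = re.search(r"\b(High|Medium|Low)\b", review)
    rating = match.group(1) if match else "Unknown"
    return rating, review

rating, review = review_output("Paste the AI output you want to review here.")
print(rating)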

Success criteria

  • You ran one claim through the full critic process
  • You documented one hallucination, one vague claim, and one accurate claim
  • You have a one-sentence personal rule for when to apply the critic check
  • You understand the difference between "hallucination" and "confidently vague"

Common mistakes

Asking AI to fact-check itself as the only verification step

AI self-evaluation is helpful but not sufficient. External verification (a search, a primary source, your own knowledge) is the gold standard for high-stakes claims.

Only looking for hallucinations (missing the other failure modes)

Confidently vague claims ("research shows that...") and outdated information are just as dangerous as outright hallucinations. Check for all three.

Applying the critic check to low-stakes output (it becomes paralyzing)

Save the critic check for output you will act on, share with others, or use as evidence. Not every brainstorm needs full verification.