AI Literacy Guide
Understanding AI · 4 min read

The hallucination problem

Why AI makes things up.

Hallucination occurs when an AI generates false information with complete confidence. It happens because LLMs are optimized to be helpful and plausible, not factually accurate: they predict the most likely next word, which can produce made-up citations, fake historical events, or "facts" that don't exist.
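To see how "predicting the most likely next word" can produce a fluent falsehood, here's a toy sketch: a tiny bigram model that always emits the statistically likeliest continuation. The corpus probabilities are invented for illustration, and real LLMs are vastly more complex, but the core mechanic is the same: the model optimizes for plausibility, not truth.

```python
# Toy bigram "language model": it always emits the most likely next
# word, with no notion of whether the sentence it builds is true.
# The probabilities below are invented for illustration.
bigrams = {
    "the": {"study": 0.6, "capital": 0.4},
    "study": {"was": 1.0},
    "was": {"published": 1.0},
    "published": {"in": 1.0},
    "in": {"Nature": 0.7, "2019": 0.3},
}

def generate(start, max_steps=5):
    words = [start]
    for _ in range(max_steps):
        options = bigrams.get(words[-1])
        if not options:
            break
        # Greedy decoding: take the highest-probability continuation.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # "the study was published in Nature"
```

The output reads like a real citation, but nothing in the model checked whether any such study exists. That gap between fluency and truth is the hallucination problem in miniature.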

The confident liar

Hallucination is the industry term for when an AI confidently tells you something false. LLMs aren't retrieval systems; they don't look things up in a database. When a model lacks reliable information, it doesn't say "I don't know." Instead, it generates what looks like a reasonable answer. That's a feature of the architecture, not a bug that can be easily fixed.

Can we stop the lies?

Reducing hallucination is the Final Boss of AI research. We're seeing improvements with Retrieval-Augmented Generation (RAG), which essentially gives the AI a textbook to consult while it answers, but as long as these systems are built on prediction, the risk of a confident guess remains. Verification isn't optional: it's the only way to play.
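The "textbook" idea behind RAG can be sketched in a few lines. This is a minimal illustration with an invented document list and a naive keyword-overlap retriever; production systems use vector embeddings and real search, but the shape is the same: fetch relevant text first, then have the model answer *from* that text instead of guessing from its parameters.

```python
# Minimal RAG sketch (toy retriever, invented documents).
documents = [
    "The Eiffel Tower was completed in 1889.",
    "The Great Wall of China is over 13,000 miles long.",
    "Photosynthesis converts sunlight into chemical energy.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    # Prepending the retrieved passage grounds the model's answer in
    # real text, which reduces (but does not eliminate) hallucination.
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(build_prompt("When was the Eiffel Tower completed?"))
```

Even with retrieval in place, the model can still misread or ignore the context it was given, which is why the verification habit stays mandatory.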

Related reading

Continue exploring this topic.