Some AI responses carry a higher risk of error than others. Learning to recognize the warning signs helps you know when to verify before acting. Higher-risk responses include specific facts (dates, statistics, names, citations), recent events, medical or legal specifics, and complex causal claims. Lower-risk uses include general explanations of well-established concepts, drafting and editing, brainstorming, and summarizing content you've already read.
Not all answers are created equal
In the AI world, some questions are "safer" than others. Learning to spot the difference is the first step to building your BS detector. Lower-risk uses are ones where you're already the expert, like summarizing a report you just read or brainstorming a birthday party theme. Higher-risk uses are ones where you're asking the AI to be the expert for you.
The red flags of a hallucination
Be extra cautious when a response contradicts what you already know, or when it feels too neat and complete. AI is notorious for confidently inventing names, dates, and statistics.
- Unearned confidence: Real knowledge usually has nuances. If an AI gives you a "perfect" answer to a complicated question, be suspicious.
- The fake citation: AI loves to make up books, URLs, and legal cases. If it gives you a source, go find it yourself.
- Contradicting common sense: If the answer feels "off" or contradicts what you already know to be true, don't ignore that gut feeling. The bot is just predicting text. It doesn't have a logic-checker.
Real knowledge often has rough edges: uncertainty, exceptions, competing views. A flawlessly confident response can be a signal that the model is generating plausible-sounding text rather than drawing on reliable information.