AI Literacy Guide

When not to trust AI

The accountability gap.

There are specific contexts where AI outputs should not be trusted without expert verification, particularly in medical, legal, financial, and safety-critical decisions. AI has an "accountability gap": it can't be held responsible for its mistakes, so the final judgement must always come from a qualified human professional.

The hard no: the high-stakes zone

AI is a great brainstorming partner, but it's an unreliable expert. In some contexts, an error isn't just an annoyance; it's a disaster.

  • Medical: Do not use AI for a diagnosis. It can help you find words to describe your pain to a doctor, but it is not the doctor.
  • Legal: AI can summarize a contract, but it can't reliably assess the legal risks of signing it.
  • Financial: Don't let a bot manage your life savings. It doesn't understand your personal risk tolerance or the market in real time.

Mind the accountability gap

When a doctor, lawyer, or engineer gives you advice, they are operating within a professional framework of accountability. If they're wrong, there are consequences. If an AI is wrong, the consequences sit with you. Responsible use of AI means knowing exactly when to put a human in the loop to take responsibility for the final outcome.
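
If you build with AI rather than just use it, the same rule can be enforced in software. Below is a minimal sketch of a human-in-the-loop gate in Python; the keyword lists and the requires_human_review and escalate_to_expert functions are hypothetical illustrations, not a real API, and a production system would need far more careful routing than keyword matching.

    # A minimal, hypothetical human-in-the-loop gate. The domain keywords
    # and both functions are illustrative, not a real product or library.

    HIGH_STAKES_KEYWORDS = {
        "medical": ["diagnosis", "symptom", "dosage"],
        "legal": ["contract", "liability", "lawsuit"],
        "financial": ["invest", "savings", "portfolio"],
    }

    def requires_human_review(prompt: str) -> bool:
        """Return True if the prompt touches a high-stakes domain."""
        text = prompt.lower()
        return any(
            keyword in text
            for keywords in HIGH_STAKES_KEYWORDS.values()
            for keyword in keywords
        )

    def escalate_to_expert(prompt: str, draft: str) -> str:
        # Placeholder: a real system would queue the AI draft for review
        # by a licensed professional before anything reaches the user.
        return "[Held for expert review] This question needs a qualified human."

    def answer(prompt: str, ai_draft: str) -> str:
        if requires_human_review(prompt):
            return escalate_to_expert(prompt, ai_draft)
        return ai_draft

    print(answer("Is it safe to double my dosage?", "Yes, doubling is fine."))
    # -> the AI draft is withheld and a human is put in the loop

Even a crude gate like this makes the accountability boundary explicit: the software decides only who answers, while a human remains responsible for the answer itself.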
