Artificial Intelligence

Hallucination (AI)

When a generative AI model produces outputs that are factually incorrect, fabricated, or inconsistent with reality, while presenting them with apparent confidence. Hallucinations are an inherent property of how language models generate text — they produce statistically plausible sequences, not verified facts.
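The "statistically plausible, not verified" point can be illustrated with a toy sketch. Everything below — the place names, the probability table, and the `generate` helper — is invented for illustration; a real language model learns its distribution over billions of tokens, but the key property is the same: the sampler picks likely continuations, and nothing in the loop checks them against reality.

```python
import random

# Toy next-token distribution: probabilities reflect how often phrasings
# co-occur in training text, NOT whether the completion is true.
# All names here are fabricated for illustration.
NEXT_TOKEN_PROBS = {
    "The capital of Freedonia is": {
        "Fredville": 0.55,   # plausible-sounding, entirely made up
        "Marxton": 0.30,     # also made up
        "unknown": 0.15,
    },
}

def generate(prompt: str, rng: random.Random) -> str:
    """Sample a continuation by probability alone -- there is no
    fact-checking step anywhere in this loop."""
    dist = NEXT_TOKEN_PROBS[prompt]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
completion = generate("The capital of Freedonia is", rng)
# The sampler confidently emits whichever token is statistically likely;
# it cannot tell that Freedonia does not exist.
print(completion)
```

The fluent, confident delivery comes from the same mechanism as the fabrication: both are just high-probability token sequences.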

Why It Matters

Hallucinations undermine the reliability of AI-generated output: because they are delivered with the same fluency as correct answers, users cannot distinguish them without independent verification. In high-stakes domains such as healthcare, legal advice, and financial reporting, a hallucination presented as fact can cause real-world harm and create significant liability.

Example

A legal AI assistant asked to find relevant case law generates citations to cases that do not exist — complete with fabricated case names, docket numbers, and judicial opinions. A lawyer who submits these to court without verifying them risks sanctions and professional liability.

Think of it like...

AI hallucination is like a very confident tour guide who makes up historical facts when they don't know the answer — the delivery is polished and convincing, but the information is fabricated.

Related Terms