Artificial Intelligence

Automation Bias

The tendency for humans to over-rely on automated systems, accepting AI outputs without sufficient scrutiny even when those outputs are wrong. Counterintuitively, automation bias grows with system accuracy: the more often the AI is right, the less vigilant humans become, and the less likely they are to catch the cases where it is wrong.

Why It Matters

Automation bias is the hidden risk in human-oversight models. Putting a human in the loop provides no safety benefit if that human rubber-stamps every AI output. Governance must therefore design for genuine human engagement, not performative oversight.

Example

Pilots using automated landing systems have been shown to accept incorrect altitude readings from the computer even when visual cues clearly contradicted the instruments — a documented pattern that has contributed to aviation accidents.

Think of it like...

Automation bias is like a student copying their calculator's answer without sanity-checking it: if the calculator says 2+2=5 because of a stuck key, the student writes 5 because "the calculator said so."

Related Terms