Automation Bias
The tendency for humans to over-rely on automated systems, accepting AI outputs without sufficient scrutiny even when those outputs are wrong. Automation bias increases with system accuracy — the more often the AI is right, the less likely humans are to catch the times it's wrong.
Why It Matters
Automation bias is the hidden risk in human-oversight models. Putting a human in the loop provides no safety benefit if that human rubber-stamps every AI output. Governance must design for genuine human engagement, not performative oversight.
Example
Pilots using automated landing systems have been shown to accept incorrect altitude readings from the computer even when visual cues clearly contradicted the instruments — a documented pattern that has contributed to aviation accidents.
Think of it like...
Automation bias is like a student copying their calculator's answer without sanity-checking it — if the calculator says 2+2=5 because of a stuck key, the student writes 5 because 'the calculator said so.'
Related Terms
Human-in-the-Loop (HITL)
A system design pattern where a human reviews and approves every AI output before any action is taken. HITL provides the maximum level of human oversight but constrains the system's speed and scalability to the pace of human review.
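As a sketch, the gating that HITL implies might look like the following; `model` and `human_review` are hypothetical callables standing in for an AI system and a reviewer interface, not part of any real framework:

```python
def hitl_process(items, model, human_review):
    """Human-in-the-loop: every AI output waits for explicit human approval
    before it counts as a decision. Throughput is bounded by the reviewer."""
    decisions = []
    for item in items:
        output = model(item)
        # The system blocks here: nothing is acted on until a human signs off.
        approved = human_review(item, output)
        decisions.append((output, approved))
    return decisions
```

Note that the structure alone does not prevent automation bias: if `human_review` always returns True, the loop degrades into the rubber-stamping described above.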
Human-on-the-Loop (HOTL)
A system design pattern where AI operates autonomously but a human monitors outputs and retains the ability to intervene, override, or shut down the system when needed. HOTL balances automation efficiency with human oversight for systems where reviewing every output isn't feasible.
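By contrast, a HOTL loop might be sketched like this; `model`, `act`, and `monitor` are again hypothetical stand-ins, with `monitor` representing a human (or an alert routed to one) who can halt the system:

```python
def hotl_loop(items, model, act, monitor):
    """Human-on-the-loop: the AI acts immediately on each output; a human
    monitors the stream and can intervene or shut the system down."""
    for item in items:
        output = model(item)
        act(output)          # action is taken without waiting for approval
        if monitor(output):  # human oversight: override / emergency stop
            break
```

The design trade-off is visible in the control flow: in HITL the human gate sits before the action, while in HOTL it sits after, so HOTL preserves throughput at the cost of letting some unreviewed outputs take effect.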
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.