Bias (AI)
Systematic errors in AI system outputs that produce unfair or skewed outcomes. AI bias can originate from training data (historical, representation, measurement, sampling, or aggregation bias), from model design choices, or from the deployment context. Bias is not always obvious and can compound through the AI lifecycle.
Why It Matters
AI bias is among the most common sources of real-world AI harm. It has led to discriminatory hiring tools, racially biased criminal risk scores, and gender-skewed credit limits — each a governance failure that proper testing and monitoring could have caught.
Example
An AI-powered healthcare allocation system was found to assign lower risk scores to Black patients than to equally sick white patients, because it used healthcare spending (a proxy that reflects systemic access disparities) rather than actual illness severity as a training signal.
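A toy simulation makes the proxy problem concrete. All numbers below are invented for illustration (they are not the study's data): two groups have identical illness severity, but one accesses care less often, so a model trained to predict spending systematically understates that group's risk.

```python
# Illustrative proxy-bias sketch: equally sick patients, unequal access to care.
# All values are invented for illustration.

severity = [3.0, 5.0, 7.0]           # true illness severity (same for both groups)
access = {"A": 1.0, "B": 0.6}        # hypothetical: group B accesses care at 60% the rate

def predicted_risk(sev, group):
    # A model trained on spending learns "risk" proportional to spending,
    # which tracks severity scaled by access — not severity itself.
    return sev * 1000 * access[group]

# Equally sick patients in group B receive lower risk scores across the board.
for sev in severity:
    print(sev, predicted_risk(sev, "A"), predicted_risk(sev, "B"))
```

The error is systematic, not random: every group-B score is deflated by the same factor, which is exactly the "crooked measuring tape" pattern described below.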
Think of it like...
AI bias is like a crooked measuring tape — every measurement looks precise, but they're all systematically off in the same direction, and the error compounds the more you build on it.
Related Terms
Fairness (AI)
The principle that AI systems should produce equitable outcomes and not discriminate against individuals or groups based on protected characteristics. Multiple mathematical definitions of fairness exist — demographic parity, equalized odds, individual fairness, and others — and they frequently conflict with each other, making fairness a design choice, not a single metric.
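A small sketch can show why these definitions conflict. The toy data and helper names below are illustrative assumptions: when two groups have different base rates of the true outcome, a classifier can satisfy demographic parity (equal selection rates) while violating equalized odds (equal error rates), so a designer must choose which criterion to prioritize.

```python
# Minimal sketch: demographic parity and equalized odds can conflict.
# Data and function names are illustrative, not a standard API.

def selection_rate(preds):
    # Fraction of the group receiving a positive decision.
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    # Among true negatives, how often the model predicts positive.
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    return sum(negatives) / len(negatives)

# Two groups with different base rates of the true outcome.
preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]   # base rate 0.5
preds_b, labels_b = [1, 1, 0, 0], [1, 0, 0, 0]   # base rate 0.25

# Demographic parity holds: both groups selected at the same rate...
assert selection_rate(preds_a) == selection_rate(preds_b) == 0.5
# ...but equalized odds fails: group B's false positive rate is higher.
print(false_positive_rate(preds_a, labels_a))  # 0.0
print(false_positive_rate(preds_b, labels_b))  # 0.333...
```

Equalizing the error rates here would require selecting the groups at different rates, breaking parity — the trade-off is structural, not a bug in any one model.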
Disparate Impact
A facially neutral policy, practice, or algorithm that disproportionately harms a group based on a protected characteristic — even without discriminatory intent. In AI, disparate impact commonly occurs when models trained on historically biased data reproduce or amplify those patterns in their outputs.
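In practice, disparate impact is often screened with the "four-fifths rule" from US employment guidance: if one group's selection rate is less than 80% of the highest group's rate, the outcome warrants scrutiny. A minimal sketch, with invented hiring numbers:

```python
# Four-fifths rule screening sketch. The 0.8 threshold comes from
# US employment guidance; the counts below are invented for illustration.

def disparate_impact_ratio(selected_b, total_b, selected_a, total_a):
    """Ratio of group B's selection rate to group A's (the higher-rate group)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_b / rate_a

# Toy data: 60 of 100 group-A applicants hired vs 30 of 100 group-B applicants.
ratio = disparate_impact_ratio(30, 100, 60, 100)
print(ratio)        # 0.5
print(ratio < 0.8)  # True -> flags potential disparate impact
```

The ratio is a screening heuristic, not a legal determination; it flags outcomes for investigation regardless of whether any intent to discriminate exists.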
Disparate Treatment
Intentional discrimination based on a protected characteristic. In AI systems, disparate treatment occurs when protected attributes like race, gender, or age are explicitly used as input features for decision-making, or when different rules are applied to different groups by design.