Right to Explanation
The principle that individuals affected by AI-driven decisions should receive a meaningful explanation of how the decision was made, including the logic involved, the significance of the processing, and its envisaged consequences. Grounded in the GDPR (Article 22 and Recital 71) and reinforced by the EU AI Act's transparency requirements.
Why It Matters
People deserve to understand why an algorithm denied their loan, rejected their job application, or flagged them for investigation. The right to explanation is both a legal requirement and a trust-building mechanism.
Example
When a job applicant is rejected by an AI screening tool, the company provides an explanation: 'Your application was assessed on years of relevant experience, technical skills match, and education level. The primary factor in the decision was insufficient match on required cloud architecture experience.'
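An explanation like the one above can be generated mechanically when the model's factor contributions are inspectable. The sketch below is a toy linear scoring model, not any real screening tool; every feature name, weight, and threshold is illustrative.

```python
# Toy sketch: derive a factor-level explanation from a linear scoring model.
# All feature names, weights, and the threshold are hypothetical examples.

def explain_decision(features, weights, threshold):
    """Score an application and report each factor's contribution."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    decision = "accepted" if score >= threshold else "rejected"
    # For a rejection, the weakest contribution is reported as the primary factor.
    primary_factor = min(contributions, key=contributions.get)
    return decision, primary_factor, contributions

# Hypothetical applicant scores (0.0 to 1.0 per factor).
features = {"years_experience": 0.4, "skills_match": 0.3, "cloud_architecture": 0.1}
weights = {"years_experience": 1.0, "skills_match": 1.0, "cloud_architecture": 2.0}

decision, primary, contribs = explain_decision(features, weights, threshold=1.2)
print(decision, primary)  # rejected cloud_architecture
```

Because each factor's contribution is explicit, the system can state both the criteria used and the primary reason for the outcome, which is exactly what the example explanation does; opaque models typically need post-hoc techniques (e.g., feature-attribution methods) to produce the same kind of statement.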
Think of it like...
The right to explanation is like requiring a teacher to explain why a student received a particular grade — 'the computer said so' is not an acceptable answer when the outcome matters.
Related Terms
Automated Decision-Making (ADM)
Decisions made solely by automated means without meaningful human involvement. Under GDPR Article 22, individuals have the right not to be subject to decisions based solely on automated processing — including profiling — that produce legal effects or similarly significant impacts on them.
Explainability
The ability to understand and articulate how an AI model reaches its decisions or predictions. Explainable AI (XAI) makes the decision-making process transparent and comprehensible to humans.
Transparency (AI)
The availability of relevant information about an AI system's design, development, data, operation, and limitations to appropriate stakeholders. Transparency answers the broader question 'what is this system and what happened?' and encompasses documentation, disclosure, and communication practices.