ISO 42005
The ISO/IEC standard providing guidance on conducting AI system impact assessments. It supports organizations in systematically evaluating how AI systems affect individuals, groups, society, and the environment — complementing both ISO 42001's management system and regulatory impact assessment requirements.
Why It Matters
Impact assessments are required by several regulations (e.g. the GDPR and the EU AI Act) and recommended by most governance frameworks (NIST AI RMF, OECD AI Principles). ISO 42005 provides a standardized methodology that can satisfy several of these requirements with a single, consistent process.
Example
A government agency uses ISO 42005's framework to conduct a comprehensive impact assessment of its AI-powered benefits eligibility system, covering data privacy, algorithmic fairness, accessibility, and effects on vulnerable populations in a single structured evaluation.
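The "single structured evaluation" in the example above can be pictured as a simple data model: each assessed dimension yields findings with a severity rating and a planned mitigation, and the assessment can be filtered for issues that block deployment. The following is a minimal illustrative sketch; the dimension names, severity scale, and class names are assumptions chosen for demonstration, not terminology defined by ISO/IEC 42005.

```python
from dataclasses import dataclass, field

# Assumed four-step severity scale for illustration only --
# ISO/IEC 42005 does not prescribe this particular scale.
SEVERITY = ("negligible", "minor", "moderate", "severe")

@dataclass
class ImpactFinding:
    dimension: str      # e.g. "data privacy", "algorithmic fairness"
    description: str
    severity: str       # one of SEVERITY
    mitigation: str

    def __post_init__(self):
        if self.severity not in SEVERITY:
            raise ValueError(f"unknown severity: {self.severity}")

@dataclass
class ImpactAssessment:
    system_name: str
    findings: list = field(default_factory=list)

    def add(self, finding: ImpactFinding) -> None:
        self.findings.append(finding)

    def open_issues(self, threshold: str = "moderate") -> list:
        """Return findings at or above the given severity."""
        floor = SEVERITY.index(threshold)
        return [f for f in self.findings
                if SEVERITY.index(f.severity) >= floor]

# Hypothetical findings mirroring the benefits-eligibility example
assessment = ImpactAssessment("benefits-eligibility-ai")
assessment.add(ImpactFinding(
    "algorithmic fairness",
    "Approval rates differ across demographic groups",
    "severe",
    "Re-train on balanced data; add a recurring fairness audit"))
assessment.add(ImpactFinding(
    "accessibility",
    "Appeal form unusable with screen readers",
    "moderate",
    "Apply WCAG remediation before launch"))
assessment.add(ImpactFinding(
    "data privacy",
    "Logs retain applicant identifiers longer than needed",
    "minor",
    "Shorten retention period; mask identifiers"))

print(len(assessment.open_issues()))  # prints 2
```

The point of the structure is that privacy, fairness, accessibility, and vulnerable-population impacts all live in one record rather than in separate ad hoc reviews, which is the consolidation ISO 42005 aims for.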
Think of it like...
ISO 42005 is like having a standardized environmental impact assessment template — instead of inventing your evaluation method from scratch, you follow a proven structure that regulators recognize.
Related Terms
ISO 42001
The first international standard for an AI Management System (AIMS), published jointly by ISO and IEC. It provides a certifiable framework for organizations to establish, implement, maintain, and continually improve responsible AI governance. It is compatible with other ISO/IEC management system standards such as ISO/IEC 27001.
Algorithmic Impact Assessment (AIA)
A systematic process to evaluate the potential impacts of deploying an algorithmic system on individuals, groups, and society. It identifies risks before deployment and maps out mitigation strategies, serving as both a compliance tool and a design checkpoint.
Fundamental Rights Impact Assessment (FRIA)
An assessment required under the EU AI Act for deployers of high-risk AI systems that evaluates the system's impact on fundamental rights — including non-discrimination, privacy, freedom of expression, and human dignity — before deployment begins.