AI Risk Register
A documented inventory of identified AI risks, their likelihood, severity, mitigation measures, and responsible owners. It serves as a living document that tracks risk across the AI portfolio and informs governance decisions about resource allocation and prioritization.
Why It Matters
You cannot manage risks you haven't catalogued. A risk register turns abstract fears about AI into concrete, trackable items with owners and deadlines — the difference between hoping nothing goes wrong and actively preventing it.
Example
An insurance company maintains an AI risk register listing each deployed model, its risk score (based on impact and likelihood), the bias metrics being monitored, the responsible model owner, and the date of the last audit.
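The entry described above can be sketched as a small record type. This is a minimal illustration, not a standard schema: every field name, and the common impact × likelihood scoring convention, are assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    """One row in an AI risk register. Field names are illustrative."""
    model_name: str
    impact: int                # 1 (negligible) .. 5 (severe) -- assumed scale
    likelihood: int            # 1 (rare) .. 5 (almost certain) -- assumed scale
    bias_metrics: list[str]    # metrics being monitored for this model
    owner: str                 # responsible model owner
    last_audit: date           # date of the most recent audit
    mitigations: list[str] = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        # One common convention: impact x likelihood, giving a 1..25 score.
        return self.impact * self.likelihood


# Hypothetical register entry for a deployed claims-triage model.
entry = RiskEntry(
    model_name="claims-triage-v2",
    impact=4,
    likelihood=2,
    bias_metrics=["demographic parity", "equalized odds"],
    owner="model-risk-team",
    last_audit=date(2024, 11, 5),
)
print(entry.risk_score)  # 8
```

A real register would add fields such as review cadence and mitigation status, and would live in a shared system rather than code, but the core idea is the same: each risk becomes a concrete, owned, scoreable record.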
Think of it like...
An AI risk register is like a ship captain's hazard log — every known reef, current, and weather pattern is charted so the crew can navigate safely rather than relying on luck.
Related Terms
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
AI Audit
An independent evaluation of an AI system's compliance, performance, fairness, and governance practices. Audits can be internal (conducted by the organization's own team) or external (by independent third parties), and may be required by regulation for high-risk systems.
AI Incident
An event where an AI system causes or nearly causes harm, produces unintended outputs, or fails to perform as expected in ways that affect individuals, organizations, or the public. AI incidents require a documented response and root cause analysis, and may trigger regulatory reporting obligations.