Deepfake
AI-generated synthetic media — video, audio, or images — that convincingly depicts real people saying or doing things they never did. Deepfakes use generative AI techniques to create media that even trained observers find increasingly difficult to distinguish from authentic recordings.
Why It Matters
Deepfakes threaten trust in digital media, enable fraud and impersonation, and can be weaponized for disinformation. The EU AI Act requires transparency labeling for AI-generated content, and governance programs must address deepfake risks in both creation and detection.
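The transparency labeling the EU AI Act calls for is, at minimum, a machine-readable disclosure attached to generated media. As a minimal sketch — the field names and function below are illustrative inventions, not taken from the Act or from any provenance standard such as C2PA, which define their own schemas:

```python
import json
from datetime import datetime, timezone

def make_disclosure_label(generator_name: str, content_type: str) -> str:
    """Build a minimal machine-readable disclosure record for a piece of
    AI-generated media. All field names are illustrative, not drawn from
    any regulation or provenance standard."""
    label = {
        "ai_generated": True,            # the core transparency claim
        "content_type": content_type,    # e.g. "video", "audio", "image"
        "generator": generator_name,     # tool or model that produced it
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)

print(make_disclosure_label("example-video-model", "video"))
```

In practice such a record would be embedded in the media file's metadata or cryptographically bound to it, so that downstream platforms can detect and surface the disclosure.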
Example
A deepfake of a company executive on a live video call was used in a fraud scheme that tricked a finance employee into transferring $25 million. The imagery was convincing enough to fool the employee in real time, highlighting both the technical sophistication of deepfakes and their real-world financial risk.
Think of it like...
Deepfakes are like digital ventriloquism — they put words in real people's mouths and actions on real people's faces, creating a world where seeing is no longer believing.
Related Terms
Generative AI
AI systems that can create new content — text, images, music, code, video — rather than just analyzing or classifying existing data. These models learn patterns from training data and generate novel outputs that resemble the original data.
Transparency (AI)
The availability of relevant information about an AI system's design, development, data, operation, and limitations to appropriate stakeholders. Transparency answers the broader question 'what is this system and what happened?' and encompasses documentation, disclosure, and communication practices.
EU AI Act
The European Union's comprehensive regulatory framework for artificial intelligence, establishing rules based on risk levels. It categorizes AI systems from minimal to unacceptable risk with corresponding compliance requirements.
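The Generative AI entry above describes models that learn patterns from training data and then generate novel outputs resembling it. A toy character-level Markov chain shows the same learn-then-sample loop at miniature scale — this is a pedagogical sketch, not how deepfake models actually work (those use deep neural networks such as GANs or diffusion models):

```python
import random
from collections import defaultdict

def train_bigram_model(text: str) -> dict:
    """Record which character follows which in the training text: a toy
    stand-in for the pattern-learning that large generative models do
    at vastly greater scale."""
    model = defaultdict(list)
    for current, nxt in zip(text, text[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, seed: str, length: int, rng: random.Random) -> str:
    """Sample new text one character at a time from the learned transitions."""
    out = [seed]
    current = seed
    for _ in range(length):
        followers = model.get(current)
        if not followers:          # dead end: no observed successor
            break
        current = rng.choice(followers)
        out.append(current)
    return "".join(out)

corpus = "seeing is no longer believing when synthetic media is this convincing"
model = train_bigram_model(corpus)
print(generate(model, "s", 40, random.Random(0)))
```

The output is novel (the exact string never appears in the corpus) yet statistically resembles the training text — the same property that, at far greater fidelity, makes deepfakes hard to distinguish from authentic media.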