EDPB backs global privacy statement on AI-generated imagery and harm to identifiable people
The EDPB has signed a joint Global Privacy Assembly statement warning that AI-generated imagery and video can create serious privacy, dignity, and safety harms when real people are depicted without consent.
Summary
On 23 February 2026, the European Data Protection Board announced that its chair had signed a joint Global Privacy Assembly statement on AI-generated imagery and the protection of privacy. Backed by 61 authorities worldwide, the statement warns that AI systems generating realistic images and videos of identifiable people without their knowledge or consent can cause serious privacy, dignity, and safety harms.
What happened
On 23 February 2026, the European Data Protection Board said its chair had signed a joint Global Privacy Assembly statement on AI-generated imagery and the protection of privacy. According to the EDPB, the statement reflects the position of 61 authorities worldwide and addresses the growing use of AI systems that generate realistic images and videos depicting identifiable people without their knowledge or consent.
Who is affected
- platforms and tools enabling realistic AI image or video generation
- individuals whose likeness may be used without consent
- children and other vulnerable groups exposed to bullying, abuse, or exploitation through synthetic media
Why it matters
This matters because privacy regulators are treating synthetic imagery as a concrete rights-and-safety problem rather than a speculative future issue. The EDPB statement explicitly points to non-consensual intimate imagery, defamatory depictions, and other harmful content involving real individuals. It also highlights risks to children and other vulnerable groups. That puts generative imagery more firmly inside mainstream privacy and enforcement attention, not just AI ethics commentary.
Assessment
Key follow-on points to watch include:
- whether national regulators issue more specific guidance or begin enforcement activity around AI-generated intimate or defamatory content
- whether platforms adopt stronger controls for likeness protection, complaint handling, and harmful synthetic media detection
- whether future privacy statements expand from high-level principles into concrete compliance expectations
Recommended actions
- check whether internal policies, rights handling, or governance workflows would withstand regulator scrutiny
- review where AI deployment or generated content workflows create new exposure or oversight gaps
- monitor follow-on developments, including new regulatory guidance or enforcement activity around AI-generated intimate or defamatory content, stronger platform controls for likeness protection and synthetic media detection, and any shift from high-level principles toward concrete compliance expectations