EU Bans AI Systems Creating Sexualised Deepfakes

The European Union has agreed to ban AI systems that generate sexualised deepfakes, according to reporting from AFP, RTE, France24 and RFI. Lawmakers framed the prohibition to cover so-called "nudifier" systems that create or manipulate sexually explicit or intimate images of identifiable real people without consent, per France24 and RFI. The European Parliament vote passed with 569 in favour and 45 against, RFI reports. Negotiators also agreed to delay parts of the EU's AI Act implementation, originally due in August 2026, pushing deadlines to December 2027 and August 2028 for different classes of systems, according to RTE, France24 and DW. Editorial analysis: This is a notable regulatory tightening on illicit image-generation use cases and a consequential test of enforcement mechanisms for AI content controls in the EU.
What happened
The European Union agreed on a provisional ban on AI systems that generate sexualised deepfakes, according to reporting by AFP, RTE, France24 and RFI. Lawmakers described the targeted systems as so-called "nudifier" tools that "use AI to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person" without consent, per France24 and RFI. The European Parliament vote passed with 569 in favour and 45 against, RFI reports. The ban is being incorporated into amendments to the EU's Artificial Intelligence Act, which was originally adopted in 2024, RTE reports.
The agreement also includes a postponement of certain obligations for high-risk AI systems: rules that had been due to enter into force in August 2026 will now be delayed to December 2027 for stand-alone systems and to August 2028 for AI embedded in other products, according to RTE, France24 and DW. DW additionally reports that a mandatory watermarking requirement for AI-generated content will apply from December 2, 2027 under the provisional package.
Reporting links the ban to public outrage after non-consensual explicit images were created using the chatbot Grok; RTE and RFI note that the Grok incidents triggered investigations and parliamentary debates under the Digital Services Act and the AI Act. RTE, citing AFP, quotes Irish Independent MEP Michael McNamara: "Today the EU has drawn a red line. AI must never be used to humiliate, exploit or endanger people. For the first time, EU legislation explicitly bans nudifier applications."
Editorial analysis - technical context
Industry-pattern observations: Regulating specific high-abuse use cases, such as sexualised deepfakes, is a common approach for lawmakers seeking fast, enforceable limits while broader AI rules are negotiated. Targeted prohibitions reduce ambiguity for platforms and content moderators, but they also raise technical enforcement questions about detection accuracy, watermarking robustness, and coverage of models that can be repurposed for illicit outputs. Observers note that watermarking and model access for audits are recurring technical levers in recent regulatory proposals across jurisdictions.
Context and significance
Editorial analysis: For AI practitioners, the EU move formalises a legal boundary on a well-known harmful application, increasing compliance obligations for platform operators and model providers that serve EU users. The provisional delay of high-risk implementation deadlines, reported by RTE and DW, creates more time for businesses to adapt but concentrates attention on enforcement design for the AI Office and national authorities tasked with oversight. Reporting also highlights recent tensions between regulators and frontier-model developers: RTE and France24 cite meetings or ongoing contacts with Anthropic over the Mythos model and the EU's intent to seek model access for inspection once enforcement powers are active.
What to watch
Editorial analysis: Observers should follow three indicators:
- the final text after trilogue negotiations between the Parliament and member states, per France24
- the operational rules the EU AI Office issues for watermarking, access to models, and definitions of effective safety measures, as mentioned by RTE and DW
- enforcement actions and guidance under the Digital Services Act investigations into platforms such as X, which RTE and Europarl referenced in the lead-up to the vote

These elements will determine how practicable compliance is for model builders, hosting providers, and social platforms.
Scoring Rationale
This is a significant regulatory development that establishes an explicit legal ban on sexualised deepfakes in the EU and alters AI Act timelines. The decision materially affects platform moderation, model deployment and compliance workstreams for practitioners, but it is not a paradigm-shifting technology release.