EU Examines Regulating OpenAI Under Digital Services Act
The European Commission is assessing whether OpenAI's ChatGPT meets the Digital Services Act (DSA) threshold for designation as a very large online search engine, after OpenAI published user figures above the 45 million average-monthly-recipient threshold. OpenAI reported 120.4 million average monthly active recipients in the EU for the six months to September 2025. Commission spokesman Thomas Regnier said Commission services are assessing the information and that Large Language Models could come into scope on a case-by-case basis. If designated, ChatGPT would face stricter DSA obligations, including enhanced transparency, risk mitigation, independent auditing, and operational accountability. The review signals intensified regulatory scrutiny of generative AI in Europe and may set operational and compliance precedents for other model providers.
What happened
The European Commission is formally assessing whether OpenAI's ChatGPT should be designated under the Digital Services Act after OpenAI published user figures above the 45 million DSA threshold. OpenAI disclosed approximately 120.4 million average monthly active recipients in the EU over the six months to end-September 2025. Commission spokesman Thomas Regnier said the Commission services are "currently assessing this information" and that Large Language Models could fall into scope on a case-by-case basis. German outlet Handelsblatt reported that the company would be classified as a very large online search engine, a designation Reuters later referenced in corrected copy.
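The designation arithmetic itself is simple: the DSA's threshold of 45 million average monthly active recipients corresponds to roughly 10% of the EU population, and a service's disclosed figures are compared directly against it. A minimal sketch using the numbers from this article (the threshold value is from the DSA; the 10% rationale is background context, not part of the check):

```python
# DSA designation threshold: 45 million average monthly active
# recipients in the EU (roughly 10% of the EU population).
DSA_THRESHOLD = 45_000_000

# OpenAI's disclosed figure for the six months to end-September 2025.
reported_recipients = 120_400_000

exceeds = reported_recipients >= DSA_THRESHOLD
ratio = reported_recipients / DSA_THRESHOLD
print(f"Exceeds threshold: {exceeds} ({ratio:.1f}x the 45M cutoff)")
# → Exceeds threshold: True (2.7x the 45M cutoff)
```

At 2.7 times the cutoff, the user-count question is not in doubt; the open legal question is whether ChatGPT is the kind of service the threshold applies to.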
Technical details
Designation under the DSA (for very large online platforms or very large online search engines) triggers a set of escalated compliance obligations focused on systemic risk and transparency. Practitioners should expect requirements around:
- independent risk assessments and mitigation measures for systemic harms
- increased transparency reporting and algorithmic accountability, including documentation of ranking and recommendation logic
- external audits and oversight access to platform data for regulators and vetted researchers
- stronger content-moderation processes and crisis protocols
These obligations target operational governance rather than model architecture, but will have direct operational consequences: more comprehensive logging, stronger provenance controls, documented datasets and fine-tuning pipelines, and formalized red-teaming and incident response workflows.
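In practice, "more comprehensive logging" and "stronger provenance controls" tend to mean structured, tamper-evident audit records that tie each served request to a model version, a dataset manifest, and any moderation action. A minimal sketch of such a record; every field name here is illustrative, not a schema the DSA or OpenAI prescribes:

```python
import datetime
import hashlib
import json

def audit_record(request_id, model_version, dataset_hash, moderation_action):
    """Build a structured audit-log entry of the kind DSA-style oversight
    may require. All field names are hypothetical, not a prescribed schema."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "request_id": request_id,
        "model_version": model_version,          # provenance: which weights served this request
        "dataset_hash": dataset_hash,            # provenance: hash of the fine-tuning data manifest
        "moderation_action": moderation_action,  # e.g. "none", "flagged", "blocked"
    }
    # Tamper-evidence: digest the record so auditors can verify integrity later.
    record["record_digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

entry = audit_record("req-001", "model-2025-09", "sha256:abc123", "none")
print(json.dumps(entry, indent=2))
```

The design choice worth noting is that the digest covers the record itself, so an external auditor can detect after-the-fact edits without needing access to the serving infrastructure.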
Context and significance
This review is the most concrete sign yet that Brussels intends to apply existing digital-platform rules to generative AI products. The DSA was built to govern large intermediaries; extending it to ChatGPT signals regulators view high-reach LLM deployments as comparable systemic intermediaries. For platform operators and ML teams, the practical impact will land on deployment, monitoring, and documentation practices more than on model weights. For smaller providers, the precedent matters because it clarifies how user-count disclosures and service definitions map to regulatory obligations.
What to watch
The Commission's designation decision and its interpretation of "search" versus "interactive generative service" will set a legal and operational template. Expect follow-on scrutiny of other major LLM providers, and rapid adjustments in compliance tooling, data governance, and transparency APIs as companies prepare for DSA-style oversight.
Scoring Rationale
This is a notable regulatory development with direct operational implications for practitioners: it can change compliance, deployment, and transparency requirements for major LLM services. It is not yet final, but the potential designation is material and likely to shape vendor and in-house AI governance.