Florida Investigates OpenAI Over ChatGPT Safety Risks

What happened: Florida Attorney General James Uthmeier launched a formal investigation into OpenAI and its chatbot ChatGPT, citing alleged ties to criminal activity, risks to minors, and national-security concerns. Uthmeier said the tool has been “linked to criminal behavior,” pointed to reported use in the April 2025 Florida State University shooting, and warned that OpenAI’s data and technologies could be exploited by foreign adversaries. He said subpoenas are forthcoming.
Technical details: The probe focuses on the safety and misuse vectors associated with ChatGPT as deployed in consumer-facing chat products and integrations. Key claims that practitioners should track include:
- Allegations that ChatGPT interactions were used to facilitate or assist violent criminal acts, with court documents reportedly showing more than 200 messages exchanged with the accused FSU shooter.
- Concerns about ChatGPT producing content tied to child sexual abuse material and guidance that may encourage self-harm among minors.
- National-security questions about whether data access, model training artifacts, or telemetry could be exposed to or exploited by state actors, specifically the Chinese Communist Party.
OpenAI’s public response emphasizes ongoing safety work and cooperation with the investigation. The company points to broad adoption — citing 900 million weekly users in recent statements — and says it builds systems to identify intent and reduce harmful outputs. Expect subpoenas to request internal safety testing, moderation logs, training-data provenance, prompt-response records for flagged sessions, and red-teaming outcomes.
Context and significance: This is a state-level regulatory escalation that intersects criminal investigations, child-safety enforcement, and national-security posture. It follows earlier federal scrutiny over child safety and arrives as OpenAI weighs a potential IPO, timing that raises the probe's commercial and compliance stakes. For practitioners, the probe signals higher legal and operational expectations for:
- Documentation of safety evaluations, content-moderation pipelines, and incident-response timelines.
- Data governance and access controls, especially for international data flows and third-party infrastructure providers.
- Retention policies for user interactions, and the legal exposure that retention creates when chat logs are subpoenaed.
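To make the retention point concrete, here is a minimal, purely illustrative sketch of how a team might structure moderation audit records with an explicit retention window and a legal-hold override; all names, fields, and the 90-day window are hypothetical assumptions, not anything OpenAI or Florida has specified.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; real policies vary by jurisdiction and contract.
RETENTION_DAYS = 90

@dataclass
class ModerationRecord:
    """One audit-log entry for a flagged chat session (illustrative schema)."""
    session_id: str
    flagged_at: datetime
    category: str   # e.g. "self-harm", "violence"
    action: str     # e.g. "blocked", "escalated", "allowed-with-warning"

    def retention_deadline(self) -> datetime:
        # Records become purge-eligible after the retention window elapses.
        return self.flagged_at + timedelta(days=RETENTION_DAYS)

    def purge_eligible(self, now: datetime, legal_hold: bool = False) -> bool:
        # A legal hold (e.g. a pending subpoena) suspends deletion entirely.
        return (not legal_hold) and now >= self.retention_deadline()

rec = ModerationRecord("sess-123",
                       datetime(2025, 1, 1, tzinfo=timezone.utc),
                       "violence", "escalated")
print(rec.purge_eligible(datetime(2025, 6, 1, tzinfo=timezone.utc)))  # past window: True
print(rec.purge_eligible(datetime(2025, 6, 1, tzinfo=timezone.utc),
                         legal_hold=True))  # hold blocks purge: False
```

The design choice worth noting is that legal hold is checked before the time window: once logs are subpoenaed, routine deletion schedules must stop, which is exactly the exposure-versus-retention tension described above.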
The action also feeds broader policy momentum: states are testing their regulatory reach over foundation models, prompting vendors to harden deployment guardrails, produce evidence of harms analyses, and clarify model-use contracts and permitted use cases.
What to watch: Whether subpoenas demand internal telemetry and training-data artifacts, and how quickly OpenAI can produce defensible evidence of safety testing and mitigation. Also watch for follow-on state probes or federal-level escalation, and any legislative responses in Florida addressing AI harms to minors.
Sources: theverge.com, cbsnews.com, thehill.com, cnbc.com, nbcnews.com
Scoring Rationale
A state attorney general probing OpenAI is a notable regulatory escalation with practical consequences for safety engineering, data governance, and corporate compliance — especially ahead of a potential IPO. It's impactful for practitioners but not yet an industry-defining legal precedent.