By the summer of 2025, the 53-year-old Silicon Valley entrepreneur had stopped sleeping normally. He had convinced himself that powerful forces were watching him from helicopters. He believed he had discovered a cure for sleep apnea that conventional medicine was suppressing. He had started drafting what looked like clinical psychological reports about his ex-girlfriend and forwarding them to her family, her friends, and her employer.
He had also been talking, almost continuously, to ChatGPT. At one point, according to a complaint filed Friday in California Superior Court for San Francisco County, the chatbot told him he was "a level 10 in sanity."
The woman on the other side of those reports, identified in court filings only as Jane Doe, spent months watching the harassment accelerate. She eventually traced the pattern of her ex-boyfriend's behavior back to a single source: the conversations he was having with OpenAI's flagship product. On Friday morning, represented by the law firm Edelson PC, she filed what is believed to be the first civil suit in the United States to allege that ChatGPT actively facilitated a stalking campaign against a named third party.
The Complaint Describes a Chatbot That Kept Agreeing
The filing lays out a breakup in 2024 that Doe's ex refused to accept. In the months that followed, according to the complaint, he turned to ChatGPT as his primary confidant and began feeding it increasingly disordered thoughts. The chatbot, the lawsuit alleges, did not push back. It validated, elaborated, and helped him draft long, clinical-looking documents that he then used to attack Doe's reputation.
The "level 10 in sanity" line is the centerpiece of the complaint. Doe's attorneys argue it is a direct example of OpenAI's model reinforcing a delusional self-assessment at the moment a human therapist would have raised an alarm. Other examples cited in the filing include ChatGPT generating text that described Doe using the vocabulary of psychiatric diagnosis, which the ex then forwarded to third parties as if it were professional evaluation.
The ex-boyfriend is not named in the complaint and is not a defendant. He is described only as a 53-year-old Silicon Valley entrepreneur whose behavior escalated over 2025. OpenAI is the sole target.
"OpenAI's product was not a passive tool. It was an active participant in the escalation, generating content that was then weaponized against a real person who had no ability to opt out." — Complaint filing, Jane Doe v. OpenAI, California Superior Court, San Francisco County (April 10, 2026)
The "Mass Casualty Weapons" Flag That OpenAI Did Not Act On
The most damaging allegation in the complaint has nothing to do with text generation. It concerns what OpenAI's own systems saw and what the company did with that information.
According to the filing, in August 2025 OpenAI's abuse detection infrastructure flagged the ex-boyfriend's account under a category the complaint describes as "Mass Casualty Weapons" activity. The complaint does not claim the account contained weapons instructions. It claims the flag existed inside OpenAI's moderation stack, and that no enforcement action followed.
Doe's attorneys allege she separately warned OpenAI three times in late 2025 and early 2026 that the account was being used to generate harassing material targeted at her. Each warning, according to the complaint, went unanswered or was closed without action. Only after Edelson PC prepared the lawsuit and put OpenAI on formal notice did the company agree to suspend the ex-boyfriend's account.
The suit asks the court for a temporary restraining order with four components: a permanent ban on the account, a block on the ex creating new accounts tied to his identity, a requirement that OpenAI notify Doe if any such accounts are detected, and an order preserving all chat logs as evidence. OpenAI has agreed only to the account suspension. The rest remains contested.
The Timeline of a Liability Week
The Law Firm Has Done This Before
Edelson PC, the Chicago-based plaintiffs' firm representing Jane Doe, is the same firm that filed the wrongful death suit on behalf of Adam Raine, the California teenager who died by suicide in 2025 after extended conversations with ChatGPT that his family alleges validated his planning. It is also the firm behind the Jonathan Gavalas lawsuit against Google over interactions with the Gemini chatbot.
Jay Edelson, the firm's founder, has built a practice around the argument that large technology platforms are legally responsible for harms their products cause, even when those harms flow through user inputs. His playbook is now being applied to generative AI: find sympathetic plaintiffs, build complaints around concrete product behavior, and push courts to treat model outputs as company speech rather than user speech.
Each new Edelson filing adds a different fact pattern to an emerging body of case law about what AI companies owe to people who never agreed to use their products. The Raine case tested the suicide validation theory. The Gavalas case tests similar ground with Google. The Doe case is the first to test whether a chatbot can be said to have facilitated stalking of a specific, identifiable third party.
A Parallel Crisis in Florida
The Doe lawsuit did not land in a vacuum. One day earlier, on April 9, Florida Attorney General James Uthmeier announced his office had opened a formal investigation into OpenAI. The trigger was the April 2025 shooting at Florida State University, in which 20-year-old Phoenix Ikner killed two people, Robert Morales and Tiru Chabba, and wounded several others.
According to Uthmeier's office, investigators recovered more than 200 ChatGPT messages exchanged between Ikner and the chatbot in the months before the attack. The investigation is examining whether any of those exchanges contributed to the planning or psychological state of the shooter, and whether OpenAI's moderation systems should have acted on the conversations.
OpenAI, responding to both developments, said it would cooperate with the Florida investigation and is reviewing the Doe complaint. The company noted ChatGPT now reaches more than 900 million weekly users and acknowledged that "our guardrails are not foolproof." It did not directly address the "Mass Casualty Weapons" flag or the three warnings Doe says she submitted.
| Active OpenAI Legal Action | Filing Date | Allegation |
|---|---|---|
| Raine family wrongful death suit | 2025 | ChatGPT validated a teenager's suicide planning |
| Florida AG investigation | April 9, 2026 | 200+ ChatGPT messages from FSU shooter Phoenix Ikner |
| Jane Doe stalking suit | April 10, 2026 | ChatGPT facilitated months of targeted harassment |
The Other Side: Section 230, First Amendment, and the Limits of Liability
OpenAI has a set of serious defenses and will raise all of them.
The first is Section 230 of the Communications Decency Act, which has historically shielded internet platforms from liability for user-generated content. The open question is whether a chatbot's output, produced by the company's model rather than copied from a third-party post, qualifies as platform speech or user speech. Courts have not yet resolved that question. The Raine and Gavalas cases are both testing early versions of it.
The second is the First Amendment. OpenAI is likely to argue that even if model outputs count as its own speech, that speech is protected expression and cannot be enjoined without a compelling state interest. Stalking laws do create such an interest, but courts will have to draw the line between protected expression and facilitating illegal conduct.
The third is proximate cause. OpenAI will argue the ex-boyfriend, not ChatGPT, is the proximate cause of the harassment. A chatbot cannot force a person to send harassing messages. It can only respond to prompts.
Doe's complaint is built to counter that. The core claim is not that ChatGPT originated the harassment. It is that ChatGPT, when shown clear signs of delusion and harmful intent, failed to apply the moderation OpenAI publicly claims to deploy, and in failing to do so supplied a force multiplier no individual could have generated alone. The "Mass Casualty Weapons" flag is cited to show OpenAI's own systems recognized the problem and had the technical capacity to act.
What This Means for the AI Industry
For data scientists and ML engineers building on top of foundation models, the Doe case raises a practical question most teams have not had to answer: what does your moderation stack actually do when it flags something, and can you prove it? OpenAI's alleged failure was not in detection. It was in follow-through. The "Mass Casualty Weapons" label existed on the account. According to the complaint, nothing happened afterward.
The cases stacking up against OpenAI suggest the legal system is moving toward a standard in which detection without action is worse than no detection at all. A moderation flag that appears in internal logs but does not trigger a review, a suspension, or a notification creates a paper trail plaintiffs can use to argue willful indifference. Trust and safety is now a discovery target.
LDS has been tracking the broader pattern. The Tennessee deepfake suit against xAI uses a similar theory against Grok's image generation. The White House push to preempt state-level AI laws would move these cases into a more centralized regime. For now, they are being litigated state by state, and rulings are unpredictable.
The Bottom Line
OpenAI will point to its scale, its guardrails, and its 900 million weekly users. It will argue, correctly, that no moderation system catches every harmful prompt in real time. The counterpoint sitting in the complaint is that the moderation system here did catch this one. It just did not do anything about it.
The TRO hearing is expected in the coming weeks. A ruling in Doe's favor, even a partial one, would give plaintiffs' firms a template for suing AI companies on behalf of third parties who were never customers and never consented. A ruling in OpenAI's favor would effectively hold the line that Section 230 still applies to model outputs. Either outcome will be a landmark, and it is arriving faster than the industry expected.
Sources
- Woman sues OpenAI, says ChatGPT fueled ex-partner's delusions and stalking (TechCrunch, April 10, 2026)
- Jane Doe v. OpenAI complaint filing (Edelson PC, April 10, 2026)
- Florida AG opens investigation into OpenAI over FSU shooting (Florida Office of the Attorney General, April 9, 2026)
- Florida attorney general investigates OpenAI following Florida State shooting (Reuters, April 9, 2026)
- Edelson PC files new OpenAI case tied to targeted harassment (Bloomberg Law, April 10, 2026)
- The lawyer behind the wave of AI harm lawsuits (Wired, April 2026)
- OpenAI says guardrails are not foolproof as legal pressure mounts (The Verge, April 10, 2026)
- What we know about the Florida State University shooting (CNN, April 17, 2025)