
Tennessee Teens Sue xAI After Grok Generated 4.4 Million Deepfakes

LDS Team
Let's Data Science
Three teenage girls and their families have filed a federal class-action lawsuit against Elon Musk's xAI, alleging the Grok AI tool was used to produce child sexual abuse material from their real school photos — part of a wave of 4.4 million images Grok generated in nine days.

Jane Doe 1 found out through a stranger. An anonymous account messaged her on Instagram: someone had taken her school yearbook photo, her homecoming pictures, images she'd posted to social media — and used an AI tool to make them sexually explicit. The videos and images were circulating on Discord and Telegram. She was a teenager. She contacted law enforcement.

That call led to an arrest in December 2025. And it eventually led to a federal courthouse in California, where on March 16, 2026, three Tennessee teenage girls — identified only as Jane Doe 1, Jane Doe 2, and Jane Doe 3 to protect their identities — filed a class-action lawsuit against xAI, Elon Musk's AI company. The suit names xAI as the entity responsible for building and releasing the technology that made it all possible.

A Perpetrator, a Tool, and a School Full of Victims

The man arrested in late December 2025 had a "close and friendly relationship" with at least one of the girls, according to the complaint. He had quietly assembled a collection of photographs: yearbook portraits, homecoming dance photos, casual social media posts. Ordinary images. Then he fed them into an AI image generation tool that ran on xAI's Grok algorithm — an unnamed third-party app that licensed Grok's image generation API.

The instructions were straightforward and brutal. Generate nude or sexually explicit versions of the people in these photos. The tool complied.

When investigators searched the man's devices after his December arrest, they found child sexual abuse material created from images of at least 18 underage girls. Jane Doe 2 and Jane Doe 3 only learned they were among the victims when police showed up to tell their families.

"These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool," said Vanessa Baehr-Jones, the lead attorney representing the plaintiffs at Baehr-Jones Law. Her firm filed the suit alongside Lieff Cabraser Heimann & Bernstein, one of the largest plaintiff-side litigation firms in the country.

How Grok's Safety Systems Failed

Grok's image generation capabilities launched in October 2025. The feature included a setting called "Spicy Mode," marketed as less censored and more edgy, that made generating explicit content as easy as clicking a button. The crisis escalated sharply on December 31, when Musk asked Grok to generate a bikini image of himself and replied "Perfect" when it complied, normalizing the trend and triggering a massive wave of imitative requests.

The problem went beyond one bad actor exploiting an obvious gap. Researchers at the Center for Countering Digital Hate analyzed what Grok produced during just 11 days in late December 2025 and early January 2026. They estimated approximately 3 million sexualized images were generated in that window, including roughly 23,000 that appeared to depict children. A separate analysis cited in the New York Times found Grok generated 4.4 million images over nine days total, with 1.8 million classified as sexualized depictions of women.

By the CCDH's count, that's roughly one sexualized image appearing to depict a child every 41 seconds, around the clock.
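The arithmetic is simple enough to check from the CCDH figures alone:

```python
# Rate implied by the CCDH estimate: ~23,000 apparent child images
# generated over an 11-day window.
SECONDS_PER_DAY = 86_400
window_seconds = 11 * SECONDS_PER_DAY        # 950,400 seconds
seconds_per_image = window_seconds / 23_000  # ≈ 41.3 seconds

print(f"one image every {seconds_per_image:.0f} seconds")  # one image every 41 seconds
```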

The lawsuit alleges that the scale was not accidental. The complaint charges that xAI "deliberately designed Grok to create sexually explicit content" and that it configured the model's default behavior to assume "good intent" when users included words like "teenage" or "girl" in prompts. Every other major AI company — Google, OpenAI, Anthropic — uses industry-standard hash-matching tools to detect and block known child sexual abuse material before generation. The suit alleges xAI adopted none of these safeguards.
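For readers unfamiliar with how that safeguard works, here is a minimal sketch of hash-based screening. It uses an open-source perceptual hash (pHash, via the `imagehash` library) as a stand-in for proprietary systems like PhotoDNA, and a hypothetical `load_blocklist()` helper; it illustrates the technique, not any company's actual pipeline.

```python
# Minimal sketch of hash-based screening against known abuse imagery.
# Production systems use proprietary perceptual hashes (e.g., PhotoDNA)
# matched against clearinghouse databases; pHash and the loader below
# are illustrative stand-ins.
from PIL import Image
import imagehash


def load_blocklist() -> set:
    # Hypothetical helper: in production, a vetted feed of hashes of
    # known material. Empty here so the sketch runs standalone.
    return set()


KNOWN_BAD_HASHES = load_blocklist()
MAX_HAMMING_DISTANCE = 8  # tolerance for crops and re-encodes


def should_block(path: str) -> bool:
    """True if an image matches a known hash closely enough that it
    should be stopped before reaching the generation model."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_HAMMING_DISTANCE
               for known in KNOWN_BAD_HASHES)
```

Matching by Hamming distance rather than exact equality is what lets the check survive crops, resizes, and re-encodes of a known image.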

Grok also launched without digital watermarks that would identify AI-generated images. When the man on Discord distributed Jane Doe 1's fabricated images, nothing in the file identified them as artificial. The complaint calls this a deliberate choice.
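For scale, even the weakest form of provenance labeling, a plain metadata tag, takes only a few lines. The key names below are made up for illustration; real systems such as C2PA Content Credentials or Google's SynthID instead embed signed manifests or invisible watermarks, precisely because plain metadata is stripped when a platform re-encodes a file.

```python
# Weakest-possible provenance label: a plain-text tag in PNG metadata.
# Key names are hypothetical. Robust systems (C2PA, SynthID) use signed
# manifests or invisible watermarks because tags like these vanish the
# moment a platform re-encodes the image.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(in_path: str, out_path: str) -> None:
    info = PngInfo()
    info.add_text("ai_generated", "true")        # hypothetical key
    info.add_text("generator", "example-model")  # hypothetical value
    Image.open(in_path).save(out_path, pnginfo=info)
```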

There's a structural layer to the failure as well. Rather than exposing Grok's image generation only through its own platforms, xAI licensed the underlying Grok API to third-party app developers, many of them operating outside the United States. The lawsuit describes this as a calculated move: "In this way, xAI could attempt to outsource the liability of their incredibly dangerous tool," the complaint reads.

Wired documented that Grok's main website — separate from the X-embedded version — could produce even more graphic content than the app on the social media platform. One Wired analysis counted more than 15,000 sexualized AI-generated images in a two-hour window on December 31 alone.

The California Attorney General moved first among U.S. regulators. On January 16, 2026, the AG issued a formal cease-and-desist letter to xAI under California's AB 621, a deepfake pornography law that took effect January 1. The letter gave xAI until January 20 to confirm compliance and preserve evidence. xAI had already restricted Grok's image generation to paying subscribers on January 9, a week before the letter landed, but critics called the response insufficient. Lawmakers called it "insulting."

The March 16 complaint brings 13 causes of action. The core claims are negligence and product liability: that xAI built a product it knew could produce child sexual abuse material and released it without adequate protections. The suit also invokes the Trafficking Victims Protection Act, the federal anti-trafficking statute, and Masha's Law, a federal statute that lets victims depicted in child sexual abuse material seek at least $150,000 in damages per violation.

The potential damages are staggering. If the case achieves class certification and covers even a fraction of the thousands of minors believed to have been victimized by AI-generated content made with Grok's tools, the liability could dwarf anything an AI company has faced before.
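A back-of-envelope illustration, using only figures already cited here, and setting aside the open questions of class size and whether each generated image counts as a separate violation:

```python
# Back-of-envelope only: courts would decide what counts as a violation
# and who is in the class. Inputs are the figures cited above.
STATUTORY_FLOOR = 150_000       # dollars per violation under Masha's Law
APPARENT_CHILD_IMAGES = 23_000  # CCDH estimate for one 11-day window

print(f"${STATUTORY_FLOOR * APPARENT_CHILD_IMAGES:,}")  # $3,450,000,000
```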

The Tennessee case is the third lawsuit filed against xAI over Grok deepfakes. A January 15 suit came from Ashley St. Clair, a political influencer and mother of one of Musk's children, who alleged Grok had generated explicit images of her, including manipulations based on photos from when she was 14. A second class action was filed on January 23 by an anonymous South Carolina woman; Grok had turned a clothed photo she posted to X into a revealing image without her consent, and the result stayed visible for three days before removal. Each case targets a different dimension of the same failure.

The Tennessee complaint zeroes in on the most serious category: content involving children, generated from identifiable real photographs, distributed by a real perpetrator, to real harm.

xAI's Defense and the Section 230 Problem

xAI has not commented publicly on the March lawsuit. During the January deepfake crisis, Musk posted that he was "not aware of any naked underage images generated by Grok. Literally zero." He also announced that Grok would stop generating images of girls in bikinis. Neither statement addressed the specific allegations in the Tennessee complaint.

xAI has signaled it will seek dismissal of the cases under Section 230 of the Communications Decency Act, the 1996 law that generally shields platforms from liability for content their users generate. The argument is that xAI is a platform, not a publisher, and that a user, the arrested perpetrator, is the one who made the requests.

Legal experts say that defense may be difficult to sustain here.

Riana Pfefferkorn, a policy fellow at Stanford's Institute for Human-Centered AI, described the Tennessee lawsuit as "suing xAI on hard mode" because the images were technically generated through a third-party app, not Grok directly. But she noted the Section 230 shield was designed for passive hosting, not for AI systems that actively generate content in response to specific prompts. Courts are increasingly skeptical that the old platform immunity doctrine applies when the platform itself is doing the creating.

California's AB 621 — the same statute the AG invoked in January — was written specifically to address this gap. It defines the generation of non-consensual intimate deepfakes as a legal harm in itself, regardless of who typed the prompt. If California federal courts accept that framing, Section 230 may not apply at all.

Imran Ahmed, CEO of the Center for Countering Digital Hate, was more direct. "Ensuring that your platform isn't an industrial-scale machine for sexual abuse would seem like a no-brainer," he said. Ahmed's organization produced the research documenting the 23,000 apparent child images. His broader critique: "We have no mechanisms for holding accountable" companies that are "incredibly resistant" to taking responsibility.

The case will also force a question about where AI product liability ends. Unlike a social media post that a user writes, an AI deepfake is generated by software that a company designed, trained, and deployed. The act of generation is the product, not a downstream user's speech.

The Tennessee complaint arrived as xAI faces simultaneous investigations across the European Union, the United Kingdom, France, Ireland, India, Malaysia, Indonesia, and Australia. The UK's tech secretary called the Grok deepfake crisis "absolutely appalling." On January 23, 35 U.S. state attorneys general signed a letter expressing "deep concern" to xAI — the same day the South Carolina class action was filed.

The DEFIANCE Act, which creates federal civil liability for non-consensual intimate image abuse, passed the Senate and awaits a House vote; its momentum grew as the Grok crisis unfolded, in part because of it. The Take It Down Act, signed by President Trump in May 2025, already criminalizes the distribution of such content; its notice-and-removal requirements, which give platforms 48 hours to remove flagged material, take effect in May 2026.

What has not happened: federal prosecutors have not pursued criminal charges under existing statutes. The FTC and DOJ have remained silent. And the Pentagon announced in January that it would integrate Grok into certain systems, even as the deepfake investigations were ongoing.

The Bottom Line

Three teenage girls in Tennessee found out that an AI system had turned their school photos into child sexual abuse material. They didn't ask for accountability from a social media algorithm that failed to flag a post. They're suing the company that built and sold the tool that generated the images.

Vanessa Baehr-Jones, their attorney, put it plainly: "xAI chose to profit off the sexual predation of real people, including children."

Whether that argument survives the Section 230 challenge will shape how AI image generators are held responsible for what they produce. This case — and the others like it — will answer a question the legal system hasn't had to face before: when a machine generates child sexual abuse material, who built the machine?

