For three years, OpenAI, Anthropic, and Google competed on everything: talent, compute, benchmarks, pricing, government contracts, and public narrative. They cooperated on almost nothing.
That changed on April 6. Bloomberg reported that the three companies had begun sharing information about adversarial distillation attacks through the Frontier Model Forum, an industry nonprofit they co-founded with Microsoft in 2023. The sharing is modeled on how cybersecurity firms exchange threat intelligence: when one company detects an attack pattern, it flags the pattern for the others.
The catalyst was not abstract. It was specific, documented, and expensive.
For context: In March, Let's Data Science reported that Chinese AI labs had been caught stealing Claude's intelligence through 24,000 fake accounts. The coalition announced this week is the industry's collective response to that kind of attack.
Anthropic Named the Attackers. OpenAI and Google Confirmed.
In February 2026, Anthropic published a report identifying three Chinese AI laboratories (DeepSeek, Moonshot AI, and MiniMax) that had created approximately 24,000 fraudulent accounts and run more than 16 million exchanges with Claude. The purpose was model distillation: using a powerful model's outputs as training data for a cheaper, smaller model.
The scale was staggering. MiniMax alone accounted for 13 million exchanges, roughly 81% of the total. Moonshot AI generated 3.4 million more. DeepSeek contributed a smaller volume but employed the most technically sophisticated methods, including using Claude to build censorship capabilities for the Chinese government.
Eleven days before Anthropic's report, OpenAI had already sent a formal memo to the U.S. House Select Committee on the Chinese Communist Party. OpenAI warned that DeepSeek had discovered "increasingly sophisticated methods" to extract data from its models, accusing the company of attempting to "free-ride on the capabilities developed by OpenAI and other U.S. frontier labs."
Google's disclosure followed a similar pattern. Google Threat Intelligence Group and Google DeepMind identified and disrupted model extraction activity involving more than 100,000 prompts targeting Gemini's reasoning capabilities. Google framed the threat as global rather than China-specific, noting attacks from researchers and companies worldwide.
Three separate companies. Three separate investigations. The same attackers.
The Playbook Comes from Cybersecurity
The collaboration mirrors a model that the cybersecurity industry built decades ago: Information Sharing and Analysis Centers (ISACs), where companies in the same sector share indicators of compromise without revealing proprietary business information.
Through the Frontier Model Forum, OpenAI, Anthropic, and Google are developing technologies to identify abnormal traffic patterns that signal automated cloning attempts. The detection signals include the following (a rough sketch of how they might be combined appears after the list):
- High-volume organized queries across multiple accounts
- Repeated prompt patterns designed to extract reasoning chains
- Bot-like behavior disguised as normal API usage through proxy networks
- Synchronized timing and shared payment methods across supposedly independent accounts
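None of the Forum's detection code is public, but as a minimal sketch, the signals above might be combined into a single risk score along these lines. Every field name, weight, and threshold here is an invented assumption for illustration:

```python
# Hypothetical sketch: field names, weights, and thresholds are invented
# and do not reflect any company's actual detection system.
from collections import Counter
from dataclasses import dataclass


@dataclass
class AccountActivity:
    account_id: str
    daily_queries: int           # API calls in a 24-hour window
    prompts: list[str]           # normalized prompt templates seen from this account
    uses_proxy_network: bool     # traffic arrives via known proxy/VPN ranges
    payment_fingerprint: str     # hashed payment identifier
    request_minutes: list[int]   # minute-of-hour timestamp of each request


def repetition_ratio(prompts: list[str]) -> float:
    """Share of traffic reusing the single most common prompt template."""
    if not prompts:
        return 0.0
    return Counter(prompts).most_common(1)[0][1] / len(prompts)


def timing_concentration(minutes: list[int]) -> float:
    """Bot-like regularity: fraction of requests in the three busiest minute slots."""
    if not minutes:
        return 0.0
    return sum(n for _, n in Counter(minutes).most_common(3)) / len(minutes)


def distillation_score(acct: AccountActivity, payment_counts: Counter) -> float:
    """Combine the signals from the list above into a 0-1 risk score."""
    return (
        0.25 * min(acct.daily_queries / 10_000, 1.0)                  # high volume
        + 0.25 * repetition_ratio(acct.prompts)                       # repeated patterns
        + 0.15 * float(acct.uses_proxy_network)                       # proxy disguise
        + 0.15 * timing_concentration(acct.request_minutes)           # synchronized timing
        + 0.20 * float(payment_counts[acct.payment_fingerprint] > 1)  # shared payment
    )
```

Across a fleet of accounts, `payment_counts` would be `Counter(a.payment_fingerprint for a in accounts)`, which is what links supposedly independent accounts through a shared payment method.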
The response toolkit is equally specific: account cancellation, IP blocking, rate-limiting adjustments, and output format changes designed to degrade the quality of scraped training data. Anthropic has already taken the most aggressive step, banning all Chinese-controlled companies from accessing Claude entirely.
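A graduated response could then be keyed to that risk score. Again, the tiers and thresholds below are invented for illustration; only the four actions themselves come from the reporting:

```python
def respond(score: float, account_id: str) -> list[str]:
    """Map a risk score to the response toolkit; thresholds are invented."""
    actions: list[str] = []
    if score >= 0.8:                  # near-certain automated cloning
        actions += [f"cancel account {account_id}", "block associated IP ranges"]
    elif score >= 0.6:                # suspicious but not conclusive
        actions.append("tighten per-account rate limits")
    if score >= 0.5:                  # degrade the value of anything scraped
        actions.append("serve reduced-detail output format")
    return actions


print(respond(0.85, "acct-1138"))
# ['cancel account acct-1138', 'block associated IP ranges',
#  'serve reduced-detail output format']
```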
The Trump administration has signaled support and proposed formalizing the effort through a dedicated information-sharing center. But the companies face an awkward legal question: how much competitive information can fierce rivals legally share without triggering antitrust scrutiny?
The Economics That Make Distillation Almost Impossible to Stop
Distillation itself is a standard machine learning technique. A large "teacher" model generates outputs that a smaller "student" model learns to replicate, producing similar capabilities at a fraction of the cost. Companies routinely distill their own models internally.
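As a minimal sketch of that core technique, here is classic teacher-student distillation in PyTorch. The toy layer sizes, temperature, and random data are illustrative only; a real run would use full language models and vastly more data:

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: a larger "teacher" and a smaller, cheaper "student".
teacher = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 8))
student = torch.nn.Linear(16, 8)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution (value is illustrative)

x = torch.randn(32, 16)  # one batch of inputs
with torch.no_grad():
    soft_targets = F.softmax(teacher(x) / T, dim=-1)

# The student is trained to match the teacher's output distribution.
loss = F.kl_div(F.log_softmax(student(x) / T, dim=-1),
                soft_targets, reduction="batchmean") * T * T
opt.zero_grad()
loss.backward()
opt.step()
```

In the adversarial variant described next, the attacker never sees the teacher's logits, only sampled text from an API, so the student is instead fine-tuned on harvested prompt-response pairs; the objective is the same in spirit.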
The problem is what the industry now calls adversarial distillation: external labs hammering U.S. models with automated prompts to clone their core behavior, including the results of expensive safety work, without investing in original training. The student model gets the teacher's capabilities without the teacher's development costs, which run into the tens of billions of dollars for frontier models.
U.S. officials estimate that unauthorized distillation costs Silicon Valley billions of dollars in lost profit each year. The economics make the problem nearly impossible to solve through enforcement alone: Chinese models built on distilled outputs can be offered at roughly one-fourteenth the price of their U.S. equivalents, according to reporting by Implicator.ai.
The practice first drew serious scrutiny in January 2025, when DeepSeek released its R1 reasoning model. R1 matched the performance of leading American systems at a fraction of the cost, briefly erasing hundreds of billions of dollars from U.S. AI company stock valuations. Microsoft and OpenAI launched immediate investigations into whether DeepSeek had improperly used their models' outputs to build it.
Stanford's Alpaca project had demonstrated in 2023 that distilling a commercial model through its API was technically feasible. DeepSeek proved it could be done at scale, cheaply, and with devastating commercial impact.
The Skeptics See a Losing Battle
Not everyone believes the coalition can work.
The fundamental challenge is economic, not technical. If distilled Chinese models sell for roughly one-fourteenth the price of their American counterparts, no amount of traffic monitoring or account banning will eliminate the incentive. New proxy networks and fraudulent accounts are cheap to create. Building frontier models from scratch is not.
Legal analysts have pointed out that U.S. law does not clearly protect AI model outputs under copyright, which makes the "theft" framing legally ambiguous. The coalition's critics argue that the companies' real concern is not theft in any traditional sense but price competition from labs that skipped the most expensive part of model development.
There is also the question of precedent. All three companies use distillation internally to create smaller, cheaper versions of their own models. The distinction between legitimate internal distillation and adversarial external distillation is conceptually clear but legally untested.
The antitrust dimension adds further complication. Sharing attack data is defensible. Sharing anything that could be construed as coordinating pricing, access policies, or competitive strategy would invite regulatory scrutiny. The companies have asked the U.S. government for explicit regulatory clarity on what information they can legally exchange, and so far, that clarity has not arrived.
The Bottom Line
Three companies that built their businesses on outcompeting each other are now sharing intelligence because a common threat proved more dangerous than their rivalry. The Frontier Model Forum collaboration marks the first time OpenAI, Anthropic, and Google have actively pooled defensive resources against an external adversary.
The question is whether the defense can keep pace with the economics. Frontier models cost tens of billions to train. Distilling them costs a fraction of that. Blocking one proxy network does not stop the next one, and the financial incentive to copy rather than build from scratch only grows as U.S. models become more capable.
As OpenAI told Congress: DeepSeek's next model "should be understood in the context of its ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs." The three companies that spent years competing to build those capabilities are now, for the first time, competing together to protect them.
Sources
- OpenAI, Anthropic, Google Unite to Combat Model Copying in China (Bloomberg, April 6, 2026)
- OpenAI, Anthropic, and Google team up against unauthorized Chinese model copying (The Decoder, April 7, 2026)
- OpenAI, Anthropic, Google team up to stop Chinese AI distillation threat (Business Today, April 7, 2026)
- OpenAI, Google, Anthropic Team Up to Block Chinese Scraping (BanklessTimes, April 7, 2026)
- OpenAI, Anthropic, Google Share Attack Data on Distillation (Implicator.ai, April 6, 2026)
- U.S. AI Giants Unite Against Chinese Model Replication, Citing Billions in Losses (BigGo Finance, April 7, 2026)
- Chinese AI Labs Caught Stealing Claude's Intelligence Through 24,000 Fake Accounts (Let's Data Science, March 1, 2026)
- The LiteLLM Backdoor: How a Security Scanner Handed Attackers 95 Million Monthly Downloads (Let's Data Science, March 26, 2026)