
The White House Calls Anthropic a Supply Chain Risk. It Also Won't Let Anthropic Share Mythos With Anyone Else.

LDS Team
Let's Data Science
Anthropic asked the Trump administration to approve adding 70 more companies to its Project Glasswing partner list. The White House said no, citing security concerns and a worry that the National Security Agency would lose its share of Mythos compute. Anthropic is still officially blacklisted as a Pentagon supply chain risk.

On Wednesday night, April 29, an administration official told the Wall Street Journal that the White House had rejected Anthropic's plan to widen access to its most powerful unreleased model. The proposal would have lifted the number of organizations cleared to use Claude Mythos Preview from roughly 50 to about 120, adding 70 additional companies to the Project Glasswing partner list.

The administration's stated reasons were security and compute. Mythos can autonomously discover and exploit zero-day vulnerabilities in major operating systems and browsers, and the worry was that 70 more clearances would multiply the chance of a leak. The second concern was more practical: Anthropic does not have enough computing capacity to serve 120 organizations at once without degrading the share already extended to the federal government, including the National Security Agency.

The same week, in the same town, the same White House continued to maintain that Anthropic itself is a supply chain risk to the Department of Defense. The label was applied in February after CEO Dario Amodei refused to allow Claude to be used for autonomous weapons targeting or the mass surveillance of US citizens. Defense contractors were then told to sever their commercial relationships with the company. Three months later, Anthropic remains blacklisted while the White House fights to keep its most powerful model on a leash held by the same administration.

That is the contradiction at the center of the Mythos story. The Trump administration believes Anthropic is too dangerous to do business with and too dangerous to lose control of. Both at the same time.

How the 50-Partner List Was Drawn

Mythos was not announced like a normal frontier model. There was no API. No price page. No public benchmark leaderboard. Anthropic published a research-style preview on April 8, 2026, said the model was "too dangerous for general release," and named the 50 organizations that would have access through an initiative called Project Glasswing. The published list included Amazon Web Services, Apple, Google, Microsoft, Nvidia, Palo Alto Networks, CrowdStrike, Broadcom, Cisco, JPMorgan Chase, and the Linux Foundation. Roughly 40 other companies and a handful of open source projects rounded it out.

Anthropic's own justification, restated in the Project Glasswing announcement, was that defenders needed a head start. If Mythos-class models could find zero-days at scale, the company argued, then the people responsible for patching critical infrastructure had to get there before the people exploiting it. The 50-partner list represented Anthropic's bet on which organizations could turn discovery into mitigation fast enough to matter.

The bet started cracking almost immediately. On April 22, a Discord group reportedly guessed the URL where Mythos was being served to early partners and got in for several hours before the access path was revoked. Anthropic confirmed the unauthorized access and said it was investigating. The proposal to expand the partner list to 120 was already in motion when the leak happened. Adding 70 more clearances on top of a model that had already escaped its container was a hard sell.

What the Administration Said

The Wall Street Journal published the White House position on April 30. Bloomberg, Reuters, France 24, and Sherwood News followed with the same two-part objection.

The first part was security. Officials told the paper they did not believe Anthropic's proposed access controls were strong enough to prevent leakage at 120 organizations. The Discord incident gave them a precedent. Inside the administration, the model was already being treated as something closer to a controlled cryptography product than a software service.

The second part was compute. According to the WSJ, an administration source said Mythos would consume too many of Anthropic's available GPUs to serve 120 partners and the federal government's existing workloads at the same time. The NSA was specifically named. Internally, sources said Mythos was being tested against Microsoft software for vulnerability discovery. A widening of the partner pool would mean less capacity for that work.

Anthropic told the WSJ it was "having productive conversations with the government about rolling out access to Mythos to more companies and organizations." The company has not publicly named which 70 organizations it wanted to add or what use cases motivated the expansion request.

The Sacks Tweet and the "Boy Who Cried Wolf" Problem

The administration's most quoted public position came not from the West Wing but from White House AI advisor David Sacks, who wrote on X earlier in April: "A growing number of people are wondering if Anthropic is the AI industry's 'boy who cried wolf.' If Mythos-related threats don't materialize, the company will have a serious credibility problem."

That framing matters because it is the public face of an internal split. Department of Defense CTO Emil Michael told CNBC on May 3 that Mythos was "a separate national security moment," language that signaled the model is treated inside the Pentagon as more important than the company that built it. The administration is willing to use Anthropic's technology while continuing to treat the company itself as a national security liability.

The split is not theoretical. The same week the supply chain risk designation was reaffirmed, the White House began drafting an executive action that would let federal agencies onboard Anthropic models, including Mythos, regardless of the Pentagon's blacklist. The administration is preparing to override its own designation with an executive workaround rather than reverse the underlying decision.

What Practitioners Should Take From This

For data scientists and ML engineers, the immediate question is when, if ever, Mythos becomes available outside the partner list. The honest answer is not soon. Anthropic has not committed to a public release date and has explicitly said the model is being held back because it lowers the floor on offensive cybersecurity capability. The April 30 White House position freezes the partner list at 50 for the foreseeable future.

| Mythos access today | Status |
| --- | --- |
| Project Glasswing launch partners | Active (~50 organizations) |
| Federal government, NSA, Pentagon | Active (separate channel) |
| Proposed 70-company expansion | Blocked by White House on April 30 |
| Public API or weights | No timeline |
| Open-weight equivalent | None announced |

The second-order effect is that the kind of work Mythos does, autonomous vulnerability discovery, is going to happen anyway. Anthropic's research has not been kept secret, and the capability has been demonstrated. Other labs following DeepSeek's open-weight playbook are working on offensive-security tuning, and the same security firms that flagged the LiteLLM backdoor are preparing for the moment a Mythos-equivalent capability is available without an NDA.

For now, defenders inside the partner list have a window. Defenders outside it do not.

The Other Side of the Argument

Not every researcher is sold on the gating strategy. Cryptographer Bruce Schneier, writing on his blog on April 13, called Anthropic's announcement "very much a PR play" and pointed to a finding from the security firm Aisle that older, cheaper public models were able to replicate the vulnerabilities Mythos identified. Schneier's argument is that gating buys meaningful but bounded safety: the defender's edge is that "finding for the purposes of fixing is easier for an AI than finding plus exploiting," and that edge shrinks as more powerful models become available to the public.

A counter-argument from inside Anthropic, repeated in interviews and the original Glasswing post, is that even a six-to-twelve-month head start on patching critical infrastructure changes outcomes at scale. The company points to the example of MS17-010, the Windows SMB vulnerability that became EternalBlue, and argues that earlier discovery in the hands of defenders would have prevented WannaCry. Whether Mythos changes that math depends on whether the partner list is broad enough to actually patch the critical software stack. Today, it is not.

The most sympathetic read of the White House position is that the administration is simply applying the same logic Anthropic itself used to justify the 50-partner cap, just one rung tighter. If Mythos is too dangerous for general release, then 50 organizations may already be too many. Adding 70 more does not pass the same test the company applied to itself.

The least sympathetic read is the one Sherwood News flagged: the administration objects to expanding access while simultaneously preparing to expand its own access. The same model that is too dangerous for 120 commercial partners is being deemed safe enough for arbitrary federal agencies through a parallel executive order.

The Shape of the Conflict

Anthropic believes Mythos belongs in the hands of more defenders. The White House believes it belongs in fewer hands than Anthropic proposes, but more than zero, and specifically including its own. Both sides are arguing about how widely to distribute a model neither side wants the public to use.

The Bottom Line

The Mythos saga has produced a strange equilibrium. Anthropic built the most powerful offensive-security AI ever publicly described, then declined to release it. The Trump administration declared Anthropic a national security risk, then declined to let anyone else have the model. Both parties are deeply uncomfortable with the other, and yet the model continues to run, the federal government continues to use it, and the leak in April keeps surfacing in every conversation about why 70 more clearances would be a bad idea.

Sacks may be right that Anthropic has a credibility problem coming if Mythos-related threats do not materialize publicly. Anthropic may be right that the threats will not materialize publicly precisely because the partner list, however imperfect, is doing its job. The data point that decides the argument has not happened yet.

What has happened is that the White House has demonstrated, in a single week, that it can simultaneously brand a frontier AI lab a security threat and lobby that same lab to share its most powerful technology more narrowly. As Sacks himself wrote: "If Mythos-related threats don't materialize, the company will have a serious credibility problem." The same line, with one word changed, could be turned around. If the threats do materialize, and the partner list does not include the people who needed it, the credibility problem will belong to the administration that froze it at 50.
