
Elon Musk Called Anthropic "Evil." Yesterday He Handed Them 220,000 GPUs.

LDS Team · Let's Data Science
Anthropic now controls every chip inside SpaceX's Colossus 1 facility in Memphis: 300 megawatts and 220,000 Nvidia accelerators that until this week trained Grok. Claude Code's rate limits doubled overnight. Opus API users got a 1,500% boost in input-token throughput. The man who once said Anthropic "hates Western Civilization" is now its landlord.

On Wednesday morning, Anthropic's Chief Product Officer Ami Vora walked onto a stage at the Code with Claude conference in San Francisco and confirmed a deal that should not exist.

"We're partnering with SpaceX to use all the capacity of their Colossus One data center," Vora told the room. The Colossus One facility in Memphis, Tennessee, is the supercomputer Elon Musk built to train Grok. It holds more than 220,000 Nvidia GPUs across H100, H200, and the newest GB200 accelerators. Within a month, Anthropic will have access to all of it: roughly 300 megawatts of compute capacity that until very recently powered the model designed to compete with Claude.

A few hours later, Musk posted his own version of events on X. He had spent time with Anthropic's senior team, he wrote, asking how the company keeps Claude aligned with human interests. He came away convinced. "No one set off my evil detector," he said.

This is the same Musk who has called Anthropic "Misanthropic," described it as "evil," and accused it of hating Western civilization. The same Musk whose ongoing $134 billion lawsuit against OpenAI is currently in front of a San Francisco jury. The same Musk whose xAI subsidiary builds the model that Claude is supposed to crush.

Anthropic's Compute Crisis Was Worse Than Anyone Knew

The deal solves a problem Anthropic CEO Dario Amodei finally admitted on stage Wednesday: the company has been catastrophically short of GPUs.

Internal projections at the start of 2026 assumed Anthropic would grow about 10x year-over-year. Actual growth in the first quarter came in at 80x. "That is the reason we have had difficulties with compute," Amodei told the conference audience.

The same growth surge is what justified Anthropic's $50 billion round at a $900 billion valuation that closed two weeks ago, and what makes the SpaceX deal not just opportunistic but necessary.

Eighty times the demand against ten times the planned capacity is the kind of mismatch that turns into product degradation fast. Pro and Max subscribers had been hitting throttles. Claude Code users on Pro plans were running into peak-hour limit reductions that capped their five-hour windows. API customers running Opus agents at scale were rate-limited into uselessness. Developers had spent April publicly complaining about timeouts and dropped sessions on long-running tasks.

The Colossus deal closes the gap in one stroke.

The Practitioner Numbers

The compute does not flow to enterprise customers in the abstract. It flows directly into rate-limit increases that data scientists, ML engineers, and AI-assisted developers will feel inside a week.

| Change | Before | After |
| --- | --- | --- |
| Claude Code 5-hour limits (Pro, Max, Team, seat-based Enterprise) | Baseline | 2× baseline |
| Peak-hours limit reduction on Claude Code (Pro, Max) | Active | Removed |
| Opus API Tier 1 max input tokens per minute | 30,000 | 500,000 |
| Opus API Tier 1 max output tokens per minute | 8,000 | 80,000 |

The Tier 1 API jump is the headline number. A 1,500% increase in input throughput and a 900% increase in output make Opus genuinely usable for production agents that ingest large codebases, full RAG contexts, or long document chains. Anthropic framed the change as a direct fix for the rate-limit pain that paying customers had been raising for weeks.
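Even at the new ceilings, the limits are still enforced per minute, so production agents benefit from pacing their own spend rather than waiting for 429s. Below is a minimal client-side sketch of a sliding-window tokens-per-minute budget, assuming only the announced Tier 1 numbers; the `TokenBudget` class and its method names are illustrative and not part of any official Anthropic SDK.

```python
from collections import deque
from typing import Optional
import time


class TokenBudget:
    """Client-side sliding-window tokens-per-minute budget (illustrative sketch).

    The limits plugged in below are the Tier 1 Opus numbers from the
    announcement (500k input / 80k output TPM); the class itself is a
    hypothetical helper, not an official SDK feature.
    """

    def __init__(self, tokens_per_minute: int):
        self.limit = tokens_per_minute
        self.events = deque()  # (timestamp, tokens) spends within the last 60s

    def _drain(self, now: float) -> None:
        # Drop spend records older than the 60-second window.
        while self.events and now - self.events[0][0] >= 60:
            self.events.popleft()

    def used(self, now: Optional[float] = None) -> int:
        # Tokens already spent inside the current 60-second window.
        now = time.monotonic() if now is None else now
        self._drain(now)
        return sum(tokens for _, tokens in self.events)

    def try_spend(self, tokens: int, now: Optional[float] = None) -> bool:
        """Record the spend if it fits in the window; return whether it did."""
        now = time.monotonic() if now is None else now
        if self.used(now) + tokens > self.limit:
            return False
        self.events.append((now, tokens))
        return True


# New Tier 1 Opus limits from the announcement.
input_budget = TokenBudget(500_000)
output_budget = TokenBudget(80_000)
```

An agent loop would call `try_spend` with its estimated prompt size before each request and back off briefly when it returns `False`, smoothing bursts instead of slamming into the server-side limit.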

Pro and Max subscribers, who pay between $20 and $200 a month, had been the loudest complainers and got the loudest fix. The five-hour budget doubled. The peak-hour throttle that compressed it further during US working hours is gone.

For practitioners, the math has changed. A single Claude Code Pro user who could previously complete maybe two long agent runs per session can now run four. Teams that were gating Opus access to senior engineers because of API quotas can open the floodgates.

How Colossus Got to Memphis, and Why Musk Sold It

Colossus 1 is the data center that gave xAI its competitive opening. Built in 122 days in 2024, it hit 100,000 Nvidia H100 GPUs in record time and was Musk's standing rebuttal to anyone who said xAI couldn't catch up to OpenAI and Anthropic on raw compute. By early 2026, expansions had pushed the facility past 220,000 accelerators with Blackwell-generation GB200s coming online.

It also became Musk's biggest political liability. xAI and its subsidiary MZX Tech, LLC, installed dozens of natural-gas-burning turbines on the Memphis site to power the buildout. Local activists in predominantly Black neighborhoods around the facility documented air-quality deterioration, and Memphis residents organized protests over a year of expansion permits granted on what they argued were inadequate environmental reviews.

The financial backdrop matters more for understanding why the deal happened. SpaceX is preparing for an IPO that some analysts have valued north of $1 trillion. xAI's separate fundraising stalled. Earlier this year, Musk dissolved xAI as an independent entity and merged its assets into SpaceX, putting Colossus on the SpaceX balance sheet. A SpaceX whose pre-IPO story includes "we rent compute to whoever pays" is a much cleaner equity narrative than one whose compute exists only to feed a chatbot Musk personally controls.

Renting Colossus to Anthropic is a capacity contract covering more than 300 megawatts. It books revenue, neutralizes the political optics of building gas turbines for one customer's chatbot, and gives Musk a deal he can hold up to investors as proof that SpaceX is now an AI infrastructure company.

The Orbital Data Center Surprise

The Wednesday announcement contained a second piece that most coverage skipped past. Anthropic and SpaceX said they had also "expressed interest" in jointly developing multiple gigawatts of orbital AI compute capacity.

That phrase has a specific meaning. SpaceX has filed plans with the FCC for a constellation of up to one million satellites, each carrying compute payloads, designed to add roughly 100 gigawatts of orbital AI capacity per year once Starship reaches reliable reusability. The pitch is that solar panels in orbit get continuous direct sunlight, vacuum cooling is free, and the per-kilogram launch cost on Starship makes putting compute in space cheaper than building gigawatt terrestrial sites in the western US.

Anthropic is not committing to launch satellites. The company's involvement is "exploratory." But it now has a seat at the table on what would be the largest infrastructure project in computing history, and its name is on the partnership. As LDS reported in April when SpaceX filed for its IPO, the orbital compute pitch was always the SpaceX endgame. The Anthropic announcement is the first substantive AI customer signing onto it.

The Other Side: Why Some at Anthropic Are Uneasy

Not everyone at Anthropic is celebrating.

The political optics are hard to ignore. Anthropic is taking compute from a Musk-owned facility at the same time Musk's ongoing OpenAI lawsuit is attempting to dismantle the structure of Anthropic's largest competitor, and while Musk has spent the last 18 months publicly attacking Anthropic by name. Several reporters covering the announcement noted that the deal would have been politically unimaginable a year ago.

Critics outside the company point to a different problem: concentration risk. Anthropic has stacked compute commitments with three different infrastructure providers in the past month, summarized below.

| Partner | Anthropic compute commitment | Announced |
| --- | --- | --- |
| Amazon Web Services | $100 billion in AWS Trainium capacity | April 21, 2026 |
| Google Cloud | $200 billion across five years | May 5, 2026 |
| SpaceX (Colossus 1) | Full capacity, ~300 megawatts | May 6, 2026 |

The company is simultaneously dependent on three of the four largest tech infrastructure providers in the world, plus a SpaceX-owned facility that runs on natural gas. The same company that markets itself on AI safety is now structurally entangled with the compute strategies of every major hyperscaler and one billionaire-controlled launch monopoly.

For practitioners, the upside is straightforward: more Claude availability, fewer throttles, faster agents. The longer-term question is what kind of bargaining power Anthropic retains when its compute is split across infrastructure operators that all also compete with it on models.

The Bottom Line

For anyone building on Claude, the deal is unambiguous good news. Rate limits doubled, peak-hour throttles disappeared, and Opus API tiers got the kind of capacity boost that turns a model from "interesting for prototypes" into "viable for production." Within a week, ML engineers will feel it in tooling that no longer hits ceilings, and in agent runs that no longer drop sessions mid-task.

For Musk and Amodei, the deal is something stranger. Two men who have spent two years publicly questioning each other's motives have just bound themselves into the largest single-customer compute contract in history. The contract is what holds up Anthropic's product story for the next 12 months and what cleans up SpaceX's pre-IPO equity narrative. Each one's success now depends on the other not blowing up the relationship in the courts or on X.

Musk's "evil detector" line was a joke, but the substance underneath it is not. Two of the most public adversaries in the AI industry just signed a deal that says, in effect: we will fight everywhere except where the GPUs are.

"We're partnering with SpaceX to use all the capacity of their Colossus One data center." — Ami Vora, Chief Product Officer at Anthropic (Code with Claude SF, May 6, 2026)

Whether that peace lasts the length of the contract is the question that should keep Anthropic's procurement team awake. As one of the company's own researchers put it on background: when your compute partner has spent two years calling your model evil, you should keep reading the fine print.
