
Anthropic Just Promised Google $200 Billion. That's Five Times What Google Is Paying Anthropic.

LDS Team · Let's Data Science · 8 min read
On Tuesday, The Information reported that the AI lab had signed a five-year, $200 billion compute commitment with Google Cloud. That single deal now represents more than 40 percent of Google's disclosed cloud revenue backlog, and it lands on the heels of Google's commitment to invest up to $40 billion in Anthropic.

The Information's report posted Tuesday afternoon. The number that broke the news was $200 billion: the amount Anthropic has now committed to spend with Google Cloud over the next five years, beginning in 2027. Reuters confirmed the contours but said it could not independently verify the figure. Engadget, US News, Sherwood News, the Financial Times, and CNBC followed within hours.

Anthropic and Google declined to comment on the record. Yet by Wednesday morning Alphabet's stock had moved on the report, and every major outlet was running its own version of the story, because the math was already public elsewhere and it lined up.

What that math says: a private AI startup, founded five years ago, has just locked in a compute spend on the order of $200 billion over five years. And the company being paid is also the company that, just weeks earlier, agreed to invest up to $40 billion in it. The two deals together describe one of the strangest corporate flywheels Silicon Valley has ever produced.

How the Number Got So Big

For most of 2025, Anthropic's compute strategy looked like a hedge. The company was buying AWS Trainium time, NVIDIA GPUs through CoreWeave, and Google TPUs in roughly equal measure, with no single supplier holding more than a third of the inference budget. That posture changed in April.

On April 6, 2026, Anthropic announced what it called an expansion of its Google Cloud relationship. The headline numbers then were access to up to one million TPUs and well over one gigawatt of capacity coming online during 2026. Both companies confirmed the deal was worth tens of billions of dollars. Neither put a firm figure on it. A follow-on commitment with Google and chip designer Broadcom added multi-gigawatt TPU capacity to come online from 2027.

By April 27, Google was telling its own investors a different version of the same story. In a circular structure first reported by Reuters and CNBC, Google would invest up to $40 billion in Anthropic, contingent on Anthropic spending against committed TPU capacity. The investment paid for the chips. The chips ran the model. The model paid for the investment.

Then on Tuesday, May 5, The Information published the size of the compute side of that loop: $200 billion over five years, beginning next year. That is roughly five times the size of Google's investment in Anthropic.

What Google Just Locked In

Anthropic's commitment now represents more than 40 percent of Google's disclosed cloud revenue backlog, the multi-year contracted-but-not-yet-recognized revenue line that hyperscalers report to investors as a forward-demand signal.
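As a quick sanity check, here is what that share implies about the total size of Google's backlog (a back-of-envelope sketch; the 40 percent figure is the only input, and the resulting ceiling is arithmetic, not a disclosed number):

```python
# Back-of-envelope: what "more than 40 percent of backlog" implies
# about the total size of Google's disclosed cloud backlog.
anthropic_commitment = 200e9   # $200B over five years (The Information)
share_of_backlog = 0.40        # "more than 40 percent"

# If $200B is MORE than 40% of the backlog, the total must be UNDER this:
implied_ceiling = anthropic_commitment / share_of_backlog
print(f"Implied ceiling on Google's backlog: ${implied_ceiling / 1e9:,.0f}B")
# -> roughly $500B
```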

According to Financial Times analysis published this week, Anthropic and OpenAI between them account for more than half of the roughly $2 trillion in long-term cloud commitments held across AWS, Azure, GCP, and Oracle. Two AI customers, more than a trillion dollars in contracted cloud spend.

| Cloud provider | Anthropic-OpenAI share of backlog |
| --- | --- |
| Amazon Web Services | Substantial across both labs (Anthropic Trainium contracts, OpenAI workloads) |
| Microsoft Azure | Majority OpenAI |
| Google Cloud | Over 40 percent Anthropic alone (per The Information) |
| Oracle Cloud | Majority OpenAI |

For Google, the practical effect is that capacity planning at one of the three largest cloud providers in the world is now driven primarily by what Anthropic does with Claude. TPU manufacturing roadmaps, power purchase agreements with utilities, fiber routes between data centers, and custom Broadcom silicon are all sized against Anthropic's projected demand.

Anthropic's Math Has Changed Too

A year ago, two hundred billion dollars would have been an unthinkable commitment for Anthropic. The math is different now.

Anthropic disclosed on April 6, 2026 that its annualized run rate had reached $30 billion, more than triple where the company stood at the end of 2025. The disclosure landed in the same press cycle as the Google-Broadcom TPU expansion and put Anthropic ahead of OpenAI on the same metric for the first time.

| Date | Annualized run rate |
| --- | --- |
| End of 2025 | $9 billion |
| Series G close (February 2026) | $14 billion |
| Latest disclosure (April 6, 2026) | $30 billion |
| Year-end 2026 target | $34.5 billion |

Bloomberg reported the company is running roughly eight months ahead of its $34.5 billion year-end target. Anthropic is also reviewing preemptive investor offers that would value the company at up to $900 billion.
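The multiples in that table are easy to verify (a quick sketch using only the figures disclosed above):

```python
# Check the growth claims against the disclosed run-rate figures.
run_rate = {
    "end_2025": 9e9,        # $9B, end of 2025
    "feb_2026": 14e9,       # $14B, Series G close
    "apr_2026": 30e9,       # $30B, latest disclosure
    "target_2026": 34.5e9,  # $34.5B, year-end target
}

# "More than triple" since the end of 2025:
print(f"{run_rate['apr_2026'] / run_rate['end_2025']:.2f}x")  # ~3.33x

# Gap to the year-end target at the April disclosure:
gap = run_rate["target_2026"] - run_rate["apr_2026"]
print(f"${gap / 1e9:.1f}B remaining of the $34.5B target")    # $4.5B
```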

Server spending follows the same curve. Anthropic projects more than $20 billion in compute outlays this year alone, roughly triple the figure for 2025.

OpenAI, the only frontier lab buying at a comparable pace, is projecting close to $45 billion for the same period.

The point of the two-hundred-billion-dollar commitment is that the chips have to be ordered before the revenue exists to pay for them. Foundries, power utilities, and Broadcom cannot turn on multi-gigawatt capacity in less than two to three years. If Anthropic waits until Claude usage justifies a fresh gigawatt of inference, the gigawatt will not exist when needed. The deal pre-pays the supply chain by half a decade.
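A straight-line reading makes that front-running concrete (hypothetical: the contract's actual payment ramp has not been disclosed, so an even split across the five years is an assumption):

```python
# If the $200B commitment were spread evenly across five years,
# how does the implied annual spend compare with current outlays?
# NOTE: the even split is an assumption; the real ramp is undisclosed.
total_commitment = 200e9   # five years, beginning 2027
years = 5
compute_2026 = 20e9        # Anthropic's projected compute outlay this year

avg_annual = total_commitment / years
print(f"Implied average annual spend: ${avg_annual / 1e9:.0f}B")             # $40B
print(f"Multiple of this year's outlay: {avg_annual / compute_2026:.1f}x")   # 2.0x
# The contract locks in roughly double today's compute bill, every year,
# before the revenue that would cover it exists.
```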

The Other Side: A Concentration Problem

Not everyone reads the deal as bullish. Several Wall Street analysts cited by Sherwood News and BigGo Finance flagged the same question: a 40 percent customer concentration in a single five-year contract is the kind of risk a hyperscaler would normally flag to public-market investors as a material risk factor.

If Anthropic stumbles, slows, or is constrained by regulators, Google's revenue backlog absorbs the impact directly. The Financial Times wrote that the dependency runs both ways: Google needs Anthropic's revenue to justify the chip and power capex, and Anthropic needs Google's TPUs to keep training. Neither side can easily replace the other on the timelines the contract sets.

There is also a regulatory question. Google holds an equity position of up to $40 billion in Anthropic on top of being its largest compute supplier. Antitrust commentators in the Financial Times, The Decoder, and Cybersecurity Dive have raised the question of whether the combined investment-and-supplier posture should be treated as a vertical relationship for review purposes. No formal investigation has been announced.

A third concern is reliability. Anthropic still trains and serves Claude across three chip families: Google TPUs, Amazon Trainium, and NVIDIA GPUs. The April 21 disclosure of a separate $25 billion Amazon investment, repaid through Trainium consumption, suggests that even after the Google commitment, Anthropic intends to keep diversifying. Whether the diversification holds in practice once Google TPUs carry a 40 percent share of inference is a question only future filings will answer.

What Practitioners Should Watch

For data scientists and engineers who depend on the Claude API, the new line item changes very little in the immediate term. Claude pricing on the public API has not moved since the Series G closed in February. Anthropic has signaled no immediate change to enterprise rates, which continue to scale with volume rather than with the underlying chip cost.

The longer-term reads are sharper.

Capacity bottlenecks should ease starting in 2027. The Broadcom-built TPU shipments scheduled for that timeframe are sized to support multi-gigawatt inference, which means rate-limit problems on Claude during peak hours have a known fix on the way.

Open-weight competitors fall further behind. A two-trillion-dollar cloud backlog dominated by two closed-model labs is a structural moat that Mistral, Cohere, DeepSeek, and the open-weight ecosystem cannot fund out of venture capital alone.

And the Google-Anthropic relationship is now load-bearing for both companies. Any product change on Google's side, whether to Vertex AI, Gemini's roadmap, or TPU pricing, can no longer be made without reference to how it affects Claude.

The biggest near-term constraint on AI workloads at most non-hyperscaler shops is reserved TPU and GPU capacity. With the largest single buyer of TPUs now contractually locked in through 2032, that constraint will not ease substantially before 2027.

The Bottom Line

In April, Anthropic agreed to spend tens of billions with Google. In May, the figure became $200 billion. The escalation took less than thirty days. Most of the difference is what happens when a five-year commitment is denominated in 2027 prices for chips and power that do not yet exist.

What the deal really records is a structural fact about the AI buildout. The compute side of the industry has consolidated into roughly four buyers: Anthropic, OpenAI, Meta, and a small constellation of sovereign-backed labs. Those buyers are supplied by three sellers: NVIDIA, Google's TPU ecosystem, and AWS Trainium. Within that small set, two relationships now move the rest of the market. Anthropic-Google is one of them.

"Anthropic Tripled Its Revenue in 90 Days. Then It Signed for 3.5 Gigawatts" explained how the compute side ran ahead of the revenue side. The two-hundred-billion-dollar commitment is what happens when the revenue side starts catching up.

The deal begins next year.


Explore all career paths