Broadcom Agrees to Produce Google's Future TPUs

Broadcom will manufacture future generations of Google’s tensor processing units (TPUs) and has expanded a partnership that gives Anthropic access to roughly 3.5 gigawatts of TPU-based compute capacity beginning in 2027. The arrangement, disclosed in regulatory filings and company statements on April 6–7, 2026, builds on Broadcom’s existing role producing Google’s TPUs and a prior 1 GW supply commitment. Anthropic says the expanded capacity supports rapid commercial growth: its annualized revenue exceeds $30 billion, and it counts more than 1,000 business clients spending over $1 million annually. Broadcom shares rose on the news. For practitioners, the deal formalizes a long-term TPU supply chain and signals materially larger TPU deployments for multi-GW model training and inference workloads.
What happened
Broadcom agreed to produce future versions of Google’s custom tensor processing units and expanded a multi‑party arrangement with Anthropic that gives the startup access to roughly 3.5 gigawatts of TPU-driven compute capacity, with deployment timelines concentrated around 2027. The disclosures were published in filings and company commentary dated April 6–7, 2026.
Technical context
Google designs the TPU architectures; Broadcom is already a manufacturing partner that assembles and supplies those chips and associated networking components at scale. Anthropic’s arrangement leverages Google’s data-center TPU fabric and Broadcom’s production role to secure multi-GW capacity, more than triple the 1 GW supply Broadcom cited as already underway in early 2026. Multi-GW TPU allocations materially change the economics and feasible scale for training large foundation models and operating high-throughput inference fleets.
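To make the scale concrete, here is a back-of-envelope sketch of how a gigawatt figure translates into accelerator count. The per-chip power draw and PUE (power usage effectiveness) values below are illustrative assumptions, not disclosed specs for Google's next-generation TPUs:

```python
def estimate_chip_count(total_gw: float, watts_per_chip: float, pue: float = 1.2) -> int:
    """Rough accelerator count supportable by a power envelope.

    total_gw       -- total facility power in gigawatts
    watts_per_chip -- assumed per-accelerator draw, including a share of
                      host CPU and networking power (an assumption)
    pue            -- facility overhead multiplier (cooling, power
                      conversion); 1.2 is a typical modern value (assumption)
    """
    usable_watts = total_gw * 1e9 / pue  # power remaining for IT load
    return int(usable_watts // watts_per_chip)

# Assuming ~1,500 W per accelerator slot:
print(estimate_chip_count(3.5, 1_500))  # the new ~3.5 GW allocation
print(estimate_chip_count(1.0, 1_500))  # the prior 1 GW commitment, for comparison
```

Under these assumed figures, 3.5 GW corresponds to roughly 1.9 million accelerator slots versus roughly 0.56 million for 1 GW, which is why multi-GW commitments change what training runs are feasible.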
Key details
- Capacity: ~3.5 GW of TPU-based compute access for Anthropic, with the majority deployed in the U.S. and rollout starting in 2027.
- Commercial scale: Anthropic reports annualized revenue exceeding $30 billion and over 1,000 enterprise customers paying more than $1 million annually.
- Supply chain: Broadcom will produce next-gen TPUs and is described as a long-term partner in Google’s TPU roadmap.
- Market signal: Broadcom shares rose in extended trading after the announcement.
Why practitioners should care
This is a structural move in AI infrastructure. Securing multi‑GW TPU capacity for a single startup validates TPU‑centric architectures as a viable large‑scale alternative to GPU fleets for both training and inference. For ML engineers and infrastructure teams, expect increased availability of TPU‑native tooling, more vendor coordination on custom silicon supply chains, and potentially lower latency or cost profiles for TPU-optimized models. For organizations planning model scale‑ups, the news signals that capacity bottlenecks can be addressed via integrated partnerships between hyperscalers and semiconductor manufacturers.
What to watch
- Performance and pricing terms for Anthropic’s TPU allocations versus GPU alternatives.
- Google/Broadcom roadmaps: technical specs and per-chip performance/power metrics for the next-gen TPUs.
- How other AI companies react: whether they pursue similar long-term manufacturing agreements or diversify across GPUs and other accelerators.
Scoring Rationale
This deal materially affects AI infrastructure supply and capacity—critical for practitioners planning multi‑GW training or large inference fleets. The story is recent and directly relevant to model scaling and procurement strategies.
