Broadcom Secures Expanded Custom AI Chip Deals
Broadcom strengthened its position in AI infrastructure with multi-gigawatt custom chip commitments from major hyperscalers. The company extended a Meta Platforms agreement covering an initial 1 gigawatt of custom AI chips through 2029, and broadened its partnership with Alphabet with an additional 3.5 gigawatts of TPU-related capacity beginning in 2027. Broadcom already holds roughly $21 billion in TPU orders from Anthropic and has said it is on pace to deliver $100 billion of custom AI chips in fiscal 2027. The wins reinforce Broadcom's networking and switching franchise, including its Tomahawk Ethernet line, by coupling accelerator supply with data center connectivity demand. Executive moves include CEO Hock Tan stepping off Meta's board into an advisory role tied to the custom chip roadmap. For practitioners, the story signals stronger vendor diversification in hyperscaler silicon and materially larger, multi-year demand commitments for custom accelerators.
What happened
Broadcom won expanded, multi-gigawatt custom-AI chip commitments with major hyperscalers, deepening its role in the AI infrastructure stack. The company extended a deal with Meta Platforms that includes an initial 1 gigawatt commitment through 2029, broadened its partnership with Alphabet with an additional 3.5 gigawatts of capacity starting in 2027, and already carries about $21 billion in TPU orders from Anthropic. Broadcom projects delivering $100 billion of custom AI chips in fiscal 2027, and CEO Hock Tan will move from Meta's board to an advisory role supporting the multi-generation chip roadmap.
Technical details
Broadcom is participating across accelerators and networking, which matters for deployment efficiency and total system throughput. Key product and capability notes:
- MTIA 300, MTIA 400, MTIA 450, MTIA 500: Meta announced four generations of MTIA chips; MTIA 300 is already used for ranking and recommendation training, while the later generations are engineered to handle inference at scale and broader AI workloads.
- TPU-related commitments: the Alphabet expansion ties Broadcom into future iterations of Google's TPUs, adding 3.5 gigawatts of ordered capacity from 2027.
- Tomahawk networking synergy: Broadcom's leading Ethernet switching portfolio directly complements accelerator deployments by scaling east-west data flows inside AI clusters.
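The east-west networking point above can be made concrete with a rough sizing sketch. The figures below are illustrative assumptions, not Broadcom specifications: 51.2 Tb/s matches a Tomahawk 5-class switch ASIC, and the simple non-blocking two-tier leaf-spine topology (leaf ports split evenly between hosts and spine uplinks) is a common but by no means the only way such clusters are built.

```python
# Rough sizing sketch for a non-blocking two-tier leaf-spine AI cluster
# built from one class of Ethernet switch ASIC. Illustrative only: the
# 51.2 Tb/s figure is Tomahawk 5-class; topology and port split are
# assumptions, not Broadcom specifications.

def leaf_spine_capacity(asic_tbps: float, port_gbps: int):
    """Return (ports_per_asic, hosts, bisection_tbps) for a two-tier
    fabric where each leaf splits its ports 50/50 between hosts and
    spine uplinks (the non-blocking case)."""
    ports = int(asic_tbps * 1000 // port_gbps)  # total ports per ASIC
    host_ports = ports // 2                     # half face hosts
    leaves = ports                              # max leaves = spine radix
    hosts = leaves * host_ports                 # accelerator endpoints
    bisection_tbps = hosts * port_gbps / 1000 / 2
    return ports, hosts, bisection_tbps

# A 51.2 Tb/s ASIC with 400 GbE ports yields 128 ports per switch,
# supporting 8,192 host ports in a single two-tier fabric.
print(leaf_spine_capacity(51.2, 400))
```

The takeaway is scale coupling: every additional accelerator port implies matching switch capacity, which is why multi-gigawatt accelerator commitments pull Ethernet switching demand along with them.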
Context and significance
Hyperscalers are committing to vertically integrated silicon stacks, and Broadcom's multi-gigawatt deals show they are diversifying beyond incumbent GPU providers and in-house designs. The combination of large, multi-year capacity commitments and networking leverage gives Broadcom higher revenue visibility and tighter integration between accelerators and data center networking. That matters for system architects and ML infrastructure teams planning capacity, power, and interconnect budgets.
What to watch
Execution risk is the primary open question: converting order commitments into delivered capacity, maintaining margins under scale, and aligning supply-chain and packaging timelines through 2027. Also monitor customer concentration and how Broadcom prices multi-generation roadmaps versus competing GPUs and custom chips.
Scoring Rationale
Large, multi-gigawatt commitments from Meta and Alphabet materially strengthen Broadcom's position in AI infrastructure and increase revenue visibility toward its $100B fiscal 2027 target. The development is important for practitioners planning data center capacity and interconnects, but its ultimate industry impact depends on execution and supply ramp.