Amazon's custom chips reach a $20B run rate

Amazon's custom silicon portfolio, including Graviton CPUs, Trainium AI accelerators and Nitro DPUs, has surpassed a $20 billion annual revenue run rate, CEO Andy Jassy said on the company's Q1 2026 earnings call, per The Register and Convergedigest. "If our chips business was a standalone business and sold chips produced this year to AWS and other third parties as other leading chip companies do, our annual revenue run rate would be $50 billion," Jassy said, as The Register reported. The company disclosed multi-gigawatt capacity commitments from frontier AI labs, including roughly 2 GW from OpenAI and up to 5 GW from Anthropic, and The Register reported that Trainium2 supply is largely allocated while Trainium3 has begun shipping. Convergedigest also reported AWS revenue of $37.6 billion, up 28% year-over-year, and that Amazon deployed over 2.1 million AI chips in the past 12 months.
What happened
Amazon disclosed that its custom silicon business, comprising Graviton processors, Trainium AI training accelerators and Nitro data-plane units, has exceeded a $20 billion annual revenue run rate, CEO Andy Jassy said on the company's Q1 2026 earnings call, as reported by The Register and Convergedigest. On that call Jassy said, "If our chips business was a standalone business and sold chips produced this year to AWS and other third parties as other leading chip companies do, our annual revenue run rate would be $50 billion." The company also reported multi-gigawatt commitments for Trainium, including approximately 2 GW from OpenAI and up to 5 GW from Anthropic; per The Register, Trainium2 capacity is largely sold out and Trainium3 has started shipping.
Technical details
Per earnings-call coverage in The Register and the Convergedigest summary, Jassy characterized Trainium2 as delivering about 30% better price-performance than comparable GPUs and cited large multi-year, multi-gigawatt training commitments from OpenAI and Anthropic. Convergedigest reported that Amazon deployed over 2.1 million AI chips in the past 12 months and plans to roll out more than 1 million NVIDIA GPUs starting in 2026. Convergedigest also noted platform-level growth: Amazon Bedrock customer spend rose 170% quarter-over-quarter, AWS revenue was $37.6 billion, up 28% year-over-year, and AWS operating income was $14.2 billion.
Industry context
Editorial analysis: Companies operating at hyperscaler scale that integrate silicon, systems and cloud services can shift the economics of AI compute. Public reporting frames Amazon's disclosure as evidence that a cloud provider can aggregate large internal demand and secure multi-gigawatt commitments from major model developers, changing how capacity is booked and how suppliers price large-scale training runs. Observers have noted that vertically integrated offerings can compress total cost of ownership for customers who commit at scale, and the reported Trainium commitments from OpenAI and Anthropic are material examples of that dynamic.
Business implications
Editorial analysis: For the AI infrastructure market, a $20 billion run rate within Amazon's custom silicon portfolio, combined with reported multi-gigawatt deals, implies stronger competitive pressure on incumbent GPU suppliers and an enlarged addressable market for hyperscaler-designed accelerators. Reported statements that Trainium2 is largely allocated and Trainium3 is shipping indicate supply tightness in AWS-managed silicon offerings. Dealroom and other outlets also reported that Amazon may consider selling Trainium externally, which coverage frames as a potential expansion of the addressable customer base. These are reporting observations, not company-committed plans.
What to watch
Editorial analysis: Observers should track:
- whether Amazon follows through on any external Trainium sales beyond cloud consumption, noting that Dealroom reports the company "may sell Trainium AI chips to third parties"
- the pace at which reported commitments from OpenAI and Anthropic ramp into production workloads, noting that OpenAI's commitment is reported to ramp in 2027
- how the NVIDIA GPU deployments reported by Convergedigest interact with Trainium availability and customer demand

Additional signals include Amazon's disclosed capital expenditure plans and any formal third-party channel or OEM agreements that would enable Trainium outside AWS.
Bottom line
Editorial analysis: Reported figures make Amazon's custom silicon a material, strategic component of its cloud business and of the broader AI compute market. For practitioners, the structural takeaway in public coverage is that hyperscalers are increasingly treating custom silicon and long-term capacity commitments as levers for both cost and supply control; teams planning large training runs should watch contract terms, availability windows and the evolving balance between accelerator types.
Scoring Rationale
The story reports a major hyperscaler-scale milestone: a **$20 billion** run rate and multi-gigawatt commitments from leading labs, which materially affect AI compute supply dynamics. Fresh earnings-call disclosures and deployment figures make this notable for practitioners designing large-scale training and infrastructure strategies.

