Google Cloud begins selling TPUs to select customers

Google Cloud will begin selling its custom TPUs to a select group of customers for installation in their own data centers, Alphabet CEO Sundar Pichai said on the Q1 2026 earnings call, as reported by Yahoo and The Register. The company unveiled its eighth-generation TPUs, including a TPU 8t for training and a TPU 8i for inference, at Google Cloud Next, per IBD. CFO Anat Ashkenazi told investors that Google will record some revenue from TPU hardware sales this year, with a larger balance-sheet impact expected in 2027, and that such revenues will "fluctuate from quarter to quarter," according to The Register. The announcement follows multi-gigawatt TPU deals with customers including Anthropic and moves Google into more direct competition with Nvidia and other cloud providers, per Yahoo and The Register.
What happened
Google Cloud will begin selling its custom TPUs to a select group of customers to run in those customers' own data centers, Alphabet CEO Sundar Pichai said on the Q1 2026 earnings call, as reported by Yahoo and The Register. The company unveiled its eighth-generation TPUs at Google Cloud Next, including a TPU 8t aimed at training and a TPU 8i aimed at inference, according to IBD. The Register reported that CFO Anat Ashkenazi said Google will record some TPU hardware revenue this year, with a more pronounced balance-sheet impact in 2027, and cautioned: "It is important to keep in mind that revenues from TPU hardware sales will fluctuate from quarter to quarter depending on when TPUs are shipped to customers."
Technical details
Per public reporting around Google Cloud Next, the eighth-generation TPUs were presented as successors to prior general-purpose accelerators used for both training and inference workloads, with distinct 8t and 8i variants described by IBD. Yahoo and The Register reported a previously announced multi-gigawatt TPU agreement with Anthropic, with chips expected to begin coming online in 2027, and noted that Google has also signed large chip agreements with other hyperscalers and enterprise customers. The Register additionally reported Google Cloud's Q1 figures: revenue of just over $20 billion, up 63% year over year from $12.26 billion, and a backlog of $460 billion, alongside quarterly capital expenditures of $35.7 billion concentrated on technical infrastructure.
Industry context
Industry reporting frames the move as putting Google into closer hardware competition with Nvidia and with cloud providers exploring third-party sales of custom silicon, such as Amazon Web Services, which public comments and filings suggest may also sell its home-grown chips to external customers, per reporting summarized by Yahoo and The Register. Yahoo notes that Nvidia has downplayed the threat from cloud-provider silicon, arguing that its chips offer broader flexibility for AI developers.
Editorial analysis
Companies that offer their custom accelerators to external customers can expand their addressable market beyond hosted cloud capacity, but they also take on greater engineering and supply-chain complexity. Similar transitions have typically required supporting diverse integration environments, providing firmware and software stacks for third-party data-center deployments, and managing lumpy, contract-driven revenue recognition.
What to watch
Public signals to track include customer lists and deployment timelines for externally delivered TPUs, the cadence of reported hardware revenue in Alphabet filings, and performance and software compatibility details that will determine how easily enterprise AI teams can integrate TPUs into existing stacks. Observers should also monitor competing announcements from AWS and Nvidia, plus any follow-on pricing or support commitments announced at Google Cloud events or in investor materials.
For practitioners
The shift from strictly hosted accelerators to purchasable hardware changes procurement options for organizations with on-premises or hybrid AI infrastructure needs. Industry experience suggests engineering teams evaluating custom accelerators should benchmark end-to-end stack compatibility, orchestration support, and long-term firmware and software maintenance commitments before committing to large on-prem deployments; a minimal smoke test of the software path is sketched below.
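For teams whose first question is whether an existing JAX-based stack can even see and exercise these accelerators, the sketch below shows a minimal device check and matmul timing loop. It is an illustration only, not a recommended benchmark harness: the matrix size, dtype, and iteration count are arbitrary assumptions, and nothing here reflects TPU 8t or 8i specifics, which are not detailed in the cited reporting.

```python
# Minimal sketch: confirm JAX can see accelerator devices and time a simple
# jitted matmul. Sizes and iteration counts are illustrative assumptions.
import time

import jax
import jax.numpy as jnp


def main():
    # List accelerators visible to JAX; on a TPU host this reports TPU devices.
    print("Backend:", jax.default_backend())
    print("Devices:", jax.devices())

    # Tiny matmul microbenchmark as a smoke test of the compile-and-run path.
    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (4096, 4096), dtype=jnp.bfloat16)

    matmul = jax.jit(lambda a: a @ a)
    matmul(x).block_until_ready()  # warm-up / compilation

    start = time.perf_counter()
    for _ in range(10):
        matmul(x).block_until_ready()
    elapsed = (time.perf_counter() - start) / 10
    print(f"Mean matmul latency: {elapsed * 1e3:.2f} ms")


if __name__ == "__main__":
    main()
```

A check like this only verifies the basic software path; real evaluations would also cover orchestration (e.g. Kubernetes scheduling), multi-chip scaling, and framework coverage for the team's actual models.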
All quoted remarks and numerical figures above are drawn from the Q1 2026 earnings call and reporting by The Register, Yahoo, and IBD, as indicated inline.
Scoring Rationale
The story is a notable infrastructure development: a major cloud provider offering custom AI accelerators for on-prem use affects procurement, competition with Nvidia and AWS, and capacity planning for ML teams. It is not a paradigm shift but materially changes options for enterprise AI infrastructure.