OpenAI and Anthropic CFOs Race to Secure Compute Capacity

Observer reports that OpenAI CFO Sarah Friar and Anthropic CFO Krishna Rao, each roughly two years into their roles, are racing to secure compute capacity, raise capital, and track demand for A.I. workloads as their companies scale.
What happened
Observer reports that OpenAI CFO Sarah Friar and Anthropic CFO Krishna Rao are prioritizing access to compute as their firms scale, focusing on chip supply, capital, and demand forecasting. The article says both executives are roughly two years into their roles, and quotes Friar saying "there's not a lot of compute in 2026" in an interview published May 15. Observer attributes to Rao, on the May 13 episode of the Invest Like The Best podcast, the remark that "it is the most important thing in the company" and that he spends 30 percent to 40 percent of his time on compute-related decisions. The story reports OpenAI has more than 900 million weekly active ChatGPT users and that its enterprise sales team is "run ragged" by customer demand, per Observer. The article notes both companies use a mix of suppliers and reports that OpenAI works with Nvidia and Amazon and incorporates AMD chips, while Anthropic uses GPUs, TPUs, and Trainium chips, according to Observer.
Editorial analysis - technical context
Companies building large generative models currently face a capital and supply chain problem where compute capacity is the binding constraint. Industry-pattern observations: procurement typically spans spot cloud, committed cloud, and on-prem or co-located hardware to balance cost, latency, and availability. Contract types and lead times create exposure to price volatility, bidding cycles, and vendor allocation policies. For practitioners, that often means engineering and finance teams must coordinate on capacity profiles, utilization targets, and model throughput requirements rather than treating compute as a purely operational line item.
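The coordination between finance and engineering described above usually starts from a back-of-envelope training cost model. A minimal sketch, using the common 6·N·D FLOPs-per-run rule of thumb (N parameters, D training tokens); the GPU throughput, utilization, and hourly price below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope training cost model. The 6 * N * D total-FLOPs rule of
# thumb is a widely used approximation; all hardware numbers here are
# illustrative assumptions.

def training_gpu_hours(params: float, tokens: float,
                       gpu_flops: float, utilization: float) -> float:
    """Estimated GPU-hours for one training run.

    params:      model parameter count (N)
    tokens:      training tokens (D)
    gpu_flops:   peak FLOP/s per GPU (assumed, e.g. ~1e15)
    utilization: achieved fraction of peak (model FLOPs utilization)
    """
    total_flops = 6 * params * tokens          # forward + backward passes
    effective_flops = gpu_flops * utilization  # sustained throughput per GPU
    return total_flops / effective_flops / 3600

# Illustrative: a 70B-parameter model trained on 2T tokens,
# 1e15 FLOP/s peak per GPU at 40% utilization.
hours = training_gpu_hours(70e9, 2e12, 1e15, 0.40)
cost = hours * 2.50  # assumed $2.50 per committed GPU-hour
print(f"{hours:,.0f} GPU-hours, ~${cost:,.0f}")
```

Even a rough model like this makes the finance-engineering coupling concrete: halving utilization doubles both the GPU-hours needed and the capacity that must be contracted for.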
Editorial analysis
Observer frames the CFOs' roles as expanding beyond traditional finance responsibilities into compute portfolio management. Industry-pattern observations: when compute becomes a major line item, firms commonly shift to longer-term commitments, volume discounts, and diversified vendor relationships to reduce delivery risk. Those shifts raise operational complexity for SRE and MLOps teams, for example by requiring heterogeneous hardware support, cross-vendor tooling, and tighter telemetry on utilization and cost per inference.
For practitioners - what to watch
- Spot and reserved pricing from major cloud providers and GPU vendors for changes in effective cost of training and inference.
- Contract cadence and disclosure around long-term capacity commitments reported by public providers or in vendor earnings calls.
- Engineering signals such as adoption of mixed-precision, model parallelism techniques, or hardware-aware model architectures that reduce per-token compute consumption.
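The utilization and cost-per-inference telemetry mentioned above can be sketched as a single conversion from GPU-hour price and decode throughput to dollars per thousand tokens. All figures here are illustrative assumptions, not vendor quotes:

```python
# Cost-per-token telemetry sketch under assumed numbers: converts an
# accelerator's hourly price and sustained decode throughput into an
# effective serving cost. Every input value is illustrative.

def cost_per_1k_tokens(gpu_hour_price: float,
                       tokens_per_sec_per_gpu: float,
                       utilization: float) -> float:
    """Dollars per 1,000 generated tokens on one GPU.

    gpu_hour_price:         hourly cost of the accelerator (assumed)
    tokens_per_sec_per_gpu: sustained decode throughput at full load
    utilization:            fraction of time the GPU serves traffic
    """
    tokens_per_hour = tokens_per_sec_per_gpu * utilization * 3600
    return gpu_hour_price / tokens_per_hour * 1000

# Illustrative: $2.50/GPU-hour, 500 tokens/s, 60% fleet utilization.
print(f"${cost_per_1k_tokens(2.50, 500, 0.60):.4f} per 1K tokens")
```

This is why the engineering signals in the list matter financially: a mixed-precision or parallelism change that raises tokens per second per GPU lowers cost per token in direct proportion, without any new capacity contract.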
Observer is the sole source for the reported quotes, timing, and the 900 million ChatGPT figure cited above. The article does not provide a public statement from either company explaining procurement strategies beyond the quoted remarks.
Scoring Rationale
Securing compute capacity directly affects cost, latency, and feasibility of large model training and deployment, making this notable for practitioners who manage ML infrastructure and budgets. The story is company-level reporting rather than a technical breakthrough, so it is important but not industry-shaking.

