
A Year Ago, 9% of Businesses Paid for Anthropic. In April It Passed OpenAI.

LDS Team · Let's Data Science · 7 min read
Ramp's AI Index shows Anthropic reached 34.4% of business buyers in April, edging past OpenAI's 32.3% for the first time. Anthropic quadrupled its business adoption in twelve months. The Ramp economist who published the data used the same post to lay out three reasons the lead may not hold.

On Wednesday morning, Ara Kharazian published a number he had been tracking for a year. Anthropic, the maker of Claude, had passed OpenAI in business adoption for the first time. Kharazian is the lead economist at Ramp, the expense-management company whose monthly AI Index has become one of the few public windows into what companies actually pay for.

He called the result a "stunning reversal in the competitive market dynamics for AI model providers."

Then he spent the rest of the post explaining why he is still bearish on Anthropic.

The data is not ambiguous. In April, 34.4% of businesses on Ramp's platform paid for Anthropic, against 32.3% for OpenAI. A year earlier the gap ran the other way, and it ran wide. In May 2025, just 9% of businesses paid for Anthropic. The last twelve months are the story of that 9% becoming a lead.

The Climb Was Not Gradual

Over the past year, Anthropic quadrupled its business adoption. OpenAI, over the same stretch, grew its business adoption by 0.3%. Overall AI adoption among businesses on Ramp rose to 50.6%, which means the market kept expanding while OpenAI's share of it stalled.

Kharazian's read on how Anthropic did it is specific. "What Anthropic did worked really well," he told TechCrunch. The playbook, he said, was to "start with a very technical customer base, focus on their needs, really succeed in execution and then start broadening out through tools like Cowork." Anthropic had already taken the lead among heavy AI buyers in finance, technology, and professional services. The shift this month came from everywhere else: the broader population of companies where OpenAI's lead had been narrowing for months.

| Metric | Anthropic | OpenAI |
| --- | --- | --- |
| Share of business buyers, April 2026 | 34.4% | 32.3% |
| Change during April | +3.8% | -2.9% |
| Business-adoption growth, trailing 12 months | Roughly 4x | +0.3% |
| Share of business buyers, May 2025 | 9% | Held the lead |

The trajectory matters more than the single month. A one-month, two-point lead is not a structural advantage. A year of compounding monthly gains, against a competitor that barely moved, is a pattern.

What the Number Actually Measures

The Ramp AI Index is built from spend data, specifically corporate card and invoice payments made by the 50,000-plus companies that use Ramp. It counts whether a business is paying for a model provider, not how much that business gets from it.

That design has a known blind spot, and OpenAI pointed straight at it. "We are driving enterprise transformation at scale," an OpenAI spokesperson told Axios. "These are not engagements where customers pay with a credit card." Large enterprise contracts, negotiated and invoiced directly, are exactly the deals a card-spend dataset undercounts. OpenAI has also said it is on pace to out-earn Anthropic in total revenue this year.

Both things can be true. Anthropic can be winning the count of paying businesses while OpenAI still collects more dollars from a smaller number of very large accounts. The Ramp number is a measure of breadth, not depth.

A separate dataset points the same direction on breadth. On OpenRouter's developer leaderboard, which samples model usage rather than payments, OpenAI last ranked above Anthropic in December 2025.

The Economist's Three Warnings

The most useful part of Kharazian's post was not the headline. It was the section where he argued against it. He laid out three headwinds, and two of them are specific to Anthropic.

The first is incentives. Anthropic earns more when businesses consume more tokens, which gives it a reason to steer users toward larger, pricier models even when a cheaper one would be faster and good enough. Kharazian noted that Uber's CTO has said the company already burned through its entire 2026 AI budget. Cost-conscious firms that learn to route work to cheaper models are a direct threat to that revenue model. The same logic applies to OpenAI, but the next two points do not.

The second is reliability. Over recent weeks, Claude users have reported outages, rate limits, and slipping output quality. Anthropic moved quickly: it reset usage limits for all users in April and signed a compute deal with SpaceX to ease the capacity crunch. The complaints kept coming anyway.

The third is a self-inflicted cost problem. Kharazian's Econ Lab colleague Rafael Hajjar found that a recent Anthropic model update would triple token costs for any prompt that includes an image. For a company already fighting cost and compute constraints, Kharazian wrote, that roadmap choice is hard to explain.
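The scale of that roadmap choice is easy to sketch. A back-of-the-envelope calculation, where every number except the 3x multiplier from Hajjar's finding is a made-up placeholder:

```python
# Back-of-envelope: what tripling token costs on image-bearing prompts
# does to a monthly bill. All figures except the 3x multiplier are
# hypothetical placeholders, not real rate-card numbers.

PRICE_PER_MTOK = 3.0   # dollars per million input tokens (placeholder)
IMAGE_MULTIPLIER = 3   # the tripling reported for prompts with an image

def monthly_cost(prompts: int, tokens_per_prompt: int, image_share: float) -> float:
    """Blended monthly spend when a fraction of prompts includes an image."""
    base = prompts * tokens_per_prompt / 1_000_000 * PRICE_PER_MTOK
    # image-bearing prompts pay 3x; text-only prompts pay 1x
    return base * (image_share * IMAGE_MULTIPLIER + (1 - image_share))

before = monthly_cost(100_000, 2_000, image_share=0.0)   # 600.0
after = monthly_cost(100_000, 2_000, image_share=0.25)   # 900.0
print(f"${before:.0f} -> ${after:.0f}")
```

Even with only a quarter of prompts carrying an image, the blended bill rises 50% under these placeholder prices, which is the kind of arithmetic a cost-conscious procurement team will run.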

The Cheap-Model Threat Is the One to Watch

For data scientists and ML engineers, the headwind that matters most is the one about price, because it is already visible in the same Ramp data. Last month, some of the fastest-growing vendors on Ramp's platform were AI inference providers that resell cheap, open-source models.

The switching cost between frontier models keeps falling. OpenAI's Codex does much of the same agentic coding work as Claude Code, often more cheaply, and moving a workflow from one to the other is closer to a configuration change than a migration. Open-weight releases like DeepSeek V4 have made "good enough at a fraction of the price" a real procurement option rather than a hypothetical.
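The "configuration change" point can be made concrete. A minimal sketch of the routing pattern cost-conscious teams are building, with entirely hypothetical model names, prices, and a stubbed-out dispatch (real code would call each vendor's SDK):

```python
# Hypothetical sketch: provider switching as configuration, not migration.
# Model names and per-token prices below are placeholders, not real rates.

MODELS = {
    "frontier": {"provider": "anthropic", "model": "claude-x", "usd_per_mtok": 15.0},
    "budget": {"provider": "open-weights", "model": "deepseek-v4", "usd_per_mtok": 1.0},
}

def pick_model(quality_critical: bool) -> dict:
    """Route to the cheap model unless the task demands frontier quality."""
    return MODELS["frontier"] if quality_critical else MODELS["budget"]

def estimated_cost(cfg: dict, tokens: int) -> float:
    """Dollar cost of a call at the configured per-million-token price."""
    return cfg["usd_per_mtok"] * tokens / 1_000_000

cfg = pick_model(quality_critical=False)
print(cfg["model"], estimated_cost(cfg, 50_000))  # budget model, $0.05
```

Because the provider lives in a config dict rather than in the workflow code, swapping Claude for a cheaper open-weight model (or back) is a one-line change, which is exactly why the switching-cost moat keeps shrinking.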

That is the uncomfortable subtext of Anthropic's win. The same enterprises that adopted Claude for its quality are now building the muscle to measure exactly what that quality costs them per task. Some have started formally tracking how much their engineers use AI and what it returns. Adoption got Anthropic the lead. Cost discipline is what will test whether it keeps it.

The Bottom Line

Anthropic spent a year turning a 9% sliver of the business market into a lead over the company that defined the category. The breadth of the shift, across industries and company sizes, is not a fluke.

But the lead is two points wide, one month old, and built on a dataset that admits it cannot see the largest enterprise contracts. The same report that announced the milestone listed the ways it could come undone: token incentives that reward upselling, reliability complaints, and a pricing trajectory moving the wrong way while cheaper models improve every quarter.

Anthropic is heading toward a possible IPO on the strength of numbers exactly like this one. The metric flipped in April. Whether the customer loyalty it is supposed to predict flipped with it is a different question, and one month of data cannot answer it.

