
Altman Called the Pentagon Deal "Sloppy." 1.5 Million Users Had Already Left.

LDS Team
Let's Data Science
12 min read
Sam Altman admitted OpenAI's Pentagon contract was "opportunistic and sloppy" and renegotiated its terms. The QuitGPT movement reached 1.5 million participants anyway. Then OpenAI's own robotics lead resigned.

On March 3, 2026, Sam Altman sat down to explain himself.

Three days after OpenAI signed a classified military deal with the Pentagon, the company's CEO posted a statement on X admitting the rollout was rushed. "We were genuinely trying to de-escalate things and avoid a much worse outcome," he wrote, "but I think it just looked opportunistic and sloppy."

It was one of the fastest public admissions of fault from a major tech CEO in recent memory. It was also, by most measures, too late.

For context: The QuitGPT movement began in late February 2026 after OpenAI signed a deal to deploy its models on the Pentagon's classified network, just hours after Anthropic CEO Dario Amodei publicly refused the same contract. ChatGPT uninstalls surged 295% in a single day. Claude hit the top of the App Store. Our earlier coverage explains how it started.

How the Movement Reached 1.5 Million

By March 3, the day Altman published his mea culpa, QuitGPT organizers reported 1.5 million participants. The count included users who had canceled paid subscriptions, deleted the app, or signed the boycott pledge at quitgpt.org.

Some outlets cited even higher figures. The Insane App and Sovereign Magazine reported the number climbing past 2.5 million across the first two weeks of March, counting social media commitments alongside subscription cancellations. Those figures are harder to verify independently, but the core data point — 1.5 million confirmed actions by early March — appeared across multiple sources including BusinessToday and Euronews.

The surge was driven by a simple, devastating contrast. Anthropic's Dario Amodei had walked away from what was reportedly a $200 million contract, stating he "cannot in good conscience accede" to unrestricted military access. OpenAI signed an almost identical deal hours later.

That symmetry gave the boycott a clean moral argument: one AI CEO said no on principle; the other said yes for the money.

Altman Renegotiated. Critics Said It Wasn't Enough.

Facing the exodus, OpenAI moved quickly. By March 3, Altman confirmed the company was renegotiating the Pentagon deal to add explicit safeguards.

The revised contract, reported by Axios, added language barring OpenAI's technology from being used for domestic surveillance of U.S. persons. It also explicitly excluded the NSA and other intelligence agencies from accessing OpenAI services under the agreement without separate contract modifications.

Altman's public statement was direct. He said the company "shouldn't have rushed" the announcement and that the original timing, a Saturday release just one day after Anthropic was sanctioned, made OpenAI look like it was opportunistically filling a vacuum left by a punished competitor.

The Electronic Frontier Foundation, a digital rights nonprofit, was unmoved. In a March 2026 analysis titled "Weasel Words," the EFF argued the revised language was insufficient. "Secret agreements and technical assurances have never been enough to rein in surveillance agencies," the EFF wrote. "They are no substitute for strong, enforceable legal limits and transparency."

The EFF's specific concern was the phrase "consistent with applicable laws," which it argued the government had historically interpreted expansively, not as a genuine ban on surveillance.

The Insider Defection

On March 7, Caitlin Kalinowski, OpenAI's robotics hardware lead, resigned.

Kalinowski's departure was the most significant internal break over the Pentagon deal. In her public statement, she wrote that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."

Her specific complaint echoed Altman's own admission: the deal was rushed. But where Altman framed the problem as optics, Kalinowski framed it as process. The guardrails, she argued, had not been defined before the contract was signed.

TechCrunch, Bloomberg, NPR, and Fortune all covered the resignation. It reframed the story from a consumer boycott into an employee accountability question.

How It Unfolded

FEB 28, 2026
OpenAI signs Pentagon deal
Hours after Anthropic was blacklisted for refusing, OpenAI deploys models to classified Pentagon networks. ChatGPT uninstalls jump 295% in 24 hours.
MAR 3, 2026
Altman admits "opportunistic and sloppy" — renegotiates deal
CEO publishes statement calling the announcement rushed. Revised contract adds domestic surveillance ban. QuitGPT hits 1.5 million participants the same day.
MAR 4, 2026
Protest outside OpenAI HQ — and a skeptical headline
Around 50 protesters gather with chalk messages and signs reading "Sam Altman is watching you." The SF Standard publishes: "'QuitGPT' is more of a meme than a movement."
MAR 7, 2026
Caitlin Kalinowski resigns — the most senior internal break
OpenAI's robotics hardware lead quits, citing "surveillance without oversight and lethal autonomy without authorization." The dispute moves from streets to boardrooms.
MAR 13, 2026
Anthropic doubles Claude usage limits
Anthropic launches a promotion doubling off-peak usage for all plans through March 27. It reads as a direct play for still-churning ChatGPT subscribers.
MAR 19, 2026
The Register: Claude is closing the market share gap
Anthropic's business subscription share grew 4.9% month-over-month in February. OpenAI's fell 1.5%. Ramp data shows Claude at 24.4% of business subscription share, with new customers choosing Anthropic 70% of the time.
MAR 21, 2026
Stop the AI Race march: nearly 200 protesters, three company HQs
Organized by stoptherace.ai, the march runs from Anthropic to OpenAI to xAI across San Francisco, demanding Amodei, Altman, and Musk pause frontier AI development.

Anthropic's Numbers Told the Story

While the protests played out on the streets, the business data was unambiguous.

Anthropic's annualized revenue reached $19 billion in March 2026, up from $14 billion in February. That $5 billion monthly jump is, by any measure, extraordinary. The company had been adding over one million new daily signups for weeks.

The Register reported on March 19 that Claude's business subscription share had grown to 24.4%, compared to OpenAI's 34.4%. Ramp, an AI financial analytics firm, found that nearly one in four businesses using its platform now paid for Anthropic, up from one in 25 twelve months ago. Businesses selecting AI tools for the first time chose Anthropic approximately 70% of the time.

Business subscription data showed Claude gaining 4.9% month-over-month in February while OpenAI lost 1.5%. The direction of travel was clear.

On March 13, Anthropic made a move that looked calculated. It launched a promotion doubling Claude's usage limits during off-peak hours through March 27, covering all plans from free to Max. The timing, three weeks into the QuitGPT exodus, positioned it as a direct offer to subscribers still deciding whether to cancel ChatGPT.

The Counterargument Nobody Wanted to Hear

Not everyone accepted QuitGPT's framing.

The SF Standard ran a contrarian headline on March 4: "'QuitGPT' is more of a meme than a movement." The piece pointed to the gap between online pledges and actual cancellations, noted that 50 in-person protesters outside OpenAI's HQ was a modest showing for a movement claiming millions of adherents, and questioned whether app uninstall data translated to real, sustained subscription losses.

The critique landed. QuitGPT's numbers mixed hard data (cancellation records from quitgpt.org) with soft signals (social media posts, app deletions). The 1.5 million figure cited most often appeared to include all forms of participation. The portion representing actual canceled paid subscriptions was not independently audited.

Some AI researchers also challenged the moral simplicity of the Anthropic-versus-OpenAI narrative. Anthropic, they pointed out, accepted a $30 billion funding round in the same period, continuing to grow one of the most heavily capitalized AI labs in the world. Its refusal of the Pentagon contract did not mean it had opted out of the AI race, just one specific contract within it.

Whether the Revised Deal Actually Changed Anything

The amended contract's surveillance carve-outs drew scrutiny from the moment they were published.

TechPolicy.Press identified five unresolved issues in the revised terms. The EFF's analysis went further, arguing that the phrase "consistent with applicable laws" was not a real limit because intelligence agencies had spent decades defining surveillance as lawful. Under their interpretation, the revised contract's domestic surveillance ban meant roughly what the original contract meant: whatever the Pentagon could argue was legal.

The NSA exclusion was also questioned. Critics noted it applied to the current contract only and that separate agreements could be structured to bring the NSA back in.

OpenAI did not dispute these readings publicly. The company's position was that the revised terms were an improvement over the original, and that the original deal, whatever its flaws, had been preferable to leaving a vacuum that a less safety-conscious contractor might fill.

That argument, which amounted to "we're better than nothing," was not well received by QuitGPT organizers or civil liberties groups. It was, however, the same logic Altman had used to justify the original deal.

The Bottom Line

Sam Altman admitted the Pentagon deal was rushed. He amended the contract. He said OpenAI had genuinely been trying to prevent a worse outcome. Across three weeks, these admissions did not stop 1.5 million users from leaving, his head of robotics from resigning, or Anthropic from posting one of the fastest revenue ramps in AI history.

What QuitGPT revealed is less about a boycott's power to reverse a corporate decision and more about what it takes to destroy trust quickly. OpenAI did not need to act wrongly in every sense. It needed only to act in a way that felt wrong, at a moment when the alternative was visible and principled and one click away.

As Caitlin Kalinowski put it in her resignation statement: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."

The deliberation came afterward. That may be the problem.

