On March 3, 2026, Sam Altman sat down to explain himself.
Three days after OpenAI signed a classified military deal with the Pentagon, the company's CEO posted a statement on X admitting the rollout was rushed. "We were genuinely trying to de-escalate things and avoid a much worse outcome," he wrote, "but I think it just looked opportunistic and sloppy."
It was one of the fastest public admissions of fault from a major tech CEO in recent memory. It was also, by most measures, too late.
For context: The QuitGPT movement began in late February 2026 after OpenAI signed a deal to deploy its models on the Pentagon's classified network, just hours after Anthropic CEO Dario Amodei publicly refused the same contract. ChatGPT uninstalls surged 295% in a single day. Claude hit the top of the App Store. Our earlier coverage explains how it started.
How the Movement Reached 1.5 Million
By March 3, the day Altman published his mea culpa, QuitGPT organizers reported 1.5 million participants. The count included users who had canceled paid subscriptions, deleted the app, or signed the boycott pledge at quitgpt.org.
Some outlets cited even higher figures. The Insane App and Sovereign Magazine reported the number climbing past 2.5 million across the first two weeks of March, counting social media commitments alongside subscription cancellations. Those figures are harder to verify independently, but the core data point — 1.5 million confirmed actions by early March — appeared across multiple sources including BusinessToday and Euronews.
The surge was driven by a simple, devastating contrast. Anthropic's Dario Amodei had walked away from what was reportedly a $200 million contract, stating he "cannot in good conscience accede" to unrestricted military access. OpenAI signed an almost identical deal hours later.
That symmetry gave the boycott a clean moral argument: one AI CEO said no on principle; the other said yes for the money.
Altman Renegotiated. Critics Said It Wasn't Enough.
Facing the exodus, OpenAI moved quickly. By March 3, Altman confirmed the company was renegotiating the Pentagon deal to add explicit safeguards.
The revised contract, reported by Axios, added language barring OpenAI's technology from being used for domestic surveillance of U.S. persons. It also explicitly excluded the NSA and other intelligence agencies from accessing OpenAI services under the agreement without separate contract modifications.
Altman's public statement was direct. He said the company "shouldn't have rushed" the announcement, and that dropping it on a Saturday, the day after Anthropic was sanctioned, made OpenAI look like it was opportunistically filling the vacuum left by a punished competitor.
The Electronic Frontier Foundation, a digital rights nonprofit, was unmoved. In a March 2026 analysis titled "Weasel Words," the EFF argued the revised language was insufficient. "Secret agreements and technical assurances have never been enough to rein in surveillance agencies," the EFF wrote. "They are no substitute for strong, enforceable legal limits and transparency."
The EFF's specific concern was the phrase "consistent with applicable laws," which it argued the government had historically interpreted expansively, not as a genuine ban on surveillance.
The Insider Defection
On March 7, Caitlin Kalinowski, OpenAI's robotics hardware lead, resigned.
Kalinowski's departure was the most significant internal break over the Pentagon deal. In her public statement, she wrote that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
Her specific complaint echoed Altman's own admission: the deal was rushed. But where Altman framed the problem as optics, Kalinowski framed it as process. The guardrails, she argued, had not been defined before the contract was signed.
TechCrunch, Bloomberg, NPR, and Fortune all covered the resignation. It reframed the story from a consumer boycott to a question of employee accountability.
Anthropic's Numbers Told the Story
While the protests played out on the streets, the business data was unambiguous.
Anthropic's annualized revenue reached $19 billion in March 2026, up from $14 billion in February: a $5 billion increase in run rate within a single month, which is, by any measure, extraordinary. The company had been adding over one million new daily signups for weeks.
The Register reported on March 19 that Claude's share of business subscriptions had grown to 24.4%, against OpenAI's 34.4%. Ramp, a corporate spend-management platform whose transaction data tracks business AI adoption, found that nearly one in four businesses on its platform now paid for Anthropic, up from one in 25 a year earlier. Businesses selecting AI tools for the first time chose Anthropic approximately 70% of the time.
Business subscription data showed Claude gaining 4.9% month-over-month in February while OpenAI lost 1.5%. The direction of travel was clear.
On March 13, Anthropic made a move that looked calculated. It launched a promotion doubling Claude's usage limits during off-peak hours through March 27, covering all plans from free to Max. The timing, three weeks into the QuitGPT exodus, positioned it as a direct offer to subscribers still deciding whether to cancel ChatGPT.
The Counterargument Nobody Wanted to Hear
Not everyone accepted QuitGPT's framing.
The SF Standard ran a contrarian headline on March 4: "'QuitGPT' is more of a meme than a movement." The piece pointed to the gap between online pledges and actual cancellations, noted that 50 in-person protesters outside OpenAI's HQ was a modest showing for a movement claiming millions of adherents, and questioned whether app uninstall data translated to real, sustained subscription losses.
The critique landed. QuitGPT's numbers mixed hard data (cancellation records from quitgpt.org) with soft signals (social media posts, app deletions). The 1.5 million figure cited most often appeared to include all forms of participation. The portion representing actual canceled paid subscriptions was not independently audited.
Some AI researchers also challenged the moral simplicity of the Anthropic-versus-OpenAI narrative. Anthropic, they pointed out, accepted a $30 billion funding round in the same period, continuing to grow one of the most heavily capitalized AI labs in the world. Its refusal of the Pentagon contract did not mean it had opted out of the AI race, just one specific contract within it.
Whether the Revised Deal Actually Changed Anything
The amended contract's surveillance carve-outs drew scrutiny from the moment they were published.
TechPolicy.Press identified five unresolved issues in the revised terms. The EFF's analysis went further, arguing that the phrase "consistent with applicable laws" was not a real limit because intelligence agencies had spent decades defining surveillance as lawful. On that reading, the revised contract's domestic surveillance ban meant roughly what the original meant: whatever the Pentagon could argue was legal.
The NSA exclusion was also questioned. Critics noted it applied to the current contract only and that separate agreements could be structured to bring the NSA back in.
OpenAI did not dispute these readings publicly. The company's position was that the revised terms were an improvement over the original, and that the original deal, whatever its flaws, had been preferable to leaving a vacuum that a less safety-conscious contractor might fill.
That argument amounted to "we're better than nothing," and it was not well received by QuitGPT organizers or civil liberties groups. It was, however, the same logic Altman had used to justify the original deal.
The Bottom Line
Sam Altman admitted the Pentagon deal was rushed. He amended the contract. He said OpenAI had genuinely been trying to prevent a worse outcome. Across three weeks, these admissions did not stop 1.5 million users from leaving, his head of robotics from resigning, or Anthropic from posting one of the fastest revenue ramps in AI history.
What QuitGPT revealed is less about a boycott's power to reverse a corporate decision and more about what it takes to destroy trust quickly. OpenAI did not need to act wrongly in every sense. It needed only to act in a way that felt wrong, at a moment when the alternative was visible and principled and one click away.
As Caitlin Kalinowski put it in her resignation statement: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
The deliberation came afterward. That may be the problem.
Sources
- OpenAI CEO Sam Altman defends decision to strike Pentagon deal, admits 'optics don't look good' — Fortune (Mar 2, 2026)
- Sam Altman says OpenAI renegotiating 'opportunistic and sloppy' deal with the Pentagon — Fortune (Mar 3, 2026)
- OpenAI's Altman admits defense deal 'looked opportunistic and sloppy' amid backlash — CNBC (Mar 3, 2026)
- Scoop: OpenAI, Pentagon add more surveillance protections to AI deal — Axios (Mar 3, 2026)
- 'QuitGPT' is more of a meme than a movement — SF Standard (Mar 4, 2026)
- 'QuitGPT' protesters rally outside OpenAI HQ in San Francisco — Local News Matters (Mar 4, 2026)
- OpenAI hardware exec Caitlin Kalinowski quits in response to Pentagon deal — TechCrunch (Mar 7, 2026)
- OpenAI Robotics Chief Resigns Over Pentagon AI Deal Citing Ethical Concerns — Bloomberg (Mar 7, 2026)
- Weasel Words: OpenAI's Pentagon Deal Won't Stop AI-Powered Surveillance — Electronic Frontier Foundation (Mar 2026)
- Five Unresolved Issues in OpenAI's Deal With the Department of Defense — TechPolicy.Press (Mar 2026)
- Anthropic's Claude claws its way towards the top of AI chart — The Register (Mar 19, 2026)
- 1.5 Million Users Are Leaving ChatGPT. Should You Quit Too? — UC Strategies (Mar 2026)
- Over 1.5 million people join ChatGPT boycott ahead of protest at OpenAI's HQ — Cybernews (Mar 2026)
- Boycott movement against ChatGPT grows amid OpenAI's Pentagon deal — KTVU FOX 2 (Mar 2026)
- Stop the AI Race — March Press Coverage (Mar 21, 2026)