ChatGPT uninstalls surged 295% in a single day. Claude hit No. 1 on the App Store. Sam Altman admitted the deal was "opportunistic and sloppy." The first consumer revolt in AI history is changing the industry in real time.
On the evening of Thursday, February 27, 2026, Sam Altman posted a brief announcement on X. OpenAI had signed an agreement with the U.S. Department of War to deploy its AI models on classified military networks. The company published its full blog post the following morning.
Within 48 hours, the backlash was unlike anything the AI industry had ever seen. ChatGPT uninstalls spiked 295% in a single day. One-star reviews flooded the App Store, up 775% on Saturday alone. Anthropic's Claude rocketed to the No. 1 free app in America. And sidewalks outside OpenAI's San Francisco headquarters were covered in chalk graffiti reading "Show the contract" and "Take a stand for civil liberty."
This was not a Twitter spat. This was a consumer uprising that moved market share.
The backstory nobody saw coming
To understand the Cancel ChatGPT explosion, you need to rewind a few weeks. The movement did not start with the Pentagon deal. It started with a tax filing and an ICE contract.
In late January 2026, FEC filings revealed that OpenAI president Greg Brockman and his wife had each donated $12.5 million to MAGA Inc., the pro-Trump super PAC. That $25 million contribution made Brockman one of the largest individual donors to the Trump campaign ecosystem, according to the Brennan Center for Justice.
Around the same time, a Department of Homeland Security AI inventory disclosed that U.S. Immigration and Customs Enforcement was using a resume screening tool powered by ChatGPT-4. For many users, the combination was toxic: the president of the company behind their daily AI assistant was bankrolling Trump while the product itself was being used in immigration enforcement.
A loose coalition of activists, climate organizers, and self-described "cyber libertarians" launched QuitGPT.org in early February. Actor Mark Ruffalo amplified the cause, posting to his millions of followers: "ChatGPT's President is Trump's biggest donor. Their tech powers ICE. It's time to boycott." That post garnered over 36 million views and more than 1.3 million likes, according to Tom's Guide.
MIT Technology Review covered the movement on February 10, when it had attracted over 200,000 sign-ups. By mid-February, QuitGPT's organizers claimed the number had grown to 700,000 supporters who pledged to cancel or had already canceled their ChatGPT subscriptions, according to Tom's Guide.
But the Pentagon deal turned a simmering protest into a wildfire.
Timeline of the crisis
Here is how events unfolded:

- Late January: FEC filings reveal Greg Brockman's $25 million donation to MAGA Inc.; a DHS inventory discloses ICE's use of a ChatGPT-powered resume screening tool.
- Early February: QuitGPT.org launches; Mark Ruffalo's boycott post draws over 36 million views.
- February 10: MIT Technology Review covers the movement at 200,000 sign-ups; organizers later claim 700,000.
- February 26: Dario Amodei publicly refuses the Pentagon's demand for unrestricted military access to Claude.
- February 27: President Trump orders federal agencies to stop using Anthropic's products; Hegseth designates the company a "supply chain risk." Hours later, OpenAI announces its own Pentagon deal and closes a $110 billion funding round.
- February 28: ChatGPT uninstalls spike 295%; one-star reviews surge 775%; Claude hits No. 1 on the App Store.
- March 2: Anthropic launches its memory import tool and makes Claude's memory feature free for all users.
- March 3: Altman's internal memo acknowledges the deal was mishandled; OpenAI begins renegotiating the contract.

The speed of the escalation stunned industry observers. From Anthropic's refusal to OpenAI's renegotiation, the entire cycle played out in less than a week.
Anthropic draws the line
The Pentagon deal did not emerge in a vacuum. For weeks, the Department of Defense had been pressuring AI companies to grant unrestricted military access to their models.
Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a direct ultimatum in late February: allow the military to use Claude for "all lawful purposes" with no restrictions, or face consequences. The Pentagon wanted access without guardrails, specifically the ability to use AI models for domestic data collection and without requiring human oversight for weapons systems.
Anthropic refused. Amodei drew two red lines: no mass domestic surveillance of Americans, and no fully autonomous weapons. The company's position was firm and public. In a public statement on Anthropic's website on February 26, Amodei wrote:
"We cannot in good conscience accede to their request."
He added: "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
The Pentagon's response was swift and severe. On February 27, President Trump ordered all federal agencies to stop using Anthropic's products. Defense Secretary Hegseth designated Anthropic a "supply chain risk to national security", a label typically reserved for companies from adversarial nations like Huawei, according to Axios. The designation not only ended Anthropic's $200 million Defense Department contract but also barred any military contractor from doing business with the company.
Pentagon official Emil Michael was reportedly on the phone trying to offer Anthropic a last-minute deal at the exact moment Hegseth publicly tweeted the supply chain risk designation, according to Axios.
OpenAI steps in, hours later
The timing of what happened next is what ignited the firestorm.
Just hours after Anthropic was blacklisted for refusing the Pentagon's terms, OpenAI announced it had signed its own deal. The optics were devastating. As MIT Technology Review put it: "OpenAI's compromise with the Pentagon is what Anthropic feared."
OpenAI's blog post, titled "Our agreement with the Department of War," outlined three stated red lines: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions. The company emphasized this was a cloud-only deployment with OpenAI retaining "full discretion" over its safety stack.
But critics immediately spotted gaps. The original contract language allowed use for "all lawful purposes" and did not explicitly prohibit the collection of commercially acquired data on Americans, including geolocation, web browsing history, and personal financial information purchased from data brokers. This was the exact loophole Anthropic had refused to accept, according to Fortune.
OpenAI alignment researcher Leo Gao criticized his own employer on X, calling the stated safeguards "window dressing" on top of a contract that permitted broad use. Research scientist Aidan McLaughlin posted that he "did not think this deal was worth it," a statement that drew nearly 500,000 views, according to Fortune.
By Saturday, more than 90 current OpenAI employees had signed an open letter titled "We Will Not Be Divided," supporting Anthropic's stance and urging AI companies to "put aside their differences and stand together," as reported by TechCrunch. Employees at Google also joined as signatories, according to The Hill.
The exodus begins
The consumer response was immediate, measurable, and massive.
| Metric | Before (Feb 27) | After (Feb 28-Mar 1) | Change | Source |
|---|---|---|---|---|
| ChatGPT daily uninstalls (U.S.) | Baseline (~9% daily fluctuation) | 295% spike on Feb 28 | +295% day-over-day | TechCrunch / Sensor Tower |
| ChatGPT 1-star reviews | Baseline | 775% surge on Feb 28, +100% on Mar 1 | +775% on day one | Sensor Tower |
| ChatGPT 5-star reviews | Baseline | Dropped 50% | -50% | Sensor Tower |
| ChatGPT U.S. downloads | Baseline | -13% on Feb 28, -5% on Mar 1 | Declining | TechCrunch |
| Claude U.S. downloads | Baseline | +37% on Feb 27, +51% on Feb 28 | Surging | TechCrunch |
| Claude App Store rank | ~42nd (early Jan 2026) | No. 1 free app (Feb 28) | Top of charts | CNBC |
| Claude Android rank | Outside top 10 | Top 10 on Google Play | Top 10 entry | App store data |
Claude hit No. 1 on Apple's U.S. App Store on the evening of Saturday, February 28, overtaking ChatGPT, and remained there through Monday, March 2, according to CNBC. Anthropic reported a 60% increase in free users since January and said it had doubled its paid subscriber base over the past year, according to TechCrunch.
On Android, Claude broke into the top 10 on Google Play for the first time.
The QuitGPT campaign's claimed participant count surged to 1.5 million after the Pentagon deal, up from 700,000 just days earlier, though Cybernews noted this figure includes social media shares and website signups, not just verified cancellations. More than 17,000 people signed formal pledges on the QuitGPT website declaring they had canceled or would cancel their ChatGPT subscriptions.
Key Insight: Even the campaign's claimed 1.5 million participants amount to less than 0.2% of ChatGPT's total user base. But the signal those cancellations send to investors, employees, and the broader market is disproportionate to the raw numbers.
The chalk wars of San Francisco
The backlash was not limited to app stores and social media. It spilled onto the literal streets.
Mission Local reported that unknown San Franciscans covered the sidewalks outside OpenAI's offices with chalk messages attacking the Pentagon deal. Messages included "Show the contract" and "Take a stand for civil liberty." Meanwhile, the sidewalk outside Anthropic's nearby offices was decorated with supportive messages, including a viral piece of chalk art reading "you give us courage," according to Mission Local.
The physical protests underscored something unusual: this was not a niche tech-policy debate. Ordinary consumers were treating their choice of AI chatbot as a moral and political statement.
Altman blinks
By Monday, March 3, the pressure was too much. Sam Altman published an internal memo, portions of which were reported by Fortune, CNBC, and Axios, acknowledging the deal had been mishandled.
"The issues are super complex, and demand clear communication."
He continued:
"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
Altman announced that OpenAI was renegotiating the contract with the Pentagon. The key changes, as reported by Axios:
Original terms: The contract referenced only "private information" when prohibiting surveillance, leaving commercially acquired data (geolocation, browsing history, financial records from data brokers) unaddressed.
Revised terms: The new contract language explicitly states that OpenAI's AI systems shall not be "intentionally used for domestic surveillance of U.S. persons and nationals," consistent with the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978. The amendment now covers "commercially acquired" personal or identifiable information.
Altman also confirmed on X that the Pentagon has affirmed OpenAI's services will not be used by intelligence agencies such as the NSA, and any extension to those agencies would require a separate contract modification.
Key Insight: The renegotiation represents a rare case where consumer backlash directly altered the terms of a classified government contract. Whether the new language provides meaningful protection or remains, as Leo Gao described the original, "window dressing," will depend on enforcement mechanisms that remain unclear.
Anthropic seizes the moment
While OpenAI scrambled to contain the damage, Anthropic moved quickly to capitalize.
On March 2, Anthropic launched a memory import tool that lets users transfer their saved preferences and context from ChatGPT, Gemini, or any other AI provider directly into Claude. As reported by MacRumors and 9to5Mac, the tool works through a simple copy-paste process: users paste a ready-made prompt into their existing AI assistant, which exports everything it knows about them, then paste the result into Claude, which updates its memory accordingly.
Anthropic also made Claude's memory feature free for all users, removing a barrier that had previously limited it to paid subscribers. The timing was strategic: the company was making it as easy as possible for disgruntled ChatGPT users to switch without losing the context and personalization they had built up over months, as reported by 9to5Mac.
The combination of moral positioning and practical switching tools proved effective. Anthropic was not just the principled alternative. It was the convenient one.
The bigger picture: $730 billion meets $380 billion
The Cancel ChatGPT movement arrives at a precarious moment for both companies.
OpenAI finalized a $110 billion funding round on February 27, the same day as the Pentagon announcement, valuing the company at $730 billion, according to Bloomberg. The round included $50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank, making it the largest private financing in history, according to TechCrunch.
Anthropic, valued at approximately $380 billion after its February 2026 Series G round, has raised over $60 billion in total funding. But it now faces a government that has labeled it a national security risk. The company has said it will challenge the supply chain risk designation in court, but the designation potentially threatens its relationships with defense contractors and government-adjacent partners.
The irony is striking. OpenAI has never been richer. And yet it is bleeding users and trust at a rate that its $730 billion valuation did not anticipate. Anthropic has never been more politically vulnerable. And yet it is gaining users faster than at any point in its history.
| | OpenAI | Anthropic |
|---|---|---|
| Latest valuation | $730 billion | ~$380 billion |
| Latest funding | $110 billion (Feb 27) | $60 billion+ cumulative |
| App Store rank (Mar 2) | No. 2 (falling) | No. 1 (rising) |
| Government status | Pentagon contractor | "Supply chain risk" |
| Employee sentiment | 90+ signed letter opposing deal | Internal morale reportedly high |
| User trend | Uninstalls up 295% | Downloads up 51% |
What this means for the AI industry
The Cancel ChatGPT movement is the first time consumer backlash has materially shifted market share in the AI industry. Previous controversies, from deepfakes to copyright lawsuits, generated headlines but did not move download numbers. This one did.
Several dynamics are worth watching:
The ethics premium is real. For years, AI safety advocates have argued that responsible development would eventually become a competitive advantage. The events of late February 2026 are the strongest evidence yet that this thesis holds in the consumer market.
Switching costs are lower than anyone thought. AI chatbots have far weaker lock-in effects than traditional software. Users can and will leave if given a reason and a smooth migration path. Anthropic's memory import tool demonstrates that the barriers to switching are almost nonexistent.
Government contracts carry consumer risk. OpenAI's deal with the Pentagon may generate revenue, but it has cost the company something harder to quantify: the trust of its most vocal users. In an industry where word-of-mouth drives adoption, that trust deficit could compound.
The political dimension is permanent. The QuitGPT movement started over political donations and ICE contracts before the Pentagon deal ever happened. AI companies are now firmly part of the culture war, and users are treating their choice of chatbot the way they treat their choice of coffee shop or sneaker brand.
Retention is the real test. The initial surge in Claude downloads is dramatic, but the question that matters is what happens in 30 days. Will new Claude users stay, or will they drift back to ChatGPT once the news cycle moves on? Anthropic's decision to make memory free and launch an import tool suggests the company is thinking about retention, not just acquisition. If even a fraction of switchers become long-term users, the competitive landscape of the AI industry will look meaningfully different by the end of 2026.
Conclusion
The Cancel ChatGPT story is still unfolding. As of March 3, Claude remains No. 1 on the App Store. OpenAI is renegotiating its Pentagon contract. Anthropic is fighting a "supply chain risk" designation in what may become a landmark legal case about the government's power over private AI companies.
What is already clear is that the dynamics of the AI market have changed. Users have shown they will act on their values, and that they have alternatives. The era when ChatGPT could count on default loyalty, simply because it was first and most familiar, appears to be ending.
Sam Altman called the deal "opportunistic and sloppy." The users who uninstalled his app would probably use stronger words.
For more on how AI companies handle safety and ethics, read our deep dives on how large language models work and context engineering for LLMs.