On Friday, Trump ordered every federal agency to "immediately cease" using Anthropic's technology. On Saturday, the Pentagon used Claude to identify targets in the largest American military operation in decades.
On the evening of February 27, 2026, President Donald Trump posted a message on Truth Social that left no room for interpretation. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," he wrote. He called the company "RADICAL LEFT" and declared: "We don't need it, we don't want it, and will not do business with them again!"
Less than 24 hours later, the United States launched Operation Epic Fury, a massive coordinated strike campaign against Iran, hitting more than 1,250 targets across Iranian cities in the opening 48 hours. And according to multiple reports, the AI model powering critical elements of the operation's intelligence and targeting pipeline was Claude, built by the same company Trump had just blacklisted.
The contradiction is not subtle. It is the kind of thing that, in a less chaotic moment, might have dominated news cycles for weeks. But in the current environment, the story of how America's most important military AI tool belongs to a company the president publicly declared an enemy has been somewhat lost beneath the fog of the strikes themselves.
The feud that started it all
To understand how this happened, you have to go back several weeks.
In January 2026, Defense Secretary Pete Hegseth issued a memorandum titled "Artificial Intelligence Strategy for the Department of War," laying out his vision for accelerating America's military AI dominance. The memo directed that all Department of Defense AI contracts adopt standard "any lawful use" language within 180 days, according to Lawfare. In plain terms, the Pentagon wanted AI companies to hand over their models with no strings attached. Whatever the military deemed lawful, the AI should do.
This created a direct collision with Anthropic. The company's contract with the Pentagon, a two-year agreement worth up to $200 million, included two specific restrictions. Claude could not be used for mass domestic surveillance of American citizens. And it could not power fully autonomous weapons systems that make lethal decisions without human oversight.
Anthropic had held these positions since the beginning of the contract. Hegseth wanted them gone.
On February 13, Axios reported that tensions were escalating over Claude's reported use in the operation to capture Venezuelan dictator Nicolas Maduro. The Pentagon had used Claude through its integration with Palantir's platforms on classified networks for intelligence analysis during the raid, according to Fox News. Anthropic raised concerns about this use, and the relationship began to deteriorate.
By February 24, Axios reported that Hegseth had given Anthropic an ultimatum: remove all safeguards by Friday, February 27, or face consequences.
Anthropic drew two red lines
Anthropic CEO Dario Amodei did not blink.
"We cannot in good conscience accede to their request."
In a CBS News interview, Amodei laid out his position clearly. "We have these two red lines," he said. "We've had them from Day One." The two prohibitions were not negotiable: no mass surveillance, no autonomous weapons.
"We are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. We believe in defeating our autocratic adversaries. We believe in defending America."
Amodei explained that Anthropic objected not because it opposed working with the military, but because "we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons" and that deploying them that way would "endanger America's warfighters and civilians."
The Pentagon did not take this well. Emil Michael, the undersecretary of defense for research and engineering and a former Uber executive, publicly called Amodei "a liar" who "has a God-complex," according to Fortune. Hegseth called Anthropic "sanctimonious." And Trump branded it a "radical left, woke company" that was trying to "STRONG-ARM the Department of War."
On February 27, the hammer fell. Trump ordered all federal agencies to stop using Anthropic. The Pentagon designated the company a supply chain risk to national security, a classification normally reserved for foreign adversaries, according to the Washington Post. Agencies were given six months to phase out any Anthropic products, and military contractors were barred from doing business with the company.
Then the bombs started falling
The very next day, February 28, the United States and Israel launched Operation Epic Fury.
The scale was staggering. Joint strikes hit Tehran, Isfahan, Qom, Karaj, Kermanshah, and several other cities. Targets included Islamic Revolutionary Guard Corps command and control facilities, air defense systems, missile and drone launch sites, and military airfields, according to CNN. CBS News reported that 40 Iranian officials were killed, including Supreme Leader Ali Khamenei.
And Claude was in the loop.
According to reports from Futurism, the Times of Israel, and multiple other outlets, US Central Command (CENTCOM) used Claude models during the operation for three critical functions:
| Function | What Claude Did |
|---|---|
| Intelligence assessment | Analyzed intercepts, satellite imagery, and signals intelligence to evaluate threats, enemy positions, and situational awareness |
| Target identification | Located, prioritized, cross-referenced, and confirmed high-value targets including leadership compounds and strategic military sites |
| Battle scenario simulation | Modeled potential outcomes, rehearsed strike sequences, predicted risks and collateral damage, and supported operational planning |
Key Insight: Claude was not used to independently control weapons or make lethal decisions. Its role was decision-support, processing vast amounts of data and presenting analysis to human operators who made the final calls. But the line between "support" and "decision-making" grows thinner as AI becomes more deeply embedded in the kill chain.
Why the ban did not stop the bombs
The reason is straightforward: Claude is too deeply embedded to rip out overnight.
Claude is the only frontier AI model currently deployed across the Department of Defense's classified systems, according to Defense One. It operates through Anthropic's partnership with Palantir Technologies and runs on Amazon Web Services' GovCloud infrastructure. The integration spans intelligence platforms, operational planning tools, and battlefield analytics systems that took years to build.
The six-month phase-out timeline that Trump's own administration set implicitly acknowledged this reality. You cannot swap out the AI backbone of a military intelligence apparatus over a weekend, especially not the same weekend you are launching the largest US military operation since the 2003 invasion of Iraq.
As Michael Burry, the "Big Short" investor, noted in a widely shared post on X, the fact that the Pentagon needed six months to transition away from Claude proved how dependent the military had already become on a single AI vendor.
OpenAI rushed in, then admitted it was a mistake
Within hours of Trump's ban on Anthropic, OpenAI CEO Sam Altman posted on X that his company had struck a deal with the Department of Defense to deploy its models on classified networks. The deal was also worth $200 million, according to CNBC.
The timing could not have looked worse.
Earlier that same week, Altman had told employees that OpenAI shared the same "red lines" as Anthropic: no mass surveillance, no autonomous weapons. Then Anthropic got blacklisted for holding those lines, and OpenAI swooped in to take the contract.
The backlash was immediate. By Monday, Altman was in damage control mode.
"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
Altman admitted he "shouldn't have rushed" to announce the deal on the same Friday Anthropic was banned. He later posted what he described as an internal memo on X, saying OpenAI would amend the contract to include explicit language that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," according to CNBC.
The irony was not lost on observers. The Pentagon had accepted from OpenAI the exact same two restrictions it had punished Anthropic for demanding.
The timeline tells the story
Here is the sequence of events, compressed into little more than two weeks:

- February 13: Axios reports escalating tensions after Claude's reported use in the operation to capture Nicolas Maduro.
- February 24: Hegseth delivers his ultimatum: remove all safeguards by Friday, February 27, or face consequences.
- February 27: Anthropic refuses. Trump orders every federal agency to stop using the company's technology, the Pentagon designates it a supply chain risk, OpenAI announces its own $200 million deal, and the "We Will Not Be Divided" letter begins circulating at Google and OpenAI.
- February 28: The United States and Israel launch Operation Epic Fury, reportedly with Claude supporting intelligence assessment and targeting. ChatGPT uninstalls spike while Claude downloads surge.
- The following Monday: Altman walks back the timing of OpenAI's announcement, and the open letter reaches nearly 900 signatures.
900 tech workers drew their own red lines
The fallout did not stay inside the Pentagon or the C-suite. It spread to the engineers who actually build these AI systems.
An open letter titled "We Will Not Be Divided" began circulating among employees at Google and OpenAI on February 27, the same day as the Anthropic ban. By Monday, it had gathered nearly 900 signatures, approximately 800 from Google and nearly 100 from OpenAI, according to CNBC and Axios.
The letter took direct aim at the Pentagon's strategy of playing AI companies against each other.
"They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand."
The employees' demands focused on the same two red lines Anthropic had drawn: no domestic mass surveillance and no fully autonomous lethal systems. They called on company leadership to set explicit military-use limits rather than leaving boundaries to contract-by-contract negotiation.
The consumer response was equally dramatic. ChatGPT uninstalls in the US jumped 295% day-over-day on February 28, compared to a typical daily fluctuation of about 9%, according to TechCrunch. One-star reviews for ChatGPT surged 775% on Saturday. Meanwhile, Claude downloads rose 37% on Friday and 51% on Saturday. By Saturday, Claude's total daily US downloads surpassed ChatGPT's for the first time, and the app climbed to No. 1 on the US App Store, according to Fortune.
The deeper question nobody is answering
The Claude-in-Iran-strikes story reveals something more troubling than hypocrisy or bureaucratic inertia. It exposes how deeply AI is already woven into the fabric of modern warfare, and how unprepared our institutions are to govern that reality.
Consider the position the Pentagon is in. It spent years integrating Claude into its classified systems through Palantir and AWS GovCloud. The model processes satellite imagery, intercepts, signals intelligence. It helps identify targets and simulate outcomes. It has been used in at least two major military operations, the Maduro capture and the Iran strikes, in the span of weeks.
And the company that built it just got labeled a national security threat by the president of the United States.
Anthropic is not opposed to military work. Amodei has said repeatedly that the company wants to support national security. The company's own announcement when it first partnered with Palantir and AWS described the goal as advancing "responsible AI in defense operations." But it drew two lines: no mass surveillance of Americans, and no autonomous killing machines. These are not radical positions. They are, in fact, the same restrictions that OpenAI ultimately agreed to when it took over the contract.
The real question is whether any AI company can maintain ethical boundaries when the full force of the federal government is pushing in the opposite direction. Anthropic tried, and got blacklisted. OpenAI said it shared the same values, then rushed to take the deal, and had to walk back the optics within days.
Legal scholars at Lawfare have already questioned whether the Pentagon's designation of Anthropic as a supply chain risk will survive legal challenge. The designation was designed for foreign adversaries and compromised vendors, not domestic companies that disagree with contract terms. NYU's Stern Center for Business and Human Rights called the situation a test case for "the cost of conscience" in AI governance.
Key Insight: The Pentagon's own AI strategy memorandum demanded "any lawful use" language in all contracts. But what is lawful and what is wise are two very different things. Mass surveillance of Americans may be technically lawful under certain authorities. That does not make it a good idea, and the engineers who build these systems know it.
What comes next
The six-month phase-out of Claude from military systems has begun, at least officially. OpenAI and potentially xAI (Elon Musk's company) are being brought in as replacements. But defense analysts quoted by Defense One have noted that the transition will be far from simple. Claude's integration into Palantir's classified platforms is deep, and swapping one large language model for another in production military systems is not like switching apps on your phone.
The technical challenge is not just about replacing one model with another. Anthropic built custom "Claude Gov" models specifically for national security customers, according to the company's own documentation. These models were purpose-built for the security requirements and workflows of classified environments. Replicating that level of integration with a different model architecture, different fine-tuning, and different safety characteristics will take significant engineering effort, and carries its own operational risks during the transition period.
Meanwhile, the Iran operation continues. The initial strikes hit more than 1,250 targets. Iran has vowed retaliation, with missile attacks reported against US facilities in Qatar, Kuwait, the UAE, and Bahrain, according to CNN. The Red Crescent reported 201 civilian deaths and 747 injuries on the first day alone, with military deaths estimated at 1,300 by the organization Hengaw.
Scientific American noted that the Anthropic-Pentagon collision represents a "safety-first AI" philosophy crashing headfirst into the realities of wartime demand. As Claude expands into autonomous agents capable of executing multi-step tasks, the stakes of military AI governance will only grow higher.
Claude may have helped plan those strikes. Within weeks, it may be gone from the systems entirely. And the question of what AI does in war, and who gets to set the rules, will remain unanswered long after the last missile falls.
Conclusion
The story of Claude in the Iran strikes is not really about one AI model or one company. It is about the collision between three forces that are only going to get more powerful: the military's appetite for AI, the technology industry's ethical obligations, and the political machinery that can reward or punish companies based on whether they comply.
Anthropic built the most capable AI system the Pentagon has ever used on classified networks. It asked for two conditions: do not spy on Americans, and do not build autonomous killing machines. For that, it was called a national security threat.
The market responded by making Claude the most downloaded app in America. Nearly 900 engineers across Silicon Valley signed a letter saying the Pentagon's demands crossed a line. And the company that stepped in to replace Anthropic ended up adopting the same restrictions it was punished for requesting.
None of this is resolved. The AI is already in the weapons systems. The question now is whether the people who build it get any say in how it is used, or whether "any lawful use" becomes the only standard that matters.