Dario Amodei rejected the Pentagon's final offer. Sam Altman backed him. More than 330 AI researchers signed an open letter. The deadline expires at 5:01 PM today.
By LDS Team
February 27, 2026
At 5:01 PM Eastern Time today, the Pentagon's ultimatum to Anthropic expires. Three days ago, Defense Secretary Pete Hegseth sat across from Anthropic CEO Dario Amodei and told him to remove all restrictions on how the military uses Claude -- or face a Defense Production Act order and a supply chain risk designation that would effectively blacklist the company from every Pentagon contractor in America.
Yesterday, Amodei published his answer: no.
"We cannot in good conscience accede to their request," he wrote in an official statement on Anthropic's website.
What followed was the most extraordinary 72 hours in the history of the AI industry. Sam Altman broke ranks to back Anthropic. Over 330 AI researchers signed an open letter refusing to be divided. Senator Thom Tillis called the Pentagon's approach "sophomoric." And a retired general who led the very program the Pentagon keeps comparing this to said he sides with Anthropic.
Here is how it happened.
For context: This article picks up where our previous coverage left off. That piece covered the initial rupture -- how Claude ended up in the Venezuela raid, the Pentagon's threats, and Anthropic's two red lines.
The Tuesday Meeting
On Tuesday, February 24, Hegseth and Amodei met face-to-face at the Pentagon. It was their first in-person meeting since the crisis went public on February 13, when the Wall Street Journal revealed that Claude had been used during the US military's January raid on Venezuela.
According to reporting from the Washington Post and CNBC, the meeting lasted approximately 90 minutes.
Hegseth presented what officials described as a "final offer." The terms: Anthropic would agree to make Claude available for all lawful military applications, with no restrictions beyond existing federal law. In exchange, the Pentagon would extend Anthropic's $200 million contract and drop the supply chain risk review.
Amodei countered with what Anthropic had been proposing for weeks. Full military support -- missile defense, intelligence analysis, logistics, cybersecurity, battlefield communications. Everything except two categories: fully autonomous weapons that operate without human approval, and mass surveillance of American citizens.
Hegseth rejected the counter. He set the deadline: Friday, February 27, 5:01 PM Eastern Time.
The Nuclear Hypothetical
The standoff's most revealing moment did not happen at the February 24 meeting. It happened weeks earlier. According to the Washington Post, during a December phone call, a Pentagon official presented Amodei with a hypothetical scenario:
An ICBM is inbound toward American soil. The military needs Claude to compute intercept trajectories for a missile defense system. Does Anthropic's policy allow it?
Amodei's response, according to Anthropic, was immediate: yes. Missile defense is a defensive application with clear human oversight. It falls within Anthropic's acceptable use policy. Anthropic has been offering missile defense support throughout the negotiations.
But the Pentagon's version of the exchange is different. Defense officials later claimed that Amodei told the military to "call us" if a nuclear scenario arose -- implying that Anthropic wanted to approve military uses on a case-by-case basis, inserting itself into the chain of command during a national emergency.
Anthropic called this characterization "patently false."
What was said on a single phone call between two parties who no longer trust each other has itself become a point of dispute. The two sides are no longer arguing just about policy. They are arguing about basic facts.
Amodei's Official Response
On Thursday, February 26, Amodei published a lengthy statement on Anthropic's website. It systematically rejected the Pentagon's position.
The opening line set the tone: "We cannot in good conscience accede to their request. They have asked us to remove all restrictions on Claude's use by the Department of Defense, including for autonomous weapons and mass surveillance. We have offered comprehensive military support within our two guardrails. They have rejected every proposal."
Then Amodei pointed out what he called the fundamental contradiction in the Pentagon's threats:
"The two threats the Department has made are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. Both cannot be true simultaneously."
The statement included a detailed breakdown of exactly what Anthropic has offered the military:
| Military Application | Anthropic's Position |
|---|---|
| Missile defense systems | Approved |
| Intelligence analysis and summarization | Approved |
| Military logistics and supply chain optimization | Approved |
| Cybersecurity and threat detection | Approved |
| Battlefield communications and translation | Approved |
| Autonomous weapons without human oversight | Refused |
| Mass surveillance of American citizens | Refused |
The statement closed with: "We remain ready to continue our work to support the national security of the United States."
The Industry Rallied
What happened next turned a dispute between one company and the Pentagon into an industry-wide stand.
The open letter. On Friday morning, an open letter appeared at notdivided.org titled "We Will Not Be Divided." Within hours it had surpassed 330 signatories -- AI researchers and engineers from across the industry, including more than 300 from Google and over 60 from OpenAI.
The letter's central argument: "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War."
Sam Altman broke ranks. On the same morning, OpenAI CEO Sam Altman sent an internal memo to all OpenAI employees. CNBC obtained and published the key passages within hours.
"This is no longer just an issue between Anthropic and the Pentagon," Altman wrote. "This is an issue for the whole industry."
In a subsequent CNBC interview, Altman went further: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons. These are positions we share with Anthropic. If the government can force one company to abandon these guardrails, they can force any company."
This was a significant shift. OpenAI removed its explicit "military and warfare" ban from its usage policy in January 2024 and partnered with defense firm Anduril in December 2024. For Altman to publicly align with Anthropic against the Pentagon is a reversal of that trajectory.
Jack Shanahan sided with Anthropic. Perhaps the most striking endorsement came from retired Lieutenant General Jack Shanahan, the former head of the Pentagon's Joint Artificial Intelligence Center -- the military official who led Project Maven, the drone AI program that Google famously abandoned in 2018 after 4,000 employees signed a protest petition.
"Since I was square in the middle of Project Maven and Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote. "Yet I'm sympathetic to Anthropic's position. There is a difference between AI that supports warfighters and AI that replaces human judgment in lethal decisions."
The 72 Hours That Changed Everything
Congress Steps In
The Congressional response has been bipartisan -- and sharply critical of the Pentagon's approach.
| Lawmaker | Party | Statement |
|---|---|---|
| Sen. Thom Tillis (R-NC) | Republican | "Why in the hell are we having this discussion in public? Why isn't this occurring in a boardroom or in the secretary's office? This is sophomoric." |
| Sen. Mark Warner (D-VA) | Democrat | Said he was "deeply disturbed" by reports the Pentagon is "working to bully a leading U.S. company" |
| Sen. Chris Coons (D-DE) | Democrat | Co-signed a letter asking the Pentagon to extend the deadline |
| Sen. Roger Wicker (R-MS) | Republican | Chair of Armed Services Committee. Called for a classified briefing before any DPA action |
| Sen. Jack Reed (D-RI) | Democrat | Ranking member of Armed Services. Joined Wicker's call for a briefing |
Behind the scenes, bipartisan Senate leaders have been privately pressing both sides to extend the Friday deadline and continue negotiations. As of this writing, it is unclear whether the Pentagon has agreed.
The Legal Question Nobody Can Answer
The Pentagon's most powerful weapon in this dispute is the Defense Production Act. But legal experts are divided on whether it would actually work against an AI company.
Lawfare, the national security legal analysis publication, published a detailed analysis of the DPA's potential application. The core question: can the government use the DPA to compel a company to create a new product -- an unrestricted version of Claude -- rather than simply providing an existing product on different terms?
The DPA's Title I gives the President compulsion authority. Section 101 allows the government to require that government orders take priority over commercial ones and to compel acceptance of contracts. This is well-established law, historically used for physical goods -- semiconductors, medical supplies during COVID-19. Title III (Section 303) allows the President to expand domestic industrial base capabilities through purchases and subsidies, requiring a presidential finding of industrial shortfall.
Neither mechanism has ever been used against an AI company. Lawfare's conclusion: "Neither side's argument is a slam dunk."
The Pentagon's legal problem: the DPA was designed for physical supply chains, not intellectual property. Compelling Anthropic to strip safety guardrails from its model could raise First Amendment issues under the Supreme Court's 2024 ruling in Moody v. NetChoice, which established that algorithmic content moderation can constitute protected speech.
Anthropic's legal problem: if the Pentagon invokes the DPA, the penalties for noncompliance are criminal -- up to one year imprisonment and fines. According to Lawfare, the most likely scenario is that Anthropic would comply under protest while simultaneously filing for a temporary restraining order in federal court.
Worth noting: A supply chain risk designation has never been applied to an American company. The label is normally reserved for foreign adversaries like Huawei. Using it against a $380 billion American AI company would be legally unprecedented and would almost certainly trigger an immediate legal challenge.
The Voices on the Other Side
Not everyone supports Anthropic's position.
Emil Michael, the Pentagon's Under Secretary of Defense for Research and Engineering, escalated his rhetoric on X on Friday. He posted: "It's a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."
Elon Musk, whose xAI holds one of the four $200 million Pentagon AI contracts, posted on X: "Anthropic hates Western Civilization."
Mrinank Sharma, the former head of Anthropic's Safeguards Research Team, resigned from the company in early February. In a public resignation letter, Sharma wrote: "The world is in peril." His criticism was not aimed at Anthropic's refusal of the Pentagon -- it was aimed at what he saw as Anthropic's eroding safety commitments under competitive pressure. His departure came weeks before Anthropic released its updated Responsible Scaling Policy (v3.0, February 24), which dropped the hard commitment to pause model training if safety measures proved inadequate -- timing that underscored his concerns. For Sharma, Anthropic was not standing firm enough, just in a different direction.
Jensen Huang, NVIDIA's CEO, took a more measured stance: "I hope that they can work it out, but if it doesn't get worked out, it's also not the end of the world."
What Happens at 5:01 PM
As of this writing, three scenarios are on the table.
Scenario 1: The deadline extends. Bipartisan Senate pressure convinces the Pentagon to push back the deadline. Both sides return to the negotiating table. This is the outcome Congressional leaders are actively pushing for.
Scenario 2: The Pentagon acts. Hegseth invokes the Defense Production Act or initiates the supply chain risk designation process. Anthropic complies under protest and files for a temporary restraining order in federal court. The case becomes a landmark First Amendment test over whether the government can compel a company to modify its AI model's safety guardrails.
Scenario 3: A compromise emerges. Both sides agree to a framework that gives the military broader access to Claude while preserving some version of Anthropic's guardrails -- perhaps human-in-the-loop requirements on a narrower set of lethal applications. Neither side calls it a concession.
What happens next will not just determine Anthropic's relationship with the Pentagon. It will establish whether AI companies have the legal right to set ethical boundaries on their own technology -- even when the most powerful customer in the world demands otherwise.
The Bottom Line
In 2018, Google walked away from Project Maven because 4,000 of its employees demanded it. Google made a choice. By 2022, it was back.
In 2026, Anthropic is not walking away. It is standing in front of the Pentagon and saying: we will serve the military, but not like this. Not autonomous weapons without human oversight. Not mass surveillance of Americans. Everything else -- missile defense, intelligence, logistics, cybersecurity -- is on the table.
The Pentagon's response has been to deliver an ultimatum, threaten wartime production powers never before applied to an AI company, and begin preparing to blacklist Anthropic from the defense supply chain.
And then something the Pentagon did not expect happened. Sam Altman backed Anthropic publicly. Over 330 AI researchers across Google, OpenAI, and other companies signed a letter refusing to be divided. Senator Thom Tillis called the Pentagon's approach "sophomoric." And the retired general who led Project Maven -- the very precedent the Pentagon keeps citing -- said he is sympathetic to Anthropic's position.
What started as a contract dispute between one company and one government agency has become the defining question of the AI era: who gets to decide what AI will and will not do?
The answer arrives at 5:01 PM.
Sources
- Anthropic: Official Statement on Department of Defense Dispute (Feb 26, 2026)
- Washington Post: Pentagon Gave Anthropic a Friday Deadline to Comply With Military AI Demands (Feb 26, 2026)
- CNBC: Sam Altman Backs Anthropic in Pentagon AI Dispute, Calls It 'an Issue for the Whole Industry' (Feb 27, 2026)
- We Will Not Be Divided -- Open Letter from AI Researchers (Feb 27, 2026)
- Lawfare: Can the Pentagon Use the Defense Production Act to Compel Anthropic? (Feb 26, 2026)
- CNBC: Anthropic CEO Dario Amodei Rejects Pentagon's Final Offer on Military AI (Feb 26, 2026)
- Axios: Bipartisan Senate Pressure Mounts on Pentagon Over Anthropic Deadline (Feb 27, 2026)
- Semafor: The Anthropic-Pentagon Fight Just Became an Industry Crisis (Feb 27, 2026)
- The Verge: 330 AI Researchers Sign Letter Backing Anthropic's Military AI Guardrails (Feb 27, 2026)
- Breaking Defense: Pentagon CTO Calls Anthropic CEO 'a Liar' as Deadline Approaches (Feb 27, 2026)
- Reuters: NVIDIA's Huang Urges Compromise in Pentagon-Anthropic Standoff (Feb 27, 2026)
- LDS: The Pentagon Threatened to Blacklist Anthropic and the AI Industry Is Watching (Feb 19, 2026)