Tuesday evening, as most of Washington was wrapping up its workday, the Department of Defense filed a 40-page brief in a California federal court arguing that Anthropic could attempt to "disable its technology or preemptively alter the behavior of its model" in the middle of active military operations. The government's position: a private AI company that has drawn ethical lines around autonomous weapons and domestic surveillance cannot be trusted inside the U.S. military supply chain.
Within hours, the Democracy Defenders Fund released a counter-filing. One hundred and forty-nine former federal and state judges — appointed by both Republican and Democratic presidents — had signed an amicus brief declaring the Pentagon's legal reasoning defective and calling on the judiciary to step in.
The same government action meant to sideline Anthropic had, in the span of a single day, drawn the largest coalition of former jurists ever assembled in an AI case.
For context: This is the fourth article in our series covering this dispute. Read the earlier coverage: the initial ethics showdown in February, the February 27 deadline Anthropic refused, and the lawsuits filed March 9 that launched this legal fight.
The DOD's Argument: A Company With Ethics Is a Liability
The Pentagon's filing rests on a single, striking premise: because Anthropic has publicly stated ethical limits, it cannot be trusted in a war zone.
The 40-page brief argues that Anthropic might "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate 'red lines' are being crossed." The government's worry is not that Anthropic has done this. It is that Anthropic might, because the company has already shown it is willing to draw lines.
The filing is the DOD's first formal legal rebuttal since Anthropic filed suit on March 9.
The roots of the dispute go back to a $200 million contract Anthropic signed with the Pentagon last summer to deploy Claude within classified systems. Negotiations later broke down over two specific points. Anthropic refused to allow Claude to be used in fully autonomous lethal weapons systems where no human operator remains involved in targeting or firing. It also refused to allow mass domestic surveillance of American citizens. The Pentagon wanted authorization for "all lawful" uses with no carve-outs.
Pentagon Undersecretary Emil Michael put the military's position bluntly: "If you have a Chinese hypersonic missile coming in, you may have 90 seconds to respond." The argument is that in future wars, autonomous AI response speed, not ethical deliberation, will be decisive.
The 149 Judges: "Courts Have the Authority and Duty to Intervene"
The day before the DOD filed its rebuttal, a coalition organized by the Democracy Defenders Fund filed what may be the most unusual brief in this already unusual case.
One hundred and forty-nine former federal and state judges, appointed by presidents from both parties, argued that the DOD had misread the statute it invoked. The supply chain risk law, they said, is not a catch-all tool for procurement disputes. It is a narrow statute written to address malicious conduct — specifically "sabotage, unauthorized data extraction, or the introduction of unwanted functions by hostile actors."
Anthropic has not sabotaged anything. It has negotiated over contract terms. The judges' brief argues those are entirely different things, and that the Pentagon failed to follow the mandatory procedural requirements the statute demands before action can be taken.
The brief makes the case that courts do not have to defer to the executive branch merely because it invokes national security. "Courts have the authority and duty to intervene when the administration invokes national security concerns," the brief states, a direct challenge to the doctrine of broad executive deference in defense matters.
The coalition joins a remarkable list of backers that has assembled around Anthropic's legal challenge: Microsoft, Google and OpenAI employees (including Google Chief Scientist Jeff Dean), Catholic moral theologians, major tech industry groups, and former senior national security officials.
The Government's Supporters: Why Some Agree With the Pentagon
The "unacceptable risk" framing has defenders, and their arguments deserve a clear hearing.
National security hawks note that the underlying question is real: can the U.S. military depend on a private AI vendor that has reserved the right to define when its own technology may or may not be used? Paul Scharre, director of technology and national security at the Center for a New American Security, has noted that military operational dependency on commercial AI creates a genuine strategic vulnerability if vendors retain unilateral override authority.
Some critics argue Anthropic's position is less principled than it appears. The company drew its lines in negotiations over a government contract, not in a public policy forum. That looks, to some, like a company trying to use legal frameworks to win a commercial dispute it lost at the bargaining table.
Armed Forces Press published a piece titled "The Ideological Contamination of the Arsenal," arguing that allowing AI vendors to selectively opt out of military use cases on ethical grounds creates dangerous precedent for the entire defense supply chain. If Anthropic can say no to autonomous weapons, what stops the next company from saying no to targeting software, surveillance satellites, or nuclear command-and-control systems?
The DOD's worry about a company potentially disabling its own technology mid-operation is not purely hypothetical. Remote kill switches and model updates are real features of cloud-deployed AI. Operational dependency on a vendor that maintains that capability is a genuine risk, whatever the company's current intentions.
The Financial Damage Is Already Landing
While the legal arguments play out in court, the commercial damage is compounding.
CFO Krishna Rao, in a sworn declaration filed with the lawsuit, testified that the designation could reduce Anthropic's 2026 revenue "by multiple billions of dollars" and that the harm would be "almost impossible to reverse." Even under a conservative reading limited to direct government contracts, Rao said hundreds of millions in projected 2026 revenue are at risk.
The real-world losses have already started. One partner with a multi-million-dollar annual agreement replaced Claude with a competing AI system for an FDA-connected deployment, eliminating a revenue pipeline worth more than $100 million. Three financial institution negotiations have stalled as well, including one deal near closure and two others whose combined value exceeds $80 million.
The designation has effectively served as a warning to any enterprise client with government ties: working with Anthropic carries political risk. That chilling effect was arguably the point.
Amazon and Google, Anthropic's two largest investors, have not taken the company's side publicly. Amazon invested $8 billion in the company; Google's stake came from a $3 billion commitment. Both have said they will continue offering Anthropic's AI through their cloud platforms, but only for non-defense deployments. Amazon CEO Andy Jassy met with Defense Secretary Pete Hegseth and declined to back Anthropic when the dispute came up. Google, meanwhile, is deploying Gemini AI across the Pentagon's entire three-million-person workforce — precisely the contract Anthropic walked away from.
The China Problem Nobody in Washington Wants to Discuss
The dispute has generated an uncomfortable irony on the international stage.
Anthropic has spent years arguing in Washington that Chinese AI companies pose a national security risk because Beijing can compel them to share data or modify systems without transparency. The Trump administration has now made an identical argument about Anthropic: a company that might alter its technology based on its own judgment cannot be trusted in classified systems.
Chinese state media and analysts have leaned into this with evident relish. Atlantic Council researchers tracking PRC commentary note the state-affiliated press has framed the episode as proof that the American AI ecosystem is structurally unreliable and that China's military-civil fusion model offers institutional clarity the U.S. lacks.
Chatham House and the Council on Foreign Relations have both published pieces arguing the dispute reveals governance failures with consequences well beyond Washington. If the U.S. government treats its own leading AI lab as a national security threat, the argument goes, it signals to allies and competitors alike that American AI infrastructure is politically unstable.
What Happens Next
A hearing on Anthropic's request for a preliminary injunction — a temporary court order blocking the supply chain designation while the case is heard — is scheduled for March 24.
Lawfare, which covers national security law, published an analysis arguing the Pentagon's designation is unlikely to survive judicial review: the supply chain risk statute was written for foreign adversary threats, not domestic contract disputes, and, the analysis contends, the procedural requirements for its invocation were never followed.
But even if Anthropic wins in court, the precedent question lingers. The case has surfaced a gap in AI governance that no existing law addresses cleanly: when an AI company and the government disagree about how a military system should be used, who decides? Right now, there is no answer other than the courts.
The Bottom Line
A government trying to coerce an AI company into removing its own ethical guardrails has, by overreaching, produced the opposite of what it wanted. Instead of quietly forcing Anthropic's compliance, it has drawn Microsoft, Google and OpenAI employees, Catholic ethicists, and nearly 150 former judges into a public coalition arguing that the judiciary must protect private companies from executive overreach on national security grounds.
That coalition does not exist because people love Anthropic. It exists because the statute the Pentagon invoked was written to stop Chinese saboteurs, and the government used it against an American company for refusing to enable autonomous kill systems. The judges who signed that brief understand the difference.
The hearing is March 24. The question before the court is narrow: should a preliminary injunction block the supply chain designation while the full case proceeds? The question it will actually answer is larger: can the executive branch use national security labels to compel private companies to remove their own product limits?
"The government has ample, well-established tools to resolve procurement disputes," the tech industry groups' brief states, "and to contract with providers on whatever terms it prefers." It chose not to use them.
Sources
- DOD says Anthropic's 'red lines' make it an 'unacceptable risk to national security' — TechCrunch (March 18, 2026)
- Defense Department says Anthropic poses 'unacceptable risk' to national security — Engadget (March 18, 2026)
- Bipartisan Coalition of 149 Former Federal and State Judges Files Brief Supporting Anthropic's Suit Against the Department of Defense — Democracy Defenders Fund (March 18, 2026)
- Former judges side with Anthropic and raise concerns about Pentagon's use of supply chain risk label — CNN Business (March 17, 2026)
- Anthropic sues Pentagon over being labeled a national security risk — The Washington Post (March 9, 2026)
- Anthropic Says U.S. Blacklist Could Cut 2026 Revenue by Multiple Billions — Yahoo Finance / Bloomberg (March 2026)
- Tech firms back Anthropic in Pentagon lawsuit — Axios (March 16, 2026)
- Microsoft files amicus brief in support of Anthropic's DoD lawsuit — Data Center Dynamics (March 2026)
- Google and OpenAI employees back Anthropic in its legal fight with the Pentagon — Fortune (March 10, 2026)
- Chinese narratives around Anthropic highlight contradictions for the US — Atlantic Council (March 2026)
- Pentagon's Anthropic Designation Won't Survive First Contact with Legal System — Lawfare (March 2026)
- What Hegseth's "Supply Chain Risk" Designation of Anthropic Does and Doesn't Mean — Just Security (March 2026)
- Trump administration defends Anthropic blacklisting in US court — Al Jazeera (March 18, 2026)