
Pentagon Calls Anthropic an "Unacceptable Risk." 149 Judges Disagree.

LDS Team
Let's Data Science
On the same day the DOD filed its harshest legal attack yet — labeling Anthropic a threat to warfighting operations — a bipartisan coalition of 149 former federal and state judges filed a brief calling the Pentagon's maneuver unlawful and demanding the courts intervene.

Tuesday evening, as most of Washington was wrapping up its workday, the Department of Defense filed a 40-page brief in a California federal court arguing that Anthropic could attempt to "disable its technology or preemptively alter the behavior of its model" in the middle of active military operations. The government's position: a private AI company that has drawn ethical lines around autonomous weapons and domestic surveillance cannot be trusted inside the U.S. military supply chain.

Within hours, the Democracy Defenders Fund released a counter-filing. One hundred and forty-nine former federal and state judges — appointed by both Republican and Democratic presidents — had signed an amicus brief declaring the Pentagon's legal reasoning defective and calling on the judiciary to step in.

The same government action meant to sideline Anthropic had, in the span of a single day, drawn the largest coalition of former jurists ever assembled in an AI case.

For context: This is the fourth article in our series covering this dispute. Read the earlier coverage: the initial ethics showdown in February, the February 27 deadline Anthropic refused, and the lawsuits filed March 9 that launched this legal fight.

Feb 19, 2026
Negotiations collapse
Pentagon demands Anthropic allow Claude for "all lawful" uses. Anthropic refuses on two points: autonomous weapons and domestic mass surveillance.
Feb 27, 2026
Trump orders agencies to stop using Anthropic
President Trump directs all federal agencies to "IMMEDIATELY CEASE" use of Anthropic technology. Defense Secretary Hegseth designates the company a "supply chain risk."
Mar 9, 2026
Anthropic files two lawsuits
Suits filed in federal court in San Francisco and the D.C. Circuit, alleging First Amendment violations and misuse of the supply chain risk statute. CFO Krishna Rao warns the designation could reduce 2026 revenue "by multiple billions of dollars."
Mar 10–16, 2026
Industry rallies behind Anthropic
Microsoft files an amicus brief. More than 30 employees from OpenAI and Google DeepMind — including Google chief scientist Jeff Dean — file a separate brief. Catholic ethicists file their own. Major tech industry groups call for a pause on the designation.
Mar 17–18, 2026
149 former judges file brief — DOD strikes back
A bipartisan coalition of 149 former federal and state judges declares the Pentagon's action "substantively and procedurally unlawful." Hours later, DOD files its first rebuttal, calling Anthropic an "unacceptable risk to national security."
Mar 24, 2026
Preliminary injunction hearing
Federal court hears Anthropic's request to temporarily block the supply chain designation while the case proceeds.

The DOD's Argument: A Company With Ethics Is a Liability

The Pentagon's filing rests on a single, striking premise: because Anthropic has publicly stated ethical limits, it cannot be trusted in a war zone.

The 40-page brief argues that Anthropic might "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate 'red lines' are being crossed." The government's worry is not that Anthropic has done this. It is that Anthropic might, because the company has already shown it is willing to draw lines.

The filing is the DOD's first formal legal rebuttal since Anthropic filed suit on March 9.

The roots of the dispute go back to a $200 million contract Anthropic signed with the Pentagon last summer to deploy Claude within classified systems. Negotiations later broke down over two specific points. Anthropic refused to allow Claude to be used in fully autonomous lethal weapons systems where no human operator remains involved in targeting or firing. It also refused to allow mass domestic surveillance of American citizens. The Pentagon wanted authorization for "all lawful" uses with no carve-outs.

Pentagon Undersecretary Emil Michael put the military's position bluntly: "If you have a Chinese hypersonic missile coming in, you may have 90 seconds to respond." The argument is that future wars will be won by autonomous AI reaction speed, not ethical deliberation.

The 149 Judges: "Courts Have the Authority and Duty to Intervene"

The day before the DOD filed its rebuttal, a coalition organized by the Democracy Defenders Fund filed what may be the most unusual brief in this already unusual case.

One hundred and forty-nine former federal and state judges, appointed by presidents from both parties, argued that the DOD had misread the statute it invoked. The supply chain risk law, they said, is not a catch-all tool for procurement disputes. It is a narrow statute written to address malicious conduct — specifically "sabotage, unauthorized data extraction, or the introduction of unwanted functions by hostile actors."

Anthropic has not sabotaged anything. It has negotiated over contract terms. The judges' brief argues those are entirely different things, and that the Pentagon failed to follow the mandatory procedural requirements the statute demands before action can be taken.

The brief makes the case that courts do not have to defer to the executive branch merely because it invokes national security. "Courts have the authority and duty to intervene when the administration invokes national security concerns," the brief states, a direct challenge to the doctrine of broad executive deference in defense matters.

The coalition joins a remarkable list of backers that has assembled around Anthropic's legal challenge: Microsoft, Google and OpenAI employees (including Google chief scientist Jeff Dean), Catholic moral theologians, major tech industry groups, and former senior national security officials.

The Government's Supporters: Why Some Agree With the Pentagon

The "unacceptable risk" framing has defenders, and their arguments deserve a clear hearing.

National security hawks note that the underlying question is real: can the U.S. military depend on a private AI vendor that has reserved the right to define when its own technology may or may not be used? Paul Scharre, director of technology and national security at the Center for a New American Security, has noted that military operational dependency on commercial AI creates a genuine strategic vulnerability if vendors retain unilateral override authority.

Some critics argue Anthropic's position is less principled than it appears. The company drew its lines in negotiations over a government contract, not in a public policy forum. That looks, to some, like a company trying to use legal frameworks to win a commercial dispute it lost at the bargaining table.

Armed Forces Press published a piece titled "The Ideological Contamination of the Arsenal," arguing that allowing AI vendors to selectively opt out of military use cases on ethical grounds creates dangerous precedent for the entire defense supply chain. If Anthropic can say no to autonomous weapons, what stops the next company from saying no to targeting software, surveillance satellites, or nuclear command-and-control systems?

The DOD's worry about a company potentially disabling its own technology mid-operation is not purely hypothetical. Remote kill switches and model updates are real features of cloud-deployed AI. Operational dependency on a vendor that maintains that capability is a genuine risk, whatever the company's current intentions.

The Financial Damage Is Already Landing

While the legal arguments play out in court, the commercial damage is compounding.

CFO Krishna Rao, in a sworn declaration filed with the lawsuit, testified that the designation could reduce Anthropic's 2026 revenue "by multiple billions of dollars" and that the harm would be "almost impossible to reverse." Even under a conservative reading limited to direct government contracts, Rao said hundreds of millions in projected 2026 revenue are at risk.

The real-world losses have already started. One partner with a multi-million-dollar annual agreement replaced Claude with a competing AI system for an FDA-connected deployment, eliminating a revenue pipeline worth more than $100 million. Three financial-institution negotiations have also stalled, including one deal near closure and two others whose combined value exceeds $80 million.

The designation has effectively served as a warning to any enterprise client with government ties: working with Anthropic carries political risk. That chilling effect was arguably the point.

Amazon and Google, Anthropic's two largest investors, have not taken the company's side publicly. Amazon invested $8 billion in the company; Google's stake came from a $3 billion commitment. Both have said they will continue offering Anthropic's AI through their cloud platforms, but only for non-defense deployments. Amazon CEO Andy Jassy met with Hegseth and declined to back Anthropic when the dispute came up. Google, meanwhile, is deploying Gemini AI across the Pentagon's entire 3 million-person workforce — precisely the contract Anthropic walked away from.

The China Problem Nobody in Washington Wants to Discuss

The dispute has generated an uncomfortable irony on the international stage.

Anthropic has spent years arguing in Washington that Chinese AI companies pose a national security risk because Beijing can compel them to share data or modify systems without transparency. The Trump administration has now made an identical argument about Anthropic: a company that might alter its technology based on its own judgment cannot be trusted in classified systems.

Chinese state media and analysts have leaned into this with evident relish. Atlantic Council researchers tracking PRC commentary note the state-affiliated press has framed the episode as proof that the American AI ecosystem is structurally unreliable and that China's military-civil fusion model offers institutional clarity the U.S. lacks.

Chatham House and the Council on Foreign Relations have both published pieces arguing the dispute reveals governance failures with consequences well beyond Washington. If the U.S. government treats its own leading AI lab as a national security threat, the argument goes, it signals to allies and competitors alike that American AI infrastructure is politically unstable.

What Happens Next

A hearing on Anthropic's request for a preliminary injunction — a temporary court order blocking the supply chain designation while the case is heard — is scheduled for March 24.

Lawfare, which covers national security law, published an analysis arguing the Pentagon's designation is unlikely to survive judicial review: the supply chain risk statute was written for foreign adversary threats, not domestic contract disputes, and the procedural requirements for its invocation were not followed.

But even if Anthropic wins in court, the precedent question lingers. The case has surfaced a gap in AI governance that no existing law addresses cleanly: when an AI company and the government disagree about how a military system should be used, who decides? Right now, there is no answer other than the courts.

The Bottom Line

A government trying to coerce an AI company into removing its own ethical guardrails has, by overreaching, produced the opposite of what it wanted. Instead of quietly forcing Anthropic's compliance, it has drawn Microsoft, Google and OpenAI employees, Catholic ethicists, and nearly 150 former judges into a public coalition arguing that the judiciary must protect private companies from executive overreach on national security grounds.

That coalition does not exist because people love Anthropic. It exists because the statute the Pentagon invoked was written to stop Chinese saboteurs, and the government used it against an American company for refusing to enable autonomous kill systems. The judges who signed that brief understand the difference.

The hearing is March 24. The question before the court is narrow: should a preliminary injunction block the supply chain designation while the full case proceeds? The question it will actually answer is larger: can the executive branch use national security labels to compel private companies to remove their own product limits?

"The government has ample, well-established tools to resolve procurement disputes," the tech industry groups' brief states, "and to contract with providers on whatever terms it prefers." It chose not to use them.
