
600 Google Employees Asked Pichai Not to Sign the Pentagon Deal. On Tuesday, He Signed It Anyway.

LDS Team · Let's Data Science · 11 min read
Anthropic refused the Pentagon's classified work in February. On April 28, Google signed a contract letting the Defense Department use Gemini for "any lawful government purpose." Hundreds of its own scientists had warned against it twenty-four hours earlier.

The letter landed on Sundar Pichai's desk on Monday, April 27. More than 600 Google employees had signed it, including senior researchers from DeepMind, Cloud, and the company's AI policy team. At least 18 senior staff and more than 20 directors, senior directors, and vice presidents attached their names openly.

Their request was simple. Do not let Gemini be used in classified military operations.

Twenty-four hours later, the company did exactly that.

On Tuesday, April 28, Pentagon AI chief Cameron Stanley confirmed to CNBC that the Department of Defense had amended its contract with Google to extend Gemini access to classified networks. The new terms allow the U.S. military to use Google's most powerful AI model for "any lawful government purpose." Hours later the same day, Google quietly notified the Pentagon it was dropping out of a separate $100 million drone swarm prize challenge it had previously qualified for.

Eight years ago, an internal employee revolt over Project Maven made Google walk away from a much smaller military contract. This week, a larger revolt produced the opposite outcome.

Anthropic Said No. Google Said Yes.

The chain of events that led to Tuesday's deal began two months earlier, on February 27, when Anthropic CEO Dario Amodei publicly refused to expand the company's Pentagon work into classified deployments. Within weeks, the Defense Department designated Anthropic a "supply chain risk" and stopped contracting with the company.

The vacuum that opened was the largest single AI procurement opportunity in the federal government. Google moved into it.

According to reporting from Bloomberg, the Washington Post, and CNBC, Google had spent the early months of 2026 negotiating language meant to limit Gemini's classified use. The company's draft contract proposed bans on autonomous weapons and domestic mass surveillance without human oversight. The Pentagon rejected those bans as binding clauses. The final language permits Gemini to be used for any purpose the federal government considers lawful, with Google's restrictions reduced to non-binding recommendations the government can choose to ignore.

In Cameron Stanley's first public comments on the deal, the Pentagon's chief digital and AI officer told CNBC: "Overreliance on one vendor is never a good thing." He extended the metaphor in his own words: "You don't cook a Thanksgiving turkey in the microwave. You need to have the right technology for the right use case to achieve the right outcome."

Stanley's comments arrived four months into his tenure. He was named CDAO in January 2026 and inherited a Pentagon AI strategy already strained by the Anthropic blacklist. The Google deal is his first major procurement decision.

How It Unfolded

FEBRUARY 2025
Google removes weapons pledge from AI principles
DeepMind CEO Demis Hassabis co-authors a blog post citing "a global competition taking place for AI leadership" as the reason for striking the no-weapons clause.
FEBRUARY 27, 2026
Anthropic refuses Pentagon classified work
Dario Amodei publicly walks away from a multi-million-dollar Pentagon contract on principle.
MARCH 18, 2026
Pentagon designates Anthropic an "unacceptable risk"
DOD adds Anthropic to its supply chain risk list, effectively blacklisting the company from future contracts.
MARCH 2026
Google deploys Gemini to Pentagon's unclassified workforce
Three million Defense Department employees gain access to Gemini agents on unclassified networks.
APRIL 27, 2026
Google employee letter to Pichai
More than 600 employees, including 18+ senior staff and 20+ directors and VPs, sign an open letter calling on the CEO to reject classified Pentagon work.
APRIL 28, 2026
Google signs the classified deal
Pentagon AI chief Cameron Stanley confirms Gemini is now available for "any lawful government purpose" on classified networks.
APRIL 28, 2026
Google exits $100M drone swarm contest
On the same day as the classified deal, Google notifies the Pentagon it is withdrawing from a $100M competition to build voice-controlled autonomous drone swarms, citing "resourcing."

What the Letter Said

The April 27 employee letter was specific. The signatories did not call for Google to abandon government work entirely. They asked for a contractual line on what Gemini could be used for in classified settings.

"We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways," the letter read, according to copies obtained by the Washington Post and Euronews. The signatories wrote that this category "includes lethal autonomous weapons and mass surveillance, but extends beyond."

The letter argued that the only way to enforce restrictions on military AI use was to refuse classified work entirely. Once Gemini queries are running on air-gapped Pentagon networks, the signatories noted, Google has no visibility into what is being asked of the model, what is generated, or how those outputs are used.

One DeepMind senior research scientist quoted in IBTimes UK was harsher: "I think this violates 'don't be evil' quite clearly on many levels." The same researcher said they felt "incredibly ashamed right now to be Senior Research Scientist at Google DeepMind."

The pattern echoes Project Maven in 2018, when about 4,000 Google employees signed a similar petition over a Pentagon contract using AI to analyze drone footage. That earlier protest worked. Google declined to renew the Maven contract and published a set of AI principles in June 2018 that explicitly prohibited weapons development.

Those principles no longer exist in their original form. In February 2025, Google revised its AI rules and removed the weapons and surveillance prohibitions. The blog post announcing the change, co-authored by DeepMind CEO Demis Hassabis and Google's senior policy lead, cited geopolitical competition as the reason. The April 28 deal is the first major procurement built on the revised principles.

What Engineers Should Watch

For engineers and AI practitioners, the contract terms matter more than the politics. Three details from the reporting deserve attention.

The first is the "any lawful government purpose" clause. This phrase, used by Stanley and confirmed in Bloomberg's reporting, means the federal government, not Google, decides what counts as a permitted use case. Google can recommend restrictions but cannot enforce them on classified networks where it has no observability.

The second is the air-gapped deployment model. Defense Department classified networks do not allow vendors to inspect query logs or outputs. Once a Gemini variant is loaded onto a SIPRNet or JWICS host, Google's safety filters and monitoring tooling run in a black box from Mountain View's perspective.
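The observability gap described above is architectural, not contractual. As a purely illustrative sketch (every name here is hypothetical, not any real Google or Gemini API), a vendor-side safety pipeline typically pairs local inference with a telemetry channel back to the vendor; in an air-gapped deployment, the inference leg still works while the telemetry leg simply has nowhere to go:

```python
# Illustrative sketch only -- hypothetical names, not a real Gemini/Vertex API.
# Shows why vendor-side monitoring disappears in an air-gapped deployment:
# the model still answers, but the vendor's logging channel is absent.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Deployment:
    model: Callable[[str], str]                       # local inference, always available
    telemetry: Optional[Callable[[str, str], None]]   # vendor logging; None when air-gapped


def query(dep: Deployment, prompt: str) -> str:
    output = dep.model(prompt)
    if dep.telemetry is not None:
        dep.telemetry(prompt, output)   # connected: the vendor observes the exchange
    # air-gapped: no call leaves the enclave; prompt and output stay invisible
    return output


# Connected deployment: the vendor's log receives every exchange.
seen = []
connected = Deployment(model=lambda p: f"answer:{p}",
                       telemetry=lambda p, o: seen.append((p, o)))
query(connected, "status")
assert seen == [("status", "answer:status")]

# Air-gapped deployment: identical model behavior, zero vendor visibility.
airgapped = Deployment(model=lambda p: f"answer:{p}", telemetry=None)
assert query(airgapped, "classified prompt") == "answer:classified prompt"
assert len(seen) == 1   # nothing new reached the vendor
```

The point of the sketch is that no contract clause changes the second case: once the telemetry channel is physically absent, any usage restriction the vendor attaches can only be a recommendation.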

The third is the shift in vendor concentration. Stanley said the Pentagon now wants more, not fewer, AI suppliers and is actively contracting with OpenAI and other vendors to modernize wartime capabilities, according to his CNBC interview. Anthropic remains excluded.

| Vendor | Pentagon Status (Apr 29, 2026) | Classified Access | Notable Detail |
| --- | --- | --- | --- |
| Google | Active contract, expanded April 28 | Yes | Non-binding usage recommendations only |
| OpenAI | Active contract since early 2026 | Yes | Confirmed by Stanley as continuing partner |
| Anthropic | Blacklisted as supply chain risk | No | Refused classified work on principle |

The Other Side

Defense officials and several industry analysts argued the deal is necessary and overdue. Stanley's "Thanksgiving turkey" comment was aimed at critics inside the Pentagon who had pushed for a single-vendor strategy with OpenAI. By his framing, Tuesday's expansion is risk reduction, not escalation.

Former Pentagon procurement officials cited in TechCrunch and Android Authority noted that classified workflows already use AI tools from established defense contractors and that excluding the most capable commercial models from those workflows leaves U.S. operators behind state actors who are not asking permission. One unnamed Defense official told Bloomberg that the "any lawful government purpose" clause is standard language across major DOD software contracts and is not unique to AI.

Google leadership has not publicly responded to the open letter. The company's only on-record statement, given to multiple outlets, said Gemini deployment in classified settings is consistent with the company's revised AI principles and that Google maintains the right to recommend specific use restrictions. The company did not commit to publishing those recommendations or to disclosing whether the Pentagon followed them.

There is also a separate argument worth presenting fairly. The Pentagon's drone swarm program, which Google exited on the same day it signed the classified deal, was a more directly weapons-related contract than the language model integration. Some defense analysts read the same-day pairing as a deliberate ethical line, with Google accepting general-purpose classified work while declining a specific autonomous-weapons program. Others read it as cover, since Google had already qualified in the drone competition before withdrawing on what it described as "resourcing" grounds.

The Bottom Line

The shape of military AI procurement in the United States changed on Tuesday. For the first time, the most powerful general-purpose language model from a U.S. company is approved for classified Defense Department work under terms that give the government, not the developer, final say over use cases.

The decision was made over the explicit, signed objection of more than 600 Google employees, including some of the company's most senior AI scientists. It came two months after a competing AI lab walked away from the same work on principle, a refusal the Pentagon answered with a "supply chain risk" designation. And it was made by a company that, this same week, agreed to invest up to $40 billion in that competing lab.

The practitioner question is harder than the political one. Every engineer building on Gemini, on Vertex AI, or on Google Cloud's enterprise stack is now building on the same foundation that powers a classified Pentagon deployment whose use cases neither they nor Google can see. That is a new condition. It deserves to be reasoned about, not assumed away.

As one of the open letter's authors put it in a Slack channel after the deal was confirmed, in a quote published by Euronews: "Management redrew the map. The line we asked for isn't on it."

