May 1, 2026 — The Google sign still stands at the edge of the Googleplex in Mountain View, flanked by electric shuttles and bike racks, glowing under morning fog. Inside, the company says it’s proud to serve the Pentagon. That pride comes with a new contract expansion, one that allows the Department of Defense to use Google’s AI platform Gemini for ‘any lawful purpose.’ The phrase appears in black text across a U.S. government procurement notice released April 29 — dry, precise, and loaded. It’s not just a licensing term. It’s a line drawn under a decade of internal rebellion, public promises, and a slogan Google once etched into its identity: ‘Don’t Be Evil.’
Key Takeaways
- Google has expanded its existing contract with the Department of Defense, permitting use of its AI platform Gemini for ‘any lawful purpose.’
- The company states it is “proud” to support national defense, a shift from its earlier resistance to military AI projects.
- This move reignites ethical concerns about AI in warfare, especially after Google’s 2018 backlash over Project Maven.
- The phrase ‘Don’t Be Evil’ was demoted in Google’s corporate code of conduct in 2018, stripped from its preface and left only in a closing line.
- Gemini’s deployment under this clause could include logistics, intelligence analysis, or battlefield decision support — all within legal bounds defined by the DoD.
From Resistance to Full-Service Partner
In 2018, Google employees organized, petitioned, and walked out over Project Maven, a DoD initiative that used machine learning to analyze drone footage. The backlash was fierce, public, and internal. Engineers didn’t want their code enabling automated targeting systems. Google ultimately declined to renew Maven’s contract and introduced AI principles that excluded weapons and surveillance. It was a rare corporate retreat — one the company framed as ethical leadership.
But the pivot began quietly. By 2021, Google was reentering the defense orbit through cloud contracts, and in December 2022 it won a slot on the $9 billion JEDI successor, the Joint Warfighting Cloud Capability (JWCC), alongside Amazon, Microsoft, and Oracle. These were infrastructure plays, Google argued: compute and storage, not decision-making. AI was off-limits.
Now, that line is gone. The updated agreement explicitly permits Gemini, Google’s most advanced AI model, to be used across DoD operations as long as the use is lawful. That’s not a loophole. It’s an invitation.
The Meaning of ‘Any Lawful Purpose’
The phrase ‘any lawful purpose’ is not hyperbole in the original TechRadar report; it’s quoted directly from the contract documentation. And in legal terms, it’s broad, almost limitless. Lawful doesn’t mean ethical. It doesn’t mean transparent. It means compliant with U.S. law, military regulations, and executive orders as interpreted by the DoD’s own legal offices.
What could that include?
- Automated analysis of satellite imagery
- Real-time translation in war zones
- Predictive maintenance for military vehicles
- Resource allocation during humanitarian missions
- Support for command-and-control decision systems
It could also include AI-assisted targeting recommendations, surveillance pattern detection, or integration with autonomous weapons platforms — so long as those systems operate within existing legal frameworks. The distinction between support and deployment is thin. And it’s eroding.
Google’s Shifting Justification
The company’s messaging has evolved from defensive to assertive. In 2018, Sundar Pichai wrote that Google wouldn’t build AI for weapons. In 2026, a company spokesperson told TechRadar, ‘We’re proud to support the U.S. Department of Defense and its mission to protect national security.’
That pride is new. It’s not hedged. It’s not qualified. And it reflects a broader normalization of AI in defense tech — not just at Google, but across Silicon Valley. Palantir works with U.S. Special Operations. Anduril builds AI-driven drone swarms. Even OpenAI, once wary, has reportedly explored defense applications.
The Ghost of ‘Don’t Be Evil’
‘Don’t Be Evil’ was more than a slogan. For a generation of engineers, it was a covenant. It appeared in Google’s founding code of conduct. It was invoked in meetings, cited in resignation letters, and used to push back against ad targeting, censorship in China, and facial recognition.
It was all but retired in 2018, stripped from the code’s preface and left only as a closing aside, while parent company Alphabet had already adopted the vaguer, softer, easier-to-bend ‘Do the right thing.’ The demotion coincided with the Maven fallout. Critics called it a surrender. Supporters said it was realism.
Now, with Gemini cleared for Pentagon use under a blanket ‘lawful purpose’ clause, the shift is complete. The company no longer resists military AI. It enables it. And it does so proudly.
What Employees Are Saying
Internal sentiment is hard to measure, but not invisible. On anonymous tech forums, Google engineers have expressed unease. Some note that AI ethics reviews still exist — but they’re now part of a compliance process, not a veto power. Others point out that most Gemini work for the DoD is happening in cloud divisions, not core AI teams, creating psychological distance.
Still, the symbolic weight matters. When a company removes a moral guardrail, then signs a contract with near-total usage rights, the message is clear: ethics are contextual. Profit and patriotism can override principle.
This Isn’t Hypocrisy — It’s Evolution
Calling this reversal hypocrisy is too simple. Google isn’t lying. It’s adapting. The AI race is global. The Pentagon is a major funder. And U.S. policy increasingly treats AI dominance as a national security imperative.
In that context, opting out isn’t just bad business — it’s seen as unpatriotic. Google Cloud’s leadership, including CEO Thomas Kurian, has pushed hard to win government contracts. They’ve hired ex-DoD officials. They’ve built FedRAMP-compliant systems. They’ve positioned Google as a trusted partner, not a rebellious upstart.
That transformation required shedding old identities. ‘Don’t Be Evil’ didn’t scale. It couldn’t survive board meetings, procurement cycles, or geopolitical competition. What’s replacing it? A new doctrine: responsible use, within legal limits, in service of national interest.
What This Means For You
If you’re building AI systems today — whether at a startup, a cloud provider, or a research lab — Google’s move sets a precedent. Ethical boundaries are no longer set by engineers or manifestos. They’re negotiated in government contracts, shaped by compliance teams, and defined by what’s legally permissible.
That means your models could end up in defense applications whether you intend it or not. If your API is public, if your cloud is government-certified, if your company takes federal money, the path from research to battlefield is shorter than you think. Your code might not pull a trigger — but it might help decide where one gets pointed.
So ask the hard questions now. Not just ‘can we build this?’ but ‘who will use it, under what rules, and what happens when those rules change?’
Google once believed it could shape the world without becoming part of its machinery. Now it’s helping maintain it. The real question isn’t whether AI should serve the military — it’s whether any tech company can still afford to say no.
How DoD AI Spending Is Reshaping Tech Priorities
The Pentagon’s budget for AI and machine learning has grown steadily since 2020. In fiscal year 2026, the Department of Defense requested over $1.4 billion for AI-related programs through the Defense Artificial Intelligence Program (DAIP), a 22% increase from 2022. But that number understates the full investment. Billions more flow through department- and service-level initiatives: the Pentagon-wide Replicator autonomy effort, the Army’s AI Task Force, and the Navy’s Project Overmatch. These initiatives rely on commercial AI models, cloud platforms, and data pipelines built by private firms.
Google isn’t alone in chasing this funding. Microsoft’s Azure holds a $480 million contract with the Defense Information Systems Agency to deploy AI tools for battlefield logistics and cyber defense. Amazon Web Services supports the CIA’s classified cloud and has delivered AI-powered analytics to U.S. Special Operations Command. These contracts aren’t just about revenue — they’re about access. Access to real-world operational data, military feedback loops, and a regulatory shield that comes with government endorsement.
For Google, winning JWCC task orders gave it a foothold. Now, enabling Gemini across DoD systems accelerates integration. Unlike pure infrastructure deals, AI contracts influence how decisions are made. That proximity to operational workflows changes the company’s relationship with the military — from vendor to partner.
And the incentives are structural. Federal grants, research partnerships with DARPA, and procurement contracts create long-term dependencies. Once a company’s AI is embedded in command systems, retreat becomes technically and financially costly. The line between collaboration and complicity isn’t crossed overnight. It’s worn down by incremental contracts, compliance certifications, and quiet engineering adjustments.
The Bigger Picture: AI, Geopolitics, and the New Tech Consensus
This shift isn’t just about Google. It’s about a broader recalibration of Silicon Valley’s role in national security. After years of tension following Snowden-era surveillance revelations and social media’s role in election interference, tech companies are being pulled back into the national defense orbit — this time by AI.
China’s aggressive investment in military AI is a key driver. The People’s Liberation Army has tested AI-powered drone swarms and automated decision systems for naval operations, while state security services have blanketed Xinjiang with facial recognition. Beijing’s 2017 Next Generation Artificial Intelligence Development Plan called for global AI leadership by 2030. In response, the U.S. has framed AI competition as existential. The National Security Commission on Artificial Intelligence, chaired by Google alum Eric Schmidt, warned in 2021 that failure to act would lead to “a permanent strategic disadvantage.”
That thinking has reshaped procurement. The DoD now fast-tracks AI adoption through entities like the Defense Innovation Unit (DIU), which has funded startups including Shield AI and SparkCognition. In 2023, the Pentagon launched the Replicator initiative to field thousands of autonomous systems within 18 to 24 months. Google’s Gemini could support those efforts in target recognition, sensor fusion, or autonomous coordination, all under the umbrella of “lawful use.”
The result is a quiet consensus: resisting military AI is no longer a tenable position for major tech firms. Even companies that once vowed to stay out are shifting. In 2023, Meta declined to renew a DoD research grant. By 2025, it began informal talks about AI collaboration through its Fundamental AI Research (FAIR) team. OpenAI, despite its initial charter, now hosts classified government teams on isolated Azure instances managed by Microsoft. The firewall is gone. The only debate left is about the terms of engagement.
Competing Visions: How Other Tech Giants Are Navigating the Line
Google’s journey mirrors a wider industry pattern, but not all companies are moving in lockstep. Microsoft, for example, has embraced defense work more openly. In 2021, it signed an Army contract worth up to $21.9 billion to supply Integrated Visual Augmentation System (IVAS) augmented reality headsets. The project, based on HoloLens, faced criticism over reliability and ethical risks, but Microsoft defended it as “enhancing soldier safety.” The company maintains an internal AI ethics board, but its recommendations are advisory. The final call rests with business and legal teams.
Amazon takes a more compartmentalized approach. AWS dominates the government cloud market, with over 60% of federal cloud contracts, including work for ICE and Customs and Border Protection. But Amazon’s consumer AI work, such as its Alexa division, remains largely separate from defense projects. That separation allows the company to avoid the internal backlash Google faced. Still, AWS’s SageMaker platform is used by defense contractors for machine learning, meaning Amazon’s tools are indirectly embedded in military systems.
Then there’s Palantir, which built its entire business on government intelligence. Its AI platform, AIP, is already deployed across U.S. combatant commands for predictive targeting and supply chain modeling. Unlike Google, Palantir never promised to stay out of weapons systems. Its stance is simpler: the U.S. government should have the best technology, full stop. That clarity attracts defense clients — and sharp criticism from digital rights groups.
Google’s path is distinct. It began with resistance, then compromise, and now full participation. That arc gives it more internal tension than its peers. But it also gives the company political cover — it didn’t rush in. It was “asked back.” That narrative matters. It turns a reversal into a return, a concession into a duty.
Sources: TechRadar, Politico