Google has made a dangerous U-turn on military AI

The End of Google’s ‘Don’t Be Evil’ Era: AI, Ethics, and the Military Machine

LONDON: The days of Google’s famous “Don’t Be Evil” mantra are well and truly over.

In 2018, the company softened its guiding principle to “Do The Right Thing.” Now its parent company, Alphabet, has taken an even bigger step back from ethical commitments, quietly removing a crucial pledge: its vow not to develop artificial intelligence for weapons or surveillance.

This week, Google erased that promise from its “Responsible AI” principles. Rather than defend a clear ethical stance, the company’s AI chief, Demis Hassabis, framed the change as an inevitable evolution, writing in a blog post that AI “is becoming as pervasive as mobile phones” and has “evolved rapidly.”

AI on the Battlefield: A Dangerous Precedent

But should ethics “evolve” to match business opportunities? That’s a dangerous assumption. Hassabis argues that the world has grown more complex, yet abandoning an ethical stance on AI’s role in warfare could lead to catastrophic, unintended consequences.

With AI-driven systems operating at machine speed, military conflicts could escalate before human diplomats even have time to react. Automated decision-making in warfare could make conflicts deadlier, not cleaner, despite the illusion of precision. AI is still fallible: its mistakes could cost civilian lives, and the appeal of keeping human hands off the trigger could tempt military leaders to lean ever more heavily on automated warfare.

The fundamental concern isn’t just AI-enhanced weaponry—it’s the shift in decision-making power. Unlike previous technologies that simply made militaries more efficient, AI threatens to replace human judgment with algorithms when deciding who lives and who dies.

From Ethical Champion to Military Partner

Perhaps the most unsettling part of Google’s reversal is its hypocrisy. In 2018, the company’s leadership, including Hassabis, proudly signed a pledge never to work on lethal autonomous weapons. More than 2,400 AI researchers and employees joined that commitment. Today, that promise has vanished.

William Fitzgerald, a former Google policy team member and co-founder of the Worker Agency, recalls how Google resisted military entanglements in 2018. When the company partnered with the U.S. Department of Defense on Project Maven—a program to enhance drone footage analysis with AI—employees fought back. Some 4,000 workers signed a petition stating, “Google should not be in the business of war,” and a dozen resigned in protest. The company eventually backed down and didn’t renew the contract.

But Fitzgerald now sees that moment as an exception rather than a rule. Since then, the industry has shifted. OpenAI has partnered with defense contractor Anduril Industries. Anthropic, which markets itself as a safety-first AI lab, recently joined forces with Palantir Technologies to offer its AI services to defense clients.

Google, which once tried to build oversight for its own AI work, has dismantled its guardrails. It dissolved an external AI ethics board in 2019 and pushed out two of its most prominent AI ethics leaders in 2020 and 2021. The company, and Silicon Valley at large, has drifted so far from those original principles that they are no longer visible in the rearview mirror.

What Needs to Happen Now?

With Google and other AI giants caving to military interests, the responsibility for ethical oversight now falls on global leaders. Policymakers meeting next week must push for legally binding regulations before the political and financial pressures make such measures impossible.

The solutions aren’t complex:

  • Human oversight should be mandatory for all AI military systems.
  • Fully autonomous weapons must be banned—no AI should decide to take a human life without human approval.
  • AI systems should be auditable, ensuring transparency and accountability.

One promising framework comes from the Future of Life Institute, a think tank once funded by Elon Musk and led by MIT physicist Max Tegmark. It proposes regulating AI weapons like nuclear technology—requiring clear, verifiable safety measures before deployment.

Governments should also consider establishing an international AI oversight body, akin to the International Atomic Energy Agency, to enforce these safety standards. Companies and nations that violate them must face real consequences, including sanctions.

The Final Warning

Google’s reversal is a stark reminder: corporate ethics crumble when profits and political power are on the line. The era of self-regulation in AI is over, and if we don’t act now, AI-driven warfare may spiral out of control.

There is still time to set global rules and safeguards. But if we fail, the consequences could be irreversible. AI’s darkest risks aren’t science fiction—they are already knocking at our door.