Google and OpenAI Employees Back Anthropic in Legal Challenge Against Department of War

Major AI competitors are uniting to support Anthropic's lawsuit against the Pentagon's controversial supply chain risk designation.

Decoded

Published Mar 10, 2026

3 min read

Image by Wesley Tingey

Google and OpenAI have signaled their support for Anthropic as the AI startup prepares to sue the U.S. Department of War. This collective front follows the official designation of Anthropic as a national security supply chain risk after the company refused to lift ethical restrictions on military AI use. The tech industry appears increasingly concerned about the government's use of security designations as leverage in commercial negotiations.

The dispute centers on Anthropic's refusal to allow its Claude models to be used for mass domestic surveillance or fully autonomous lethal weaponry. In response, Secretary of War Pete Hegseth and the President ordered a federal purge of Anthropic technology, claiming the company's red lines compromised national readiness. This led to an official letter on March 4 confirming the supply chain risk status.

CEO Dario Amodei clarified that while the administration's rhetoric is broad, the underlying statute, 10 U.S.C. § 3252, is legally narrow. He argues the law requires the government to use the least restrictive means available, meaning the ban should apply only to direct Pentagon contracts rather than to all of the company's business relationships. Anthropic maintains that the action is not legally sound and sees no choice but to challenge it in court.

Adding to the tension, Amodei recently apologized for a leaked internal post that criticized the administration and rival OpenAI. While that post was written in the heat of the initial announcement, Amodei now emphasizes that Anthropic shares far more common ground with the Department of War than points of difference. The company remains committed to the shared premise of defending US national security while preserving its core safety exceptions.

Despite the legal friction, Anthropic has committed to supporting frontline warfighters at a nominal cost during a transition period, ensuring that critical national security operations involving intelligence analysis and operational planning are not abruptly disrupted by the political standoff. The company will continue to provide models and engineering support for as long as it is permitted to do so.


Decoded Take


This unprecedented alliance between fierce rivals Google, OpenAI, and Anthropic underscores a growing fear that the federal government is attempting to override private AI ethics policies. By designating a domestic firm as a supply chain risk over a policy disagreement, the Department of War is testing its ability to force private entities into compliance with military mandates. If the lawsuit fails, the "least restrictive means" requirement of 10 U.S.C. § 3252 may be effectively nullified, allowing the executive branch to blacklist any tech company that refuses to modify its core safety principles for government use. This case will likely define the boundary between corporate autonomy and national security mandates for decades to come.
