Mar 6, 2026
NewDecoded

Image by Anthropic
The Pentagon has formally designated Anthropic as a supply chain risk following a high-stakes standoff over the military use of its Claude AI model. This move comes after the company refused to remove safety restrictions that prohibited the technology from being used in autonomous weapons systems. Consequently, federal agencies and defense contractors must now purge Anthropic products from their workflows.
The conflict originated from a July 2025 contract where Claude became the first frontier model approved for classified networks. Anthropic insisted on an acceptable use policy that barred mass surveillance and lethal autonomous operations. The Pentagon sought to renegotiate these terms to allow use for all lawful purposes, but the parties failed to reach an agreement before a February 2026 deadline.
In the wake of this breakdown, the Department of Defense has pivoted toward rival OpenAI to fill the void. OpenAI has reportedly struck a deal that meets the Pentagon's requirements, granting the military broader latitude to deploy AI on classified networks. The shift leaves Anthropic sidelined from the most lucrative defense contracts in the artificial intelligence sector.
Secretary of Defense Pete Hegseth and President Donald Trump have invoked authorities typically reserved for foreign adversaries to enforce this ban. Under the Federal Acquisition Supply Chain Security Act, contractors are required to report any use of Anthropic technology within three business days. Failure to comply could lead to severe civil or criminal consequences for the defense industrial base.
Anthropic is expected to challenge this designation in court, arguing that a policy disagreement does not constitute a national security threat. The company maintains that its restrictions are essential for the safe development of artificial intelligence. However, the immediate removal from platforms like USAi.gov signals a difficult road ahead for the startup.
This designation marks a pivotal moment: the U.S. government is treating domestic policy non-compliance with the same severity as foreign espionage. By labeling a leading AI developer a supply chain risk over its ethical guardrails, the administration is imposing what amounts to a corporate death sentence, setting a chilling precedent for the industry. Companies now face a stark choice: align their ethical frameworks with military objectives or risk total exclusion from the federal marketplace.