
Microsoft Backs Anthropic With Court Filing as Pentagon’s AI Ethics War Threatens to Escalate Further


As the Pentagon’s AI ethics war with Anthropic threatens to escalate further, Microsoft has backed the AI company with a filing in San Francisco federal court supporting its request for a temporary restraining order against the supply-chain risk designation. The brief argued that allowing the designation to stand would cause serious harm to the technology networks supporting national defense. Amazon, Google, Apple, and OpenAI have also backed Anthropic, making this a comprehensive industry response to an escalating confrontation.
The confrontation has been escalating since Anthropic refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons during a $200 million contract negotiation. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and the Pentagon’s technology chief publicly ruled out renegotiation, signaling the government’s intent to hold firm. Anthropic filed two simultaneous lawsuits, in California and Washington, DC, challenging the designation.
Microsoft’s backing is grounded in its direct integration of Anthropic’s technology into federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. Additional agreements with government agencies worth several billion dollars more further deepen the company’s stake. Microsoft publicly argued that responsible AI governance and national security required collaboration between government and the technology sector.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. Anthropic noted that no US company had ever previously received this designation.
Congressional Democrats have separately pressed the Pentagon for answers about whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school. Their formal inquiries are adding legislative pressure to an already escalating confrontation. Together, Microsoft’s backing, the industry coalition, and congressional scrutiny are creating a formidable force that may ultimately prevent the Pentagon’s AI ethics war from escalating further.
