Microsoft Takes on the Pentagon in Court to Protect Anthropic and the Future of Responsible AI

by admin477351
Picture Credit: Rawpixel (Public Domain)

One of the world’s most powerful technology companies, Microsoft, has entered the courtroom to defend Anthropic against a sweeping Pentagon designation that the AI firm says threatens its very existence as a government contractor. Filing an amicus brief in a San Francisco federal court, Microsoft argued that a temporary restraining order was essential to prevent the collapse of critical supply chains that depend on Anthropic’s technology. The brief is part of a broader wave of industry support that also includes Amazon, Google, Apple, and OpenAI.

The conflict originated when Anthropic refused to sign a $200 million military contract without safeguards preventing the use of its AI for mass surveillance of Americans or for powering autonomous lethal weapons. The Pentagon, led by Defense Secretary Pete Hegseth, rejected these conditions and subsequently labeled Anthropic a supply-chain risk. This designation, unprecedented for a US company, has already caused the cancellation of several of Anthropic’s government contracts.

Microsoft’s deep integration of Anthropic’s AI tools into the military systems it provides to the federal government makes the company a natural and invested party in this dispute. Microsoft participates in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract and holds numerous additional service agreements with government agencies. The company has stated publicly that national security and responsible AI governance are goals the country must pursue together rather than treat as opposites.

Anthropic filed lawsuits in both a California federal court and the D.C. Circuit, arguing that the supply-chain risk label amounts to unconstitutional ideological punishment. The company contended that its ethical limits on Claude’s use in warfare are rooted in genuine technical uncertainty about the model’s safety in such environments; court filings further indicated that Anthropic does not currently consider Claude reliable or safe enough for lethal autonomous operations.

Congress is now adding to the pressure with formal inquiries into whether AI was involved in a US military strike in Iran that reportedly killed more than 175 civilians at a school. The questions being raised on Capitol Hill mirror those at the heart of Anthropic’s lawsuit: who decides how AI is used in warfare, and what ethical guardrails must exist. Microsoft’s involvement in this case underscores just how consequential the answers to those questions will be for the entire technology industry.