Microsoft has entered Anthropic’s high-profile legal fight against the Pentagon with a clear message: AI ethics and defense capability cannot be treated as separate concerns, and a company should not be punished for acting on that belief. Microsoft filed an amicus brief in a San Francisco federal court supporting Anthropic’s request for a temporary restraining order against the Pentagon’s supply-chain risk designation. Amazon, Google, Apple, and OpenAI have also backed Anthropic through their own supporting brief, making the case a focal point for the industry’s stance on responsible AI.
The dispute began when Anthropic refused to sign a $200 million contract that would have deployed its AI on classified military systems without any restrictions on its use for mass surveillance or autonomous weapons. After talks collapsed, Defense Secretary Pete Hegseth labeled the company a supply-chain risk, triggering the cancellation of Anthropic’s government contracts. The Pentagon’s technology chief later stated categorically that the agency would not renegotiate with Anthropic.
Microsoft’s brief is backed by concrete business interests: the company uses Anthropic’s AI in military systems and is a partner in the Pentagon’s $9 billion cloud computing contract, as well as several additional federal agreements worth billions more. The company argued that the technology industry and the government must find a path to pursue both technological excellence and responsible AI governance together. Microsoft’s public statement framed the issue in terms of shared national interest rather than corporate self-interest.
Anthropic’s lawsuits, filed in California and Washington, DC, argue that the supply-chain risk designation was unconstitutional retaliation for the company’s public advocacy of responsible AI development. The company’s court filings revealed that it lacks confidence in Claude’s ability to operate safely in lethal autonomous environments, which it said was the real basis for the restrictions it sought in the contract. Anthropic also noted that the designation had never before been applied to a US company.
Congress is also demanding answers about AI in military targeting following reports that a US strike in Iran killed more than 175 civilians at an elementary school. Lawmakers have formally asked whether AI tools were involved in selecting the target and what human review processes were followed. Together, the congressional inquiries and Anthropic’s lawsuits are creating a watershed moment for US policy on the role of artificial intelligence in warfare and national security.
