When the Trump administration severed ties with Anthropic over the AI company’s insistence on ethical safeguards, OpenAI wasted no time positioning itself as the government’s preferred alternative. CEO Sam Altman announced a new deal with the Department of Defense just hours after Anthropic was effectively blacklisted from federal use.
The dispute at the heart of the crisis involves two specific ethical boundaries Anthropic refused to cross: allowing its AI to be used in autonomous weapons systems that can kill without human decision-making, and permitting mass domestic surveillance. Pentagon officials, who wanted broader access to AI capabilities, viewed these restrictions as unacceptable obstacles.
Trump escalated the matter dramatically, publicly condemning Anthropic on Truth Social and ordering every federal agency to stop using Claude immediately. He framed Anthropic’s ethical stance as a politically motivated attempt to obstruct the military, calling the company’s leadership “leftwing nut jobs.”
Sam Altman sought to reassure both the government and his own employees, claiming that OpenAI’s deal with the Pentagon includes explicit protections against the exact practices Anthropic had fought against. In an internal memo, he described autonomous weapons and mass surveillance as OpenAI’s “main red lines” and said the contract reflects those values.
The episode has put the entire AI industry on notice. Nearly 500 employees from OpenAI and Google signed an open letter in solidarity with Anthropic, warning that the Pentagon was attempting to pit companies against one another. How that internal pressure shapes OpenAI's next moves remains to be seen.