Anthropic defies Pentagon ultimatum on use of AI tech: "These threats do not change our position"
San Francisco, California - AI giant Anthropic said Thursday it would not give the Defense Department unrestricted use of its technology despite being pressured to comply by the Pentagon.
"These threats do not change our position: we cannot in good conscience accede to their request," Anthropic chief executive Dario Amodei said in a statement.
Defense Secretary Pete Hegseth had given the company until Friday to agree to unconditional military use of its technology, even if it violates ethical standards, or face being forced to comply under emergency federal powers.
Amodei said Anthropic models have been deployed by the Pentagon and intelligence agencies to defend the country, but that the company draws an ethical line at mass surveillance of US citizens and fully autonomous weapons.
"Using these systems for mass domestic surveillance is incompatible with democratic values," Amodei said.
And leading AI systems are not yet reliable enough to be trusted to power deadly weapons without a human in ultimate control, he added.
"We will not knowingly provide a product that puts America's warfighters and civilians at risk."
Trump administration threatens use of Cold War-era law
After meeting with Anthropic earlier this week, Hegseth delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01 PM Friday or face being forced to comply under the Defense Production Act.
The Cold War-era law, last used during the Covid pandemic, grants the federal government sweeping powers to compel private industry to prioritize national security needs.
The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company's reputation and its ability to work with the US government.
A senior Pentagon official at the time pushed back on the company's concerns, insisting the Defense Department had always operated within the law.
"Legality is the Pentagon's responsibility as the end user," the official said, adding that the department "has only given out lawful orders."
Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute.
Other AI giants fall in line
The Pentagon confirmed that Elon Musk's Grok system had been cleared for use in a classified setting despite its frequent scandals, while other contracted companies – OpenAI and Google – were described as close to similar clearances, piling competitive pressure on Anthropic to fall in line.
Former OpenAI employees founded Anthropic in 2021 on the premise that AI development should prioritize safety.
"Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei said.
"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
Cover photo: Collage: REUTERS

