As AI ethics clash fiercely with military demands, Anthropic CEO Dario Amodei has recently resumed negotiations with the U.S. Pentagon, aiming to save a multi-million dollar AI partnership contract while maintaining the company’s “red lines” and avoiding outright exclusion from the defense supply chain.
(Background: Anthropic CEO blasts OpenAI: “All Pentagon contracts are lies, Altman pretends to be a peace envoy”)
(Additional context: Trump plans to ban Anthropic entirely! Refuses to modify Claude’s “killing restrictions,” while unexpectedly supporting OpenAI)
American AI company Anthropic's CEO Dario Amodei has recently restarted negotiations with the U.S. Department of Defense (Pentagon), making a final effort to settle terms for military use of its AI model Claude. The move follows last week's breakdown in talks and the Pentagon's designation of Anthropic as a "supply chain risk"; Amodei aims to keep Anthropic from being excluded from military collaborations while upholding the company's core ethical principles.
Since last year, Anthropic has signed a pilot contract worth up to $200 million with the Pentagon, making Claude the first advanced AI model authorized for deployment on classified networks. However, the Trump administration later demanded changes, insisting that AI be permitted for “all lawful uses” without restrictions.
Anthropic has maintained two "red lines": prohibiting Claude from being used for large-scale domestic surveillance of U.S. citizens, and banning fully autonomous weapons systems, meaning those that can select and attack targets without human intervention. The company argues that such uses could threaten democratic values and that current AI technology is not yet reliable enough for them. However, Defense Secretary Pete Hegseth issued an ultimatum demanding concessions by a deadline, warning that otherwise the contract would be terminated and penalties enforced under the relevant regulations; talks broke down last Friday.
Sources reveal that Amodei is now engaging directly with Emil Michael, the Under Secretary of Defense for Research and Engineering, seeking a compromise agreement that would allow the U.S. military to continue using Claude while significantly reducing the risk of Anthropic being officially blacklisted.
If successful, the new contract could ease tensions and reshape the AI industry landscape; OpenAI, for example, has also reached an agreement with the Pentagon but is adjusting its terms to include similar restrictions. Anthropic stresses that it is willing to cooperate but refuses to abandon its core safeguards; if no consensus can be reached, the company says it will assist in a smooth transition to other suppliers to avoid disrupting military operations.
This controversy highlights the tug-of-war between AI companies and the military over ethics, safety, and national security. Anthropic, which brands itself as a "responsible AI" firm, refuses to make unconditional concessions, reflecting some Silicon Valley companies' deep misgivings about military AI applications; the Pentagon, for its part, emphasizes battlefield flexibility and technological advantage.
If negotiations succeed, they could set a new precedent for military AI applications; if not, industry fragmentation may intensify. Experts anticipate that official statements or new developments may soon surface.