Anthropic CEO Amodei’s Internal Memo Leaks, Directly Calling OpenAI CEO Altman’s Statements “Complete Lies,” as the Two AI Giants Clash Over Military Contracts.
(Background: Is Sam Altman Despised? After being blacklisted by the Pentagon, Anthropic shifts to support OpenAI securing U.S. Defense Department contracts.)
(Additional context: The Wall Street Journal reports: Trump uses Claude AI to target Iran’s Khamenei, with OpenAI taking full control of Pentagon systems.)
An internal memo written by Anthropic CEO Dario Amodei to employees has leaked, in which he directly accuses OpenAI CEO Sam Altman of telling “complete lies” and dismisses OpenAI’s latest Pentagon deal as “security theater.”
The two most influential AI companies globally are now openly at odds over a matter that could determine the future direction of AI for the next decade.
The trigger for this clash stems from Anthropic’s original $200 million military contract. Through a partnership with Palantir, Anthropic’s Claude AI has been deployed on classified military networks.
However, in late February, the situation rapidly escalated. The Pentagon issued a final warning to Anthropic: Remove all AI usage restrictions and allow “any lawful purpose” unrestricted access, or face contract termination and blacklisting by February 27.
CEO Amodei publicly refused, stating that the company “cannot in good conscience” accept these terms, and drew two red lines it would not cross.
Related reading: Trump plans to ban Anthropic entirely! Refuses to modify Claude’s “kill switch.”
The retaliation was swift and fierce. Within hours of the refusal, the Trump administration blacklisted the company as a “supply chain risk” (a label usually reserved for foreign adversaries), effectively barring it from all federal contracts, and branded its leadership “radical leftists, woke activists, and national security threats.”
Just hours after Anthropic’s blacklisting on February 28, Altman announced that OpenAI had reached an agreement with the Department of War. In an official blog post, OpenAI stated that the contract includes the same “red line” protections as Anthropic: restrictions on autonomous weapons, large-scale domestic surveillance, and key automation decisions.
But the devil is in the details. OpenAI’s contract allows “all lawful purposes,” unlike Anthropic’s explicit bans. OpenAI explained: “In our interactions, the Department of War clearly stated that large-scale domestic surveillance is illegal and not planned.”
Critics immediately pointed out the problem: laws change. Actions deemed illegal today may become permissible tomorrow, which makes a safeguard anchored to “lawful purposes” inherently fragile.
In the leaked memo, Amodei offers a blunt assessment of the public relations battle:
I don’t believe this kind of public and media manipulation works; most people view OpenAI’s dealings with the Department of War with suspicion and see us as the heroes.
He also directly criticizes Altman’s motives:
The main reason they accept and we refuse is that they care about appeasing employees, while we truly care about preventing misuse.
According to TechCrunch, Amodei further accuses Altman of “posing as a peacemaker and dealmaker.” Facing overwhelming criticism, Altman admitted at an all-hands meeting that the decision would have serious brand consequences, but defended it as a complex yet correct choice for national security.
As the controversy unfolds, users are voting with their downloads. OpenAI’s ChatGPT downloads have surged recently, while Anthropic’s Claude app downloads have also spiked sharply.
Anthropic chose to refuse and bear the consequences, losing its federal contracts and government relationships; OpenAI chose to cooperate under the looser terms, at the cost of user trust and brand reputation. Both paths have their logic, and both have their costs.
What’s truly concerning is the deeper issue this debate exposes: in an era of rapidly militarizing AI, the gap between what is “legal” and what is “right” is widening.