A leaked internal memo from Anthropic CEO Dario Amodei calls OpenAI CEO Sam Altman's claims "a complete lie," as the two AI giants clash over military contracts.
(Background: Is Sam Altman despicable? After Anthropic was recently blacklisted by the Pentagon, he called on OpenAI to secure US Department of Defense contracts)
(Additional context: The Wall Street Journal reports: Trump’s targeting of Iran’s Khamenei relies on Claude AI positioning, with OpenAI taking full control of Pentagon systems)
The leaked internal memo, sent by Amodei to staff, directly criticizes Altman, calling his statements "a complete lie," and dismisses OpenAI's latest Pentagon deal as "security theater."
The two most influential AI companies globally are tearing into each other over a matter that could determine the future direction of AI for the next decade.
The trigger for this clash stems from Anthropic’s original $200 million military contract. Through a partnership with Palantir, Anthropic’s Claude AI has been deployed on classified military networks.
However, in late February the situation rapidly escalated. The Pentagon issued Anthropic an ultimatum: remove all AI usage restrictions and allow unrestricted use for "any lawful purpose," or face contract termination and blacklisting by February 27.
Amodei publicly refused, stating he "cannot in good conscience" accept these terms, and drew two red lines the company would not cross.
Further reading: Trump aims to completely ban Anthropic! Refuses to modify Claude’s “killing” restrictions
The retaliation was swift and fierce. Hours after the refusal, the Trump administration listed the company as a “supply chain risk” (a label usually used for foreign adversaries), effectively banning it from all federal contracts, and branding it as “radical leftist, woke, and a national security threat.”
Just hours after Anthropic’s blacklisting on February 28, Altman announced that OpenAI had reached an agreement with the Department of Defense. In an official blog post, OpenAI stated the contract includes the same “red line” protections: restrictions on autonomous weapons, domestic mass surveillance, and key automation decisions.
But the devil is in the details. OpenAI’s contract allows “all lawful purposes,” unlike Anthropic’s explicit bans. OpenAI explained: “In our interactions, the Department of Defense clearly stated that large-scale domestic surveillance is illegal and has no plans for it.”
Critics immediately pointed out the problem: laws change. Actions deemed illegal today may become permissible tomorrow, making the “lawful purpose” clause in the contract inherently fragile.
In the leaked memo, Amodei offers a blunt assessment of the public relations war:
I believe attempts to manipulate public opinion are ineffective; most see OpenAI’s dealings with the Department of Defense as suspicious, and view us as heroes.
He also directly criticizes Altman’s motives:
The main reason they accept and we reject is that they care about appeasing employees, while we truly care about preventing misuse.
According to TechCrunch, Amodei further accuses Altman of “posing as a peacemaker and dealmaker.” Facing overwhelming criticism, Altman admitted at an all-hands meeting that this decision could have severe brand consequences, but defended it as a complex yet correct choice for national security.
As the controversy unfolds, users are voting with their downloads. Downloads of OpenAI's ChatGPT have recently surged; meanwhile, downloads of Anthropic's Claude app have also increased significantly.
Anthropic chose to refuse and bear the consequences, losing its federal contracts and government relationships; OpenAI chose to cooperate under the Pentagon's terms, risking user trust and brand reputation. Both choices are internally logical, and both come with costs.
What’s truly concerning is that this debate exposes a deeper issue: in an era of rapid militarization of AI, the gap between “legal” and “right” is widening.