Anthropic CEO slams OpenAI: Pentagon contract claims are "all lies," Altman is posing as a peace envoy

動區BlockTempo

A leaked internal memo from Anthropic CEO Dario Amodei calls OpenAI CEO Sam Altman's statements "complete lies," as the two AI giants clash over military contracts.
(Background: Is Sam Altman Despised? After being blacklisted by the Pentagon, Anthropic shifts to support OpenAI securing U.S. Defense Department contracts.)
(Additional context: The Wall Street Journal reports: Trump uses Claude AI to target Iran’s Khamenei, with OpenAI taking full control of Pentagon systems.)

Table of Contents

  • A Final Ultimatum Sparks Conflict
  • OpenAI’s “Pragmatic” Approach
  • The Reality Revealed by the Memo
  • Immediate User and Market Reactions


An internal memo written by Anthropic CEO Dario Amodei to employees has leaked, in which he directly accuses OpenAI CEO Sam Altman of “complete lies,” and dismisses the latest Pentagon deal as “security theater.”

The two most influential AI companies globally are now openly at odds over a matter that could determine the future direction of AI for the next decade.

A Final Ultimatum Sparks Conflict

The clash traces back to Anthropic's original $200 million military contract. Through a partnership with Palantir, Anthropic's Claude AI has been deployed on classified military networks.

However, in late February, the situation rapidly escalated. The Pentagon issued a final warning to Anthropic: Remove all AI usage restrictions and allow “any lawful purpose” unrestricted access, or face contract termination and blacklisting by February 27.

CEO Amodei publicly refused, stating they “cannot in good conscience” accept these terms, and drew two red lines they would not cross:

  • First, a ban on autonomous weapons systems: AI must not make final targeting decisions on the battlefield.
  • Second, a ban on large-scale domestic surveillance: no building of mass surveillance tools targeting U.S. citizens.

Related reading: Trump plans to ban Anthropic entirely! Refuses to modify Claude’s “kill switch.”

The retaliation was swift and fierce. Within hours of Anthropic's refusal, the Trump administration blacklisted the company as a "supply chain risk" (a label usually reserved for foreign adversaries), effectively barring it from all federal contracts, and branded the company "radical leftists, woke activists, and national security threats."

OpenAI’s “Pragmatic” Approach

Just hours after Anthropic’s blacklisting on February 28, Altman announced that OpenAI had reached an agreement with the Department of War. In an official blog post, OpenAI stated that the contract includes the same “red line” protections as Anthropic: restrictions on autonomous weapons, large-scale domestic surveillance, and key automation decisions.

But the devil is in the details. OpenAI’s contract allows “all lawful purposes,” unlike Anthropic’s explicit bans. OpenAI explained: “In our interactions, the Department of War clearly stated that large-scale domestic surveillance is illegal and not planned.”

Critics immediately pointed out the problem: laws change. Actions deemed illegal today may become permissible tomorrow, making the “lawful purpose” clause in the contract inherently fragile.

The Reality Revealed by the Memo

In the leaked memo, Amodei offers a blunt assessment of the public relations battle:

I believe this kind of public and media spin is ineffective; most people view OpenAI's dealings with the Department of War with suspicion, and see us as the heroes.

He also directly criticizes Altman’s motives:

The main reason they accepted and we refused is that they care about appeasing employees, while we truly care about preventing misuse.

According to TechCrunch, Amodei further accuses Altman of “posing as a peacemaker and dealmaker.” Facing overwhelming criticism, Altman admitted at an all-hands meeting that the decision would have serious brand consequences, but defended it as a complex yet correct choice for national security.

Immediate User and Market Reactions

As the controversy unfolds, users are voting with their downloads: OpenAI's ChatGPT downloads have surged recently, while Anthropic's Claude app downloads have also spiked significantly.

Anthropic chose to refuse and bear the consequences—losing federal contracts and government relations; OpenAI chose to cooperate with restrictions—at the cost of user trust and brand reputation. Both paths have their logic, but also their costs.

What’s truly concerning is that this debate exposes a deeper issue: in an era of rapid militarization of AI, the gap between what is “legal” and what is “right” is widening.
