The Pentagon confronts Anthropic! Open Claude fully for military use "or the partnership ends"

動區BlockTempo

According to an exclusive report by Axios, the Pentagon is considering ending its partnership with Anthropic because the AI company insists on restricting military use of its Claude model, refusing to allow applications in large-scale surveillance or fully autonomous weapons. Notably, OpenAI, Google, and xAI have all agreed to the Pentagon’s “all lawful purposes” clause, making Anthropic the only one of the four major AI labs to stand firm.

(Background recap: Elon Musk’s new big project “Lunar Base Alpha”: building an AI superfactory on the Moon, launching it toward the solar system with giant slingshots)

(Additional background: AI panic and unemployment! Microsoft executives warn that most white-collar workers will be replaced by automation within “the next 12-18 months”)

Table of Contents

  • Anthropic’s Two Red Lines: Ban on Large-Scale Surveillance and Autonomous Weapons
  • Maduro’s Raid Sparks the Crisis
  • The Other Three Giants “Have No Objections”

Citing a senior government official, Axios reports that the Pentagon is urging the four top AI labs (OpenAI, Google, xAI, Anthropic) to allow military use of their tools for “all lawful purposes,” covering the most sensitive areas such as weapons development, intelligence gathering, and battlefield operations.

This demand stems from the Department of Defense’s AI Strategy Memorandum released on January 9 this year. The document explicitly instructs that the “all lawful purposes” clause be incorporated into all AI procurement contracts within 180 days, meaning AI tools would be held to the same use standards as other military capabilities, with no separate requirement for “meaningful human control.”

However, after months of difficult negotiations, Anthropic has yet to accept these terms, and the Pentagon is “getting tired of it.” The official stated:

Anything is possible, including reducing cooperation with Anthropic or even ending the partnership altogether. But if we believe that’s the right move, we must find suitable replacements for them.

Anthropic’s Two Red Lines: Ban on Large-Scale Surveillance and Autonomous Weapons

In response to Pentagon pressure, Anthropic insists on two non-negotiable bottom lines:

  • No large-scale surveillance of U.S. citizens
  • No fully autonomous weapons systems (i.e., lethal weapons that operate without human intervention)

The senior official admitted that there is considerable ambiguity about which use cases fall inside these two prohibited areas and which do not. In the official’s view, negotiating each specific application case by case, or having Claude refuse certain tasks in practice, is “not feasible.”

It’s worth noting that last summer, Anthropic signed a two-year prototype contract capped at $200 million, making Claude the first commercial AI model authorized to operate on the Pentagon’s classified networks, where it handles some of the most sensitive tasks, including weapons testing and real-time combat communications.

Maduro’s Raid Sparks the Crisis

The trigger for this contractual crisis can be traced back to an earlier incident this month. According to Axios on February 13, during a U.S. military raid to arrest Venezuelan President Nicolás Maduro, Claude was deployed via Palantir’s platform to process real-time intelligence data.

The report states that upon learning of this, Anthropic executives proactively contacted Palantir to ask whether Claude had been used in the raid, “implying they might disapprove of such use, as the operation involved kinetic strikes (non-explosive penetrating weapons).”

The inquiry caused serious concern within the Pentagon.

An Anthropic spokesperson outright denied that the company had discussed such military operations with the Department of Defense.

The Other Three Giants “Have No Objections”

In stark contrast, the other three AI labs have shown greater flexibility in responding to the Pentagon’s demands:

  • OpenAI (ChatGPT) has agreed to lift safety restrictions applicable to general users when serving the Pentagon
  • Google (Gemini) is actively cooperating
  • xAI (Grok) also demonstrates willingness to collaborate

It is understood that at least one of the three has fully accepted the “all lawful purposes” clause, while the other two have shown far more flexibility than Anthropic. This leaves Anthropic as the only player still holding to safety boundaries in the military AI race, but also puts it at risk of being marginalized.

For AI products, serving the state is the ultimate extension, and military applications are a sharp edge that cannot be avoided.
