According to an exclusive report from Axios, the Pentagon is considering ending its partnership with Anthropic because the company insists on restrictions on military use of its Claude models, refusing to permit large-scale surveillance or fully autonomous weapons. Notably, OpenAI, Google, and xAI have all signaled they can work with the Pentagon's "all lawful purposes" clause, making Anthropic the only one of the four major AI labs to stand firm.
Citing a senior government official, Axios reports that the Pentagon is urging the four top AI labs (OpenAI, Google, xAI, Anthropic) to allow military use of their tools for “all lawful purposes,” covering the most sensitive areas such as weapons development, intelligence gathering, and battlefield operations.
This demand stems from the Department of Defense's AI Strategy Memorandum released on January 9 this year. The document explicitly instructs that the "all lawful purposes" clause be incorporated into all AI procurement contracts within 180 days, meaning the standard for military use of AI would align with the standard applied to any other use of force, with no separate requirement for "meaningful human control."
However, after months of difficult negotiations, Anthropic has yet to accept these terms, and the Pentagon is “getting tired of it.” The official stated:
Anything is possible, including reducing cooperation with Anthropic or even ending the partnership altogether. But if we believe that’s the right move, we must find suitable replacements for them.
In response to Pentagon pressure, Anthropic insists on two non-negotiable red lines: its models must not be used for large-scale surveillance, and they must not power fully autonomous weapons.
The senior official acknowledged considerable ambiguity over which use cases fall inside these two prohibited areas and which do not, and said that negotiating each specific application with Anthropic case by case, or having Claude refuse certain tasks in practice, is "not feasible."
It's worth noting that last summer Anthropic signed a two-year prototype contract capped at $200 million, which made Claude the first commercial AI model authorized to run on the Pentagon's classified networks, where it handles some of the most sensitive tasks, including weapons testing and real-time combat communications.
The trigger for this contractual crisis can be traced back to an earlier incident this month. According to Axios on February 13, during a U.S. military raid to arrest Venezuelan President Nicolás Maduro, Claude was deployed via Palantir’s platform to process real-time intelligence data.
The report states that Anthropic executives, upon learning of this, proactively contacted Palantir to ask whether Claude had been used in the raid, "implying they might disapprove of such use, since the operation involved kinetic strikes" (the use of physical, lethal force).
The outreach reportedly caused serious concern within the Pentagon.
An Anthropic spokesperson outright denied that the company had discussed such military operations with the Department of Defense.
In stark contrast, the other three AI labs have been far more accommodating of the Pentagon's demands. At least one of the three is understood to have fully accepted the "all lawful purposes" clause, while the other two have shown far more flexibility than Anthropic. That leaves Anthropic as the only player still holding to its safety boundaries in this military AI race, and also the one most at risk of being marginalized.
The ultimate extension of AI products is to serve the state, and military application is an edge none of these companies can ultimately avoid.