According to The Wall Street Journal, U.S. Central Command (CENTCOM) continued to use Anthropic’s Claude AI during the Iran airstrike operations for intelligence analysis, target identification, and battlefield simulations, even hours after Trump signed an executive order banning Anthropic. The incident shows how deeply AI has become embedded in defense infrastructure: even a presidential ban cannot cut it off overnight. Anthropic was pushed out of Pentagon contracts for refusing to lift its restrictions on autonomous weapons and mass surveillance, and OpenAI quickly stepped in.
(Background: Trump plans to ban Anthropic entirely! Refuses to modify Claude’s “kill switch,” while OpenAI surprisingly supports them.)
(Additional context: Pentagon confronts Anthropic! Fully opens Claude for military use—“or else, contract termination.”)
Last Friday, even as the Trump administration ordered a complete shutdown of Anthropic’s technology and Defense Secretary Pete Hegseth designated the company a “supply chain risk,” U.S. military operations against Iran continued to rely on Claude AI. This apparent contradiction reveals an unsettling reality: AI has infiltrated military systems beyond the reach of any immediate executive order.
According to sources cited by The Wall Street Journal, during the Iran airstrike operation codenamed “Operation Epic Fury,” CENTCOM continued to use Claude for critical tasks—including intelligence analysis, target identification, and battlefield scenario simulations.
Anthropic signed a two-year prototype contract last summer with a cap of $200 million, in partnership with Palantir and Amazon Web Services. Claude became the first commercial AI model authorized to operate on Pentagon classified networks, used for weapon testing and real-time battlefield communication. Reports indicate the system also supported operations in capturing Venezuelan President Maduro earlier this year.
The core of the conflict: the Pentagon demanded that Anthropic remove its usage restrictions so Claude could be used for “all lawful purposes,” but CEO Dario Amodei refused to compromise, insisting on two ethical red lines: Claude must not be used for mass surveillance of U.S. citizens, nor to drive fully autonomous weapons systems.
Amodei stated that the company opposes AI being used for “mass domestic surveillance” and “fully autonomous weapons,” emphasizing that military decisions should remain under human control, not algorithmic judgment. In a statement, he said, “We cannot in good conscience agree to their demands.”
Defense Secretary Pete Hegseth immediately listed Anthropic as a “supply chain risk,” and Trump directly ordered all federal agencies to “immediately cease using” Anthropic technology—though the Department of Defense and other key agencies have a six-month transition period. Anthropic announced it will challenge this designation in court, claiming it is “without legal basis,” and warned that such actions set a dangerous precedent for “any American company negotiating with the government.”
Just hours after Trump announced the ban on Anthropic, OpenAI announced a deal with the Department of Defense to deploy its AI technology on classified military networks.
Notably, during the controversy, OpenAI CEO Sam Altman publicly defended Anthropic in a CNBC interview, calling the company “trustworthy in security.” Even so, OpenAI ultimately took over the military contract Anthropic was forced to relinquish. This subtle interplay among AI giants reflects Silicon Valley’s difficult balancing act between commercial interests and ethical commitments.
The most thought-provoking aspect of this incident isn’t the political tug-of-war between Trump and Anthropic, but a deeper reality: once AI systems are embedded in every layer of the military, from intelligence analysis to target engagement, an administrative order to “pull the plug” cannot be carried out quickly.
For the crypto and Web3 communities, this case offers a warning: whether AI or blockchain, when technology enters the core of government and defense systems, the ideal of decentralization must face the reality of “state will.” Anthropic’s experience shows that upholding ethical standards in technology may come at the cost of losing major clients.