The Wall Street Journal reports: Trump's strike on Iran's Khamenei relied on Claude AI, as OpenAI takes full control of Pentagon systems

BlockTempo

According to The Wall Street Journal, the U.S. Central Command (CENTCOM) used Anthropic’s Claude AI system during the Iran airstrike operations, providing intelligence analysis, target identification, and battlefield simulations—just hours after Trump signed an executive order banning Anthropic. This incident highlights that AI has become deeply embedded in defense infrastructure, making it difficult to cut off even with presidential bans. Anthropic was expelled from Pentagon contracts for refusing to lift restrictions on autonomous weapons and mass surveillance, and OpenAI quickly stepped in.

(Background: Trump plans to ban Anthropic entirely after it refuses to modify Claude's "kill switch," while OpenAI surprisingly voices support for the company.)

(Additional context: The Pentagon confronts Anthropic, demanding Claude be fully opened for military use, "or else, contract termination.")

Table of Contents

  • The ban is signed, but Claude is still “working” on the battlefield
  • Anthropic sticks to two red lines, suffers consequences for “disobedience”
  • OpenAI swiftly takes over, shifting the AI arms race to new players
  • It’s too late to turn back: technology’s momentum defies bans

Last Friday, even as the Trump administration ordered a complete shutdown of Anthropic's technology and Defense Secretary Pete Hegseth designated the company a "supply chain risk," U.S. military operations in Iran continued to rely on Claude AI. This seemingly contradictory situation reveals an unsettling reality: AI's penetration of military systems has outpaced the reach of any immediate executive order.

The ban is signed, but Claude is still “working” on the battlefield

According to sources cited by The Wall Street Journal, during the Iran airstrike operation codenamed “Operation Epic Fury,” CENTCOM continued to use Claude for critical tasks—including intelligence analysis, target identification, and battlefield scenario simulations.

Anthropic signed a two-year prototype contract last summer with a cap of $200 million, in partnership with Palantir and Amazon Web Services. Claude became the first commercial AI model authorized to operate on Pentagon classified networks, used for weapon testing and real-time battlefield communication. Reports indicate the system also supported operations in capturing Venezuelan President Maduro earlier this year.

Anthropic sticks to two red lines, suffers consequences for “disobedience”

The core of the conflict: the Pentagon demanded that Anthropic remove its usage restrictions so Claude could be deployed for "all lawful purposes," but CEO Dario Amodei refused to compromise, insisting on two ethical red lines:

Claude must not be used for mass surveillance of U.S. citizens, nor to drive fully autonomous weapons systems.

Amodei stated that the company opposes AI being used for “mass domestic surveillance” and “fully autonomous weapons,” emphasizing that military decisions should remain under human control, not algorithmic judgment. In a statement, he said, “We cannot in good conscience agree to their demands.”

Defense Secretary Pete Hegseth immediately listed Anthropic as a “supply chain risk,” and Trump directly ordered all federal agencies to “immediately cease using” Anthropic technology—though the Department of Defense and other key agencies have a six-month transition period. Anthropic announced it will challenge this designation in court, claiming it is “without legal basis,” and warned that such actions set a dangerous precedent for “any American company negotiating with the government.”

OpenAI swiftly takes over, shifting the AI arms race to new players

Just hours after Trump announced the ban on Anthropic, OpenAI announced a deal with the Department of Defense to deploy its AI technology on classified military networks.

Notably, during the controversy, OpenAI CEO Sam Altman publicly backed Anthropic in a CNBC interview, calling the company "trustworthy on security." Even so, OpenAI ultimately took over the military contract that Anthropic was forced to relinquish. This subtle interplay among AI giants reflects the difficult balancing act in Silicon Valley between commercial interests and ethical commitments.

It’s too late to turn back: technology’s momentum defies bans

What’s most thought-provoking about this incident isn’t just the political tug-of-war between Trump and Anthropic, but a deeper reality: once AI systems are deeply embedded in every military aspect—from intelligence analysis to target engagement—administrative orders to “pull the plug” cannot be implemented quickly.

For the crypto and Web3 communities, this case offers a warning: whether AI or blockchain, when technology enters the core of government and defense systems, the ideal of decentralization must face the reality of “state will.” Anthropic’s experience shows that upholding ethical standards in technology may come at the cost of losing major clients.
