✨Pentagon Officially Designates Anthropic as a Supply Chain Risk: Dario Amodei Announces He Will Go to Court and Emphasizes He Will Continue National Security Support
The US Department of Defense (the Pentagon) has officially designated the artificial intelligence startup Anthropic a "supply chain risk." The decision brings to a head a months-long dispute over the security restrictions the company places on its Claude AI model. In a written statement dated March 5, 2026, the department said, "Anthropic leadership has been formally notified that the company and its products are assessed as a supply chain risk; the decision is effective immediately." The designation is a tool historically used against foreign companies acting against US interests (especially those with Chinese ties), and this is the first time it has been applied to an American company. The Pentagon's stated justification is that Anthropic's restrictions prevent the military from using Claude AI "for any legitimate purpose." Defense Department officials said, "The fundamental principle is that the military should be able to use technologies for all legitimate purposes. A vendor will not be allowed to interfere with the chain of command and put combatants at risk."
✨The disagreement stems from two critical red lines drawn by Anthropic: the company rejects the use of Claude both for mass domestic surveillance of American citizens and in fully autonomous unmanned weapon systems. CEO Dario Amodei had previously communicated these restrictions explicitly to Defense Secretary Pete Hegseth. The Pentagon argues that such restrictions jeopardize national security.
✨Anthropic CEO Dario Amodei confirmed the Pentagon's decision. In the company's official statement dated March 5, 2026, Amodei said they received the letter from the Department of War (as the Department of Defense is now styled) on March 4 and found the decision "legally unfounded." "We have no choice but to fight this action in court," Amodei stated, confirming that the company will file a legal challenge as previously announced. He also emphasized that Anthropic will continue to support national security operations: "To avoid disrupting our fighters, we will continue to provide our models at nominal cost and with engineering support for as long as permitted." The company noted that it already works actively with the Pentagon in areas such as intelligence analysis, modeling, operational planning, and cyber operations. Amodei also apologized for the tone of an internal memo leaked six days earlier, saying the text was outdated and did not reflect his considered views.

The decision comes after the Trump administration ordered federal agencies to halt the use of Anthropic products. The process, set in motion by Defense Secretary Pete Hegseth's statement on X, may now require all military contractors to sever commercial ties with Anthropic. Experts say the move could shift the balance of power between Silicon Valley and the government and chill innovation.

Anthropic argues that under 10 USC 3252 a supply chain risk designation must be the "least restrictive method," and maintains that the ruling is narrow in scope and will affect only direct contracts. The company says it remains committed to contributing to national security while the court proceedings run their course. This development once again highlights the tension between AI safety policies and defense needs; the dispute now appears headed for the courtroom.
The dispute stems from a contract worth approximately $200 million that Anthropic signed to use its Claude AI model in classified systems for the US military.
Anthropic set two key "red lines":
- That the AI not be used for mass surveillance of American citizens.
- That it not be used in **fully autonomous weapon systems** (weapons that make lethal decisions without human oversight).
The Pentagon, however, demanded unrestricted use of the AI for "all legitimate purposes" and refused to accept these restrictions. Defense Secretary Pete Hegseth gave the company a deadline of Friday evening (February 26, 2026) to comply.
When no agreement was reached:
- President Trump ordered all federal agencies to **immediately halt** use of Anthropic technology (with a 6-month transition period for the Pentagon).
- Hegseth declared Anthropic a "supply chain risk to national security," a sanction normally reserved for foreign threats that also prohibits military contractors from doing business with the company.
Anthropic called the decision "legally invalid and precedent-setting" and announced it would take the matter to court. CEO Dario Amodei emphasized that he would not back down from his position.
Ultimately, the Pentagon signed a new agreement with OpenAI that accepts similar restrictions. The episode marks a major turning point in the question of who should set limits on the military use of AI: companies or governments?
In short: what began as a debate over security concerns escalated into political pressure and sanctions. The conflict between AI ethics and national security continues.
#TrumpordersfederalbanonAnthropicAI