Anthropic sues U.S. government over AI blacklist tied to military use dispute
Artificial intelligence developer Anthropic has filed a lawsuit against multiple U.S. government agencies, accusing the federal government of unlawfully blacklisting its technology after the company refused to allow certain military uses of its AI systems.
Summary
The complaint, filed in the U.S. District Court for the Northern District of California, seeks declaratory and injunctive relief against a broad group of federal entities and officials, including the Departments of War, Treasury, State, and Homeland Security, as well as the Federal Reserve and Securities and Exchange Commission.
Anthropic alleges the government retaliated against the company after it refused to permit its AI model family, known as Claude, to be used for lethal autonomous warfare or mass surveillance of Americans.
According to the complaint, tensions escalated after government officials demanded that Anthropic remove those restrictions and allow the Department of War to make “all lawful use” of the technology. The company said it agreed to expand cooperation but maintained its two key safety limitations.
The dispute culminated in a directive from Donald Trump ordering federal agencies to immediately cease using Anthropic’s technology, followed by a decision from the Department of War to label the firm a “Supply-Chain Risk to National Security.”
The designation barred military contractors and partners from doing business with Anthropic, effectively cutting the company out of the defense supply chain. Several agencies subsequently halted contracts or instructed employees to stop using the company’s AI systems.
Anthropic argues these actions violate the First Amendment, the Administrative Procedure Act, and constitutional due-process protections. The company claims the measures were taken in retaliation for expressing concerns about the safety and reliability of AI systems used in autonomous weapons and mass surveillance.
The complaint states the government’s actions have already led to canceled contracts and could jeopardize hundreds of millions of dollars in near-term business, while also damaging the company’s reputation and commercial relationships.
Anthropic is asking the court to declare the government’s actions unlawful and block enforcement of the directives while the dispute is litigated.