Led by Dario Amodei, Anthropic has long been considered one of the most proactive AI companies collaborating with the U.S. government. Now, it faces a public standoff with the Pentagon over two key usage restrictions.
Amodei emphasized that the company is willing to support 98% to 99% of defense applications but has drawn the line on “domestic mass surveillance” and “fully autonomous weapons.” He stated this is not a rejection of national security but a defense of American democratic values and constitutional principles.
Close cooperation with the defense establishment, but two red lines
In an interview, Amodei highlighted that Anthropic was one of the earliest AI companies to deeply collaborate with U.S. national security agencies. He pointed out that the company was among the first to deploy models in classified cloud environments and to develop customized models for national security purposes, which are now widely used by intelligence agencies and the military, including in cybersecurity and operational support.
In other words, Anthropic is not refusing military use but actively participating.
However, the company has clearly set two non-negotiable restrictions:
First is “domestic mass surveillance.” Amodei worries that AI enables the government to analyze data held by private companies at an unprecedented scale, including location records, political leanings, and personal behavior data. While such actions may not be illegal under current law, the explosive growth of AI capabilities has far outpaced the intent behind existing legislation.
Second is “fully autonomous weapons,” meaning weapon systems that can decide to fire without any human involvement. Amodei pointed out that current AI systems still suffer from unpredictability and reliability problems; if firing decisions were left entirely to machines, the result could be misjudgments, accidental strikes, or civilian casualties.
He emphasized that this is different from the “semi-autonomous weapons” used on the battlefield today; it refers to fully unmanned, autonomous weapon systems.
Three-day ultimatum and “supply chain risk” controversy
According to Amodei, the Pentagon demanded that Anthropic agree to its conditions within just three days, or be classified as a “supply chain risk.” Such designations are typically reserved for foreign companies, such as Russian or Chinese firms, and are rarely applied to domestic U.S. businesses.
More controversially, the communication was mainly through social media posts. Amodei said the company has not received any formal legal documents and only saw public statements from the President and Defense Department officials on X (formerly Twitter).
U.S. President Donald Trump even publicly criticized Anthropic as “selfish,” claiming this move jeopardizes U.S. military and national security.
In response, Amodei said that even if sanctions are imposed, the company is willing to help the Department of Defense transition smoothly to other suppliers, sparing it a 6-to-12-month setback from technical disruption.
What is the real core issue?
Amodei believes the controversy is not about “patriotism” but about technological maturity and accountability.
He pointed out that AI systems remain unpredictable. Even if overall performance is excellent, a 1% error at a critical moment could be catastrophic for military operations: misidentifying enemies or allies, causing civilian casualties, or triggering friendly-fire incidents.
The deeper issue is accountability. If a future network of millions of drones, operated by a handful of commanders or even a single one, makes a mistake, who is responsible? The AI? The engineers? The military officers? The politicians?
These questions have not yet been fully discussed in Congress.
Can a private company overrule the government?
The most pointed question in the interview was: why does a private company have the authority to decide how the military can use its technology?
Amodei’s answer was straightforward: the free market.
He pointed out that the government is free to choose other suppliers. If values do not align, the two sides should part ways amicably rather than punish each other with supply chain risk labels; he believes such tactics create a chilling effect on private companies.
At the same time, he acknowledged that in the long run these boundaries should not be negotiated between private companies and the military; Congress should set clear rules through legislation. He called for new legal frameworks governing AI in surveillance and autonomous weapons, so that the law keeps pace with the technology.
Is this about ideology?
Critics have labeled Anthropic as a “left-leaning woke company.” Amodei denied this, emphasizing that the company has collaborated with the government on issues like energy policy and AI initiatives. He stated that this disagreement is unrelated to political stance but concerns values and risk management.
He stressed, “Disagreeing with the government is the most American thing to do.”
Can Anthropic survive this storm?
On the business front, Amodei appears quite confident. He said the impact of the supply chain risk designation is limited: it affects only military contracts and does not bar other companies from using Anthropic’s technology.
He believes some rhetoric is exaggerated to create fear, uncertainty, and doubt (FUD), but the company’s core operations will remain stable.
This conflict reveals a bigger issue: as AI technology advances exponentially while legislation and regulation lag behind, who gets to set the boundaries?
Anthropic advocates “99% cooperation, 1% caution,” while the Pentagon emphasizes “all legal uses should be open.” The gap reflects not only policy differences but also contrasting visions of future warfare.
This article about the direct clash between Anthropic and the Pentagon first appeared on Chain News ABMedia.