BlockBeats News, March 2 — OpenAI CEO Sam Altman responded to community questions on X yesterday regarding his company's contract with the U.S. Department of Defense. The original post received over 6.6 million views and more than 7,500 replies. Asked why the deal was signed so hastily, Altman explained that OpenAI had been negotiating only non-classified collaborations with the Department of Defense for several months and had previously declined contracts in classified areas (work that Anthropic later took over). After Anthropic was banned, however, the Department of Defense suddenly accelerated its classified deployments. Altman said OpenAI signed quickly to "de-escalate the situation," and that it has negotiated to ensure similar terms will be available to all other AI labs.
When asked why OpenAI did not speak out for Anthropic, Altman said the designation of Anthropic as a "supply chain risk" is "very bad for the industry, the country, and Anthropic." He added, "This is a very bad decision by the U.S. Department of Defense, and I hope they withdraw it." He also said that Anthropic "seems more concerned with specific contractual prohibitions than with deferring to existing law, possibly wanting more operational control than we do."
Regarding OpenAI's red lines, Altman stated, "If asked to do something unconstitutional or illegal, we will withdraw. Come visit me in prison." On overseas surveillance, Altman admitted he "dislikes" U.S. military surveillance of foreigners, emphasizing that his top AI principle is "democratization." Surveillance may run counter to that principle, but "I don't think it's my place to decide."
In his closing remarks, Altman raised a question he said was "hidden behind many inquiries" but never directly asked: what if the U.S. government attempts to nationalize OpenAI or other AI projects? He said he has "long believed that building AGI should perhaps be a government project."