A Chinese official treated ChatGPT as a work journal, accidentally exposing secret operations involving cross-border suppression and the defamation of Japanese Prime Minister Sanae Takaichi. An OpenAI report indicates the same individual also used the chatbot to gather U.S. political and economic information and to ask about face-swapping technology, highlighting the threat of information warfare in the AI era.
Can telling an AI everything leak state secrets? A campaign of cross-border suppression by Chinese authorities, including efforts to defame Japanese Prime Minister Sanae Takaichi, was accidentally exposed when a Chinese law enforcement officer used ChatGPT.
According to OpenAI's latest report, the officer used ChatGPT as a journal to record secret suppression operations. His ChatGPT logs show that Chinese operatives once posed as U.S. immigration officials to warn Chinese dissidents living in the United States that their public statements had broken the law.
In another entry, he described attempting to use forged U.S. district court documents to request the removal of dissidents' social media accounts.
The officer also asked ChatGPT to generate a multi-stage plan for stoking online anger over U.S. tariffs on Japanese goods, with the aim of smearing Sanae Takaichi, who was then about to become Japan's prime minister.
According to OpenAI, ChatGPT refused that request. Even so, after Takaichi took office in late October, hashtags attacking her and complaining about U.S. tariffs appeared on a popular forum favored by Japanese content creators.
OpenAI states in the report that, when asked about U.S. entities, ChatGPT provided publicly available information on U.S. federal government offices, the distribution of federal employees across states, and sources such as U.S. economic and financial industry forums and job sites.
Chinese law enforcement personnel subsequently sent English-language emails to U.S. state government officials and business and financial policy analysts, inviting them to take part in paid consultations and provide strategic advice for their clients.
These emails often sought to move conversations to other platforms such as WhatsApp, Zoom, or Teams. One account even uploaded hardware specifications and requested non-technical, step-by-step instructions for installing the face-swapping software FaceFusion.
OpenAI's report arrives at a critical moment in the U.S.-China race for AI dominance. How this technology is used on the battlefield and in the boardrooms of the world's two largest economies will be key to what comes next.
The U.S. Department of Defense, meanwhile, is at an impasse with Anthropic, the developer of Claude, over the use of its AI models.
On Friday, Defense Secretary Pete Hegseth issued an ultimatum to Anthropic CEO Dario Amodei: remove the safety restrictions on the company's AI models or risk losing lucrative Pentagon contracts.
Former Pentagon official Michael Horowitz, who focuses on emerging technologies, told CNN that the OpenAI report clearly shows China is actively using AI tools to strengthen cyber information warfare.
According to CyberScoop, OpenAI said during a media Q&A that it has not yet identified cases of threat actors using ChatGPT for automated cyberattacks, but added that several investigations remain open.
In some cases, it is clear that ChatGPT was only one of several AI tools the threat actors employed.
In the Chinese law enforcement case, for example, information-warfare reports uploaded to the model mentioned Chinese AI models such as DeepSeek, suggesting that another model was likely used to prepare the defamation operation against Sanae Takaichi.
Further reading:
Chinese hackers launch large-scale AI cyberattacks! Anthropic: AI hackers’ speed and scale have surpassed human hackers