X has introduced a strict new policy targeting the spread of artificial intelligence-generated war misinformation by hitting creators where it hurts: their wallets.
Social media platform X has announced a major revision to its creator revenue-sharing policy, targeting artificial intelligence-generated videos of armed conflicts. The move comes amid the escalating Middle East war and growing concern about wartime misinformation.
Nikita Bier, X’s head of product, said creators who post AI-generated war footage without disclosure will face a 90-day suspension from the revenue-sharing program. Repeat violations will result in a permanent ban. Enforcement will rely on Community Notes, the platform’s crowdsourced fact-checking system, as well as metadata and signals embedded in generative AI tools.
Bier emphasized that during times of war, it is critical that people have access to authentic information on the ground, warning that modern AI makes it trivial to create content that can mislead people.
The announcement comes as misinformation about the U.S. and Israeli war with Iran, a conflict that threatens to widen across the region, spreads rapidly online. Social media platforms are under increasing pressure to prevent synthetic media from distorting public perception during crises.
Among the content that went viral is a high-production AI video simulating the destruction of the USS Abraham Lincoln aircraft carrier; the cinematic-quality clip shows hypersonic missiles overwhelming U.S. Aegis defenses, and both the claims and the video were dismissed by U.S. Central Command. Another clip purported to show the CIA's regional headquarters being leveled; fact-checkers reportedly traced it back to a 2015 residential fire in Sharjah, United Arab Emirates, that was digitally enhanced with AI to resemble a military strike.
Other platforms, such as YouTube and TikTok, have introduced disclosure requirements for AI-generated content, but X's approach is stricter because it ties compliance directly to monetization. X has leaned heavily on Community Notes as a decentralized fact-checking tool, and the new policy integrates that system into enforcement, effectively crowdsourcing the detection of undisclosed AI war content.