Social media platform X, formerly known as Twitter, has announced a significant policy shift aimed at curbing the spread of synthetic media during geopolitical crises by stripping monetization rights from creators who post undisclosed AI-generated videos of armed conflict. The move represents one of the platform’s most direct attempts to decouple financial incentives from the production of "engagement-bait" misinformation, which has frequently surged during periods of international instability. According to an announcement from Nikita Bier, X’s head of product, the company will now suspend creators from its Creator Revenue Sharing program for a minimum of 90 days if they are found to have shared AI-generated depictions of war without a clear and prominent disclosure. Repeat offenders will face a permanent ban from the monetization program, signaling a zero-tolerance approach to what the company describes as the "manipulation" of its revenue systems.
The decision arrives at a critical juncture for the platform, which has faced mounting criticism from international regulators and misinformation experts regarding its content moderation efficacy. By targeting the financial motivations of content creators, X is attempting to address the root cause of "engagement farming"—a practice where users post sensational, often fabricated, content to drive views and, consequently, ad revenue. Bier emphasized that during times of war, the demand for authentic, "on-the-ground" information is paramount, and the ease with which modern generative AI can produce convincing yet fraudulent footage poses a direct threat to public safety and global understanding of sensitive events.
The Mechanics of the New Enforcement Policy
The updated Creator Revenue Sharing policies are designed to act as both a deterrent and a corrective measure. Under the new guidelines, any creator who publishes a video that was generated or significantly altered by artificial intelligence, and that depicts military engagements, missile strikes, or civilian casualties, must include a visible disclosure label. Failure to do so triggers an immediate investigation and potential suspension. The 90-day suspension period is intended to serve as a cooling-off phase, removing the profit motive for accounts that prioritize viral growth over factual integrity.
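To make the escalation ladder concrete, here is a minimal sketch of how the announced penalties could be modeled in code. Only the 90-day suspension and the permanent ban for repeat offenders come from X’s announcement; the `CreatorRecord` structure, `apply_violation` function, and everything else below are hypothetical illustrations, not X’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch: X has not published its enforcement code. Only the
# 90-day suspension and the permanent ban for repeat offenders come from
# the announcement; the rest is an illustrative modeling choice.

SUSPENSION_DAYS = 90

@dataclass
class CreatorRecord:
    creator_id: str
    violations: int = 0
    suspended_until: Optional[datetime] = None
    permanently_banned: bool = False

def apply_violation(record: CreatorRecord, now: datetime) -> CreatorRecord:
    """Record one confirmed undisclosed-AI war-footage violation."""
    record.violations += 1
    if record.violations == 1:
        # First offense: 90-day suspension from Creator Revenue Sharing.
        record.suspended_until = now + timedelta(days=SUSPENSION_DAYS)
    else:
        # Repeat offense: permanent removal from the monetization program.
        record.permanently_banned = True
    return record
```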
Enforcement of this policy will rely on a multi-layered detection strategy. X will utilize internal metadata analysis to identify files created by known generative AI tools. Additionally, the platform will lean heavily on "Community Notes," its crowdsourced fact-checking feature. If a post is flagged by the community and a consensus is reached that the media is AI-generated, that signal will be fed into the enforcement engine. Bier noted that the company is refining its technical signals to distinguish between stylistic edits and deceptive synthetic media, ensuring that legitimate creative expression is not inadvertently penalized while focusing strictly on content that could mislead the public regarding active conflicts.
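As a rough illustration of what such a multi-signal pipeline might look like, the sketch below combines metadata detection and Community Notes consensus into a single routing decision. The signal names and gating logic are assumptions modeled on the behavior described in this article; X has not disclosed how it actually weighs these inputs.

```python
from dataclasses import dataclass

# Illustrative only: the signals and gating logic below are assumptions
# based on the behavior described in this article, not X's actual system.

@dataclass
class PostSignals:
    # File metadata (e.g., provenance tags) matches a known generative AI tool.
    metadata_flags_ai: bool
    # A Community Note asserting the media is AI-generated reached consensus.
    community_notes_consensus: bool
    # The creator attached the required AI-disclosure label.
    has_disclosure_label: bool
    # The clip depicts military engagements, strikes, or casualties.
    depicts_active_conflict: bool

def should_escalate_for_review(s: PostSignals) -> bool:
    """Decide whether a post enters human-assisted enforcement review."""
    # Disclosed or out-of-scope content is never penalized under this policy.
    if s.has_disclosure_label or not s.depicts_active_conflict:
        return False
    # Either hard signal routes the post to review; any suspension still
    # follows the multi-signal, human-assisted verification described above.
    return s.metadata_flags_ai or s.community_notes_consensus
```

Keeping a human review step between the signal and the suspension is what would distinguish this design from fully automated demonetization, which would be far more exposed to the coordinated-flagging concerns raised later in this article.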
Case Studies in Synthetic Deception: From Dubai to Kyiv
The necessity of this policy was underscored by a series of high-profile incidents involving viral AI footage. Recently, a hyper-realistic video depicting a missile strike on the Burj Khalifa in Dubai circulated widely on X, garnering over 8 million views in a matter of hours. Despite being entirely fabricated, the clip caused significant alarm and was reshared across multiple platforms, including Instagram, where variations of the footage reached tens of thousands of additional users. The incident occurred against a backdrop of heightened tensions in the Middle East following a series of retaliatory strikes involving the United States, Israel, and Iran, demonstrating how easily synthetic media can exacerbate an already volatile information environment.
This is not the first time conflict-related deepfakes have threatened to disrupt the geopolitical narrative. In the early stages of the Russian invasion of Ukraine in 2022, a deepfake video of Ukrainian President Volodymyr Zelensky appeared online, in which he seemingly urged his troops to lay down their arms and surrender. Although the video was of relatively low quality compared to today’s generative standards, it still required immediate intervention from Ukrainian state officials and international intelligence agencies to debunk. As generative AI has moved from specialized labs into consumer tools that make fabrication trivial, the risk of high-fidelity deception has grown rapidly, producing the crisis of authenticity that X is now attempting to manage.
Chronology of AI Misinformation and Platform Response
The evolution of synthetic media on social platforms has followed a rapid and concerning trajectory since early 2022:
- February 2022: The Zelensky surrender deepfake marks the first major use of AI video as a psychological operation in a high-intensity modern war.
- Early 2023: The release of advanced generative models such as Midjourney v5 makes the creation of photorealistic "war" imagery accessible to the general public.
- October 2023: Following the outbreak of the Israel-Hamas conflict, X and other platforms are flooded with AI-generated images of "crying children" and "devastated neighborhoods," many of them used to solicit fraudulent donations.
- April 2024: Heightened tensions between Iran and Israel lead to a surge in AI-generated "missile launch" videos, many of which use footage from video games or older conflicts, enhanced by AI to appear current.
- Late 2024: X formalizes the link between "misinformation" and "demonetization," shifting away from pure content removal toward economic penalties.
Supporting Data and the Rise of the "Liar’s Dividend"
Data from cybersecurity firms and misinformation watchdogs suggest that the volume of deepfake content online is doubling every six months. A report by DeepMedia recently estimated that the number of deepfakes detected across social media platforms in 2023 increased by nearly 900% compared to the previous year. This proliferation has led to a phenomenon known as the "Liar’s Dividend," where the mere existence of convincing AI allows bad actors to dismiss genuine footage of war crimes or military movements as "fake" or "AI-generated," further muddying the waters of international accountability.
The United Nations has issued several warnings regarding this trend. In a series of briefs on information integrity, the UN noted that deepfakes in conflict zones do more than just spread lies; they incite hate speech and can provoke real-world violence by creating "proof" of atrocities that never occurred. By tying enforcement to the Creator Revenue Sharing program, X is specifically targeting the top 1% of high-engagement accounts that drive the majority of the platform’s traffic, acknowledging that these "super-spreaders" are often motivated more by the platform’s payout structure than by ideological goals.
Reactions from Industry Experts and Regulatory Bodies
The move has drawn close attention, and mixed reviews, from the tech community and regulators. Proponents of the policy argue that financial sanctions are the only effective way to police a platform of X’s scale. "Content moderation is an arms race that the platforms are losing," said one digital forensics analyst. "By removing the profit, you change the cost-benefit analysis for the people who run these ‘outrage factories.’ It’s a pragmatic move."
However, some critics point to the reliance on Community Notes as a potential weakness. While the crowdsourced system has been praised for its accuracy in many cases, it is not immune to coordinated manipulation by state-sponsored actors or partisan groups. There are concerns that rival factions could "weaponize" the reporting system to get legitimate creators demonetized during sensitive political events. X has countered these concerns by stating that the revenue suspension will only occur after a multi-signal verification process that includes both automated and human-assisted reviews.
From a regulatory perspective, the European Union has been watching X closely under the Digital Services Act (DSA). The DSA mandates that "Very Large Online Platforms" (VLOPs) take proactive steps to mitigate systemic risks, including the spread of disinformation. X’s new policy could be seen as an effort to align with these international standards and avoid the massive fines—up to 6% of global turnover—that the EU can impose for non-compliance.
Broader Implications for the Future of Digital Truth
X’s decision to penalize undisclosed AI content marks a pivot in the philosophy of social media management. For years, the debate centered on whether platforms should be "arbiters of truth" or "neutral town squares." By focusing on the disclosure of the tool used rather than the intent of the creator, X is attempting to find a middle ground: allowing synthetic creativity to exist while mandating a "nutrition label" for digital media.
The long-term implications of this policy will likely influence how other platforms, such as Meta’s Threads or ByteDance’s TikTok, handle the intersection of AI and monetization. If X successfully reduces the volume of fake war footage by hitting creators’ bank accounts, it could provide a blueprint for a more sustainable model of digital trust. Conversely, if the enforcement proves inconsistent, it may further alienate users who are already struggling to distinguish between the fog of war and the fog of the algorithm.
As generative AI continues to evolve, the distinction between "captured" and "created" content will only become more blurred. X’s head of product, Nikita Bier, concluded his announcement by stating that the platform will "continue to refine our policies and product to ensure X can be trusted during these critical moments." For a platform that has often been at the center of controversy regarding its commitment to safety, this policy represents a high-stakes bet that transparency and economic consequences can preserve the integrity of the global timeline.