
Fake AI-Generated War Images and Videos Spread Rapidly on Social Media Despite Platform Policies

Mar 15, Kathmandu - In recent weeks, social media platforms, especially X, have seen a surge in fake war-related images and videos created using artificial intelligence (AI). Despite the platform's strict policies aimed at curbing misinformation, misleading war-related content continues to spread unabated.

Researchers report that since the onset of the Middle East conflict, an influx of AI-generated visual content has flooded social media. The volume of such synthetic materials surpasses what was seen in previous conflicts, making it increasingly difficult for users to distinguish between real and fabricated content.

To help users distinguish authentic content, X announced new policies last week. Under these rules, posting AI-created war images or videos without clearly labeling them as synthetic can lead to a 90-day suspension from the platform's revenue-sharing program. Repeated violations could result in permanent bans.

However, experts warn that these measures are insufficient to fully stop misinformation. According to Joe Bodnar, a researcher at a strategic communication institute, the platform remains saturated with AI-generated war images and videos. Even some premium accounts have distributed misleading material, including a fabricated video depicting Iran allegedly launching a nuclear attack on Israel, which garnered far more views than the official policy announcement.

AFP’s global fact-checking network has identified numerous fake AI-generated war materials circulating from Brazil to India. Some videos falsely depict explosions inside embassies, American soldiers, Iranian flags, and destroyed naval ships, all artificially generated. The blending of real and fake visuals complicates the work of fact-checkers trying to verify the authenticity of such content.

Additionally, the platform's own AI moderation systems have occasionally delivered misleading information to users. Its revenue model, which rewards engaging content financially, may inadvertently incentivize the spread of sensational and false material. For example, a premium account posted a fabricated AI-generated video of Dubai’s Burj Khalifa that was viewed nearly 2 million times despite lacking proper disclosure of its synthetic origin.

Earlier reports indicated that the platform had profited from promotional content linked to the Iranian government, leading to the removal of some verification badges from related accounts. Experts emphasize that while the platform’s announced policies are a step in the right direction, their effectiveness depends on proper implementation. Detecting AI-generated digital content remains challenging, and community-based fact-checking efforts are only somewhat effective.

A study from last year revealed that over 90% of community fact-checking notes submitted on social media are never published, highlighting the difficulty of moderating and controlling the spread of false information, especially during sensitive conflicts. Experts stress the urgent need for more robust measures to prevent the proliferation of AI-created fake war content online.