Marketers grapple with AI video safety as synthetic content explodes
Marketers are grappling with a new brand safety challenge as AI-generated videos flood social media platforms, with cartoon characters like SpongeBob becoming unlikely symbols of the content's proliferation. The rise of AI video creation tools, particularly following the launch of OpenAI's Sora app, is forcing advertisers to confront difficult decisions about where their ads appear alongside this rapidly expanding category of synthetic content.

The big picture: AI-generated video content exists on a spectrum from harmless entertainment to potentially problematic material, making it impossible for marketers to apply blanket policies across all synthetic content.

  • Unlike typical “AI slop,” some AI-generated videos attract significant viewership and engagement, forcing brands to evaluate each category on its merits rather than avoiding AI content wholesale.
  • The speed at which new AI video trends emerge means brand safety policies must be continuously updated, with content that seems safe today potentially becoming problematic tomorrow.

What marketers are doing: Companies are turning to specialized measurement and verification firms to help navigate this new landscape while building internal frameworks for decision-making.

  • Zefr, a digital video ad company, has developed tools to continuously track AI-generated material in ad campaigns, similar to traditional brand safety systems that flag risky content.
  • Channel Factory, a digital video optimization company, is helping marketers identify AI-generated videos through frame-level analysis, audio cues, and metadata patterns, then layering that classification into broader content analysis.
  • Most brands are currently in “watch-and-wait mode,” building policies and frameworks that will likely be implemented starting in Q1 2025.

Why this matters: This represents the evolution of brand safety concerns into the generative AI era, where the traditional risk-management approach must adapt to synthetic content’s unique challenges.

  • “This is going to end up being the next big brand safety problem,” said Andrew Serby, chief commercial officer at Zefr.
  • The issue will eventually expand beyond safety to brand suitability, testing how much “algorithmic weirdness” brands are willing to associate with.

The platform response: YouTube is taking a more restrictive approach compared to OpenAI, implementing policies to protect creators while managing ecosystem quality.

  • YouTube recently cracked down on low-effort AI-generated content and launched a likeness-detection system for creators in its partner program.
  • The platform’s approach contrasts with OpenAI’s Sora, which initially allowed users to generate videos of real people without consent, leading to problematic content featuring figures like Martin Luther King Jr.

Key challenges ahead: Inconsistent disclosure rules and rapidly improving AI video quality are making it increasingly difficult for marketers to identify and categorize synthetic content.

  • “Most of them are really just trying to understand what the future will look like with AI-generated video,” said Lindsey Gamble, creator economy expert and advisor.
  • Without clear disclosure standards, creators don’t always label AI-generated content, adding a layer of opacity for advertisers trying to make informed placement decisions.

What they’re saying: Industry experts emphasize the need for human judgment alongside technological solutions.

  • “You can’t automate good judgment. You can use data to inform, but it’s essential to understand what content is good and is worthy of your brand association,” said Salazar Llewellyn, editorial director at ad agency DEPT.
  • “Whatever signals that content is emanating, from believability to virality to contention, it should be independent of the fact that it was generated by an AI,” noted Anudit Vikram, chief product officer at Channel Factory.

By the numbers: Recent industry data highlights the broader context of this challenge.

  • 36% of marketers say user-generated content is extremely important to their social media strategy, compared to just 2% who feel the same about AI content.
  • 61% of global TikTok users have made purchases via TikTok Shop, showing the commercial importance of content placement decisions.
Source: Future of Marketing Briefing: Marketers confront a new kind of brand safety problem in AI video
