IAB develops AI transparency guidelines as platforms show inconsistent standards

The Interactive Advertising Bureau (IAB), a digital marketing trade organization, is developing comprehensive AI transparency and disclosure guidelines for advertising, aiming to establish industry-wide standards for when and how AI use should be disclosed to consumers. The initiative comes as marketers grapple with reputational risks from AI-generated content errors and the launch of OpenAI’s Sora app, which can create photorealistic videos from text prompts.

What you should know: The IAB’s approach focuses on meaningful disclosure rather than labeling everything touched by AI.

  • Caroline Giegerich, the IAB’s vice president of AI, emphasized that over-labeling risks desensitizing consumers and could “dilute trust instead of strengthening it.”
  • The guidelines aim to flag AI use only when it could genuinely mislead audiences, balancing transparency with practical implementation.
  • A working group of brands, agencies and platforms is establishing shared baseline principles for disclosure across the advertising ecosystem.

Why this matters: Current AI labeling policies across major platforms like Meta, TikTok and YouTube are inconsistent, creating potential brand safety issues.

  • Detection is unreliable unless AI metadata is embedded directly in the creative content itself.
  • The IAB wants to create “portable and consistent” transparency standards before regulators impose their own requirements.
  • As Giegerich noted: “If we don’t align now we’ll end up with 20 different versions of transparency, and none of them will mean anything.”

Industry response: Multiple organizations are developing complementary AI standards and guidance.

  • The Media Rating Council (MRC), which sets measurement standards for digital advertising, is creating a comprehensive standalone AI standard, targeting market release within the next year.
  • The World Federation of Advertisers (WFA), a global association of national advertiser associations, operates an AI community of over 900 senior marketers and a steering team of 10 advanced brands including L’Oreal, Unilever, and Diageo.
  • WFA’s Gabrielle Robitaille explained their approach as “less about setting standards and more about surfacing” current best practices.

Key details: The MRC’s upcoming AI standard will address six critical areas.

  • Updates for measurement with zero-click search and agentic AI (AI systems that can act independently to complete tasks).
  • Invalid traffic considerations related to AI user agents and human pattern simulation.
  • Brand safety requirements for AI-generated content.
  • Transparency requirements for AI buying agents in auction systems.
  • Content provenance measurement and labeling (tracking the origin and authenticity of digital content).
  • Training, monitoring and disclosure requirements for AI detection systems.

The agency perspective: Media agencies are already steering ad dollars away from AI-generated “slop” content.

  • “Our guidance to advertisers has been, ‘we’ll keep an eye on it and create [exclusion lists] where we can,’” said David Dweck, general manager at Go Fish Digital, a media agency.
  • Karen Ram from Canvas, a creative agency, emphasized that “the whole thesis that people trust people, goes away when you bring AI into it.”
  • Some agencies are building or buying creator vetting systems, with New Engen partnering with DoDilly for brand safety monitoring.

What they’re saying: Industry leaders acknowledge the complexity of AI content oversight.

  • Scott Sutton, CEO of Later, a social media management platform, sees faceless AI creators as viable for “bottom-of-funnel campaigns where the goal is sales and product discovery more than it is brand awareness.”
  • Dweck compared the current situation to programmatic advertising’s early challenges, noting the reliance on human judgment to identify when AI creators have “crossed the line.”
