How YouTube is giving creators more control over their likenesses

The growing reach of AI in the creator economy has created new challenges for content creators and celebrities, who must now contend with unauthorized AI-generated replicas of their likenesses spreading across social media platforms.

Latest developments: YouTube is forming a strategic partnership with Creative Artists Agency (CAA) to develop and test tools for identifying AI-generated content that uses celebrities’ and creators’ likenesses without permission.

  • The initial testing phase will begin in early 2025, focusing on celebrities and athletes before expanding to top YouTube creators and other creative professionals
  • The technology will help identify AI-generated content that mimics faces, voices, and other personal characteristics
  • Content creators will be able to submit removal requests for unauthorized AI-generated content featuring their likeness (a rough sketch of what such a request might contain follows this list)
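
Neither YouTube nor CAA has described what these removal requests will look like. As a rough illustration only, the Python sketch below models the kind of structured claim a likeness-takedown workflow typically collects; every field name, status value, and helper here is a hypothetical assumption, not YouTube's actual API.

```python
# Hypothetical sketch only: YouTube has not published the shape of these
# removal requests, so every field and status value here is an assumption.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LikenessRemovalRequest:
    """A structured claim that a video contains an unauthorized AI-generated
    replica of the claimant's face or voice."""
    claimant_id: str            # verified identity of the creator or celebrity
    video_id: str               # the allegedly infringing upload
    likeness_type: str          # e.g. "face", "voice", "singing_voice"
    description: str            # where and how the likeness appears
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    status: str = "pending_review"  # -> "removed" or "rejected" after review


def submit_request(queue: list[LikenessRemovalRequest],
                   request: LikenessRemovalRequest) -> None:
    """Append the claim to a review queue; a real system would also verify
    the claimant's identity and notify the uploader."""
    queue.append(request)


if __name__ == "__main__":
    review_queue: list[LikenessRemovalRequest] = []
    submit_request(review_queue, LikenessRemovalRequest(
        claimant_id="creator_123",
        video_id="abc123",
        likeness_type="singing_voice",
        description="AI-generated cover imitating my singing voice",
    ))
    print(review_queue[0].status)  # pending_review
```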

Technical capabilities: The platform is developing sophisticated detection systems to identify and manage various forms of AI-generated content across its ecosystem.

  • YouTube is working on “synthetic-singing identification technology” specifically designed to detect AI-generated replicas of creators’ singing voices
  • The system will integrate with CAA’s existing CAAVault technology, which maintains a database of clients’ digital likenesses including faces, bodies, and voices
  • These tools will allow for management of AI-generated content “at scale,” suggesting automated detection and response capabilities (a hypothetical sketch of that kind of matching follows this list)
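
None of the companies involved have published implementation details, but likeness detection “at scale” is commonly built on embedding comparison: face or voice embeddings extracted from uploads are scored against a reference database of authorized likenesses. The Python sketch below is a minimal illustration of that general idea under assumed names and thresholds; it is not a description of YouTube's detection systems or of CAAVault's internals.

```python
# Hypothetical sketch: neither YouTube nor CAA has published implementation
# details, so the names and thresholds below are illustrative assumptions only.
from dataclasses import dataclass

import numpy as np


@dataclass
class LikenessRecord:
    """A reference entry, loosely analogous to what a vault of digital
    likenesses (faces, voices) might store as an embedding vector."""
    person_id: str
    modality: str          # e.g. "face" or "singing_voice"
    embedding: np.ndarray  # produced by some upstream face/voice encoder


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_possible_replicas(
    upload_embeddings: list[tuple[str, np.ndarray]],  # (modality, embedding) from an upload
    vault: list[LikenessRecord],
    threshold: float = 0.85,  # assumed cutoff; a real system would tune this
) -> list[dict]:
    """Compare embeddings extracted from an upload against the reference vault
    and return candidate matches for human review or a removal request."""
    candidates = []
    for modality, emb in upload_embeddings:
        for record in vault:
            if record.modality != modality:
                continue
            score = cosine_similarity(emb, record.embedding)
            if score >= threshold:
                candidates.append(
                    {"person_id": record.person_id,
                     "modality": modality,
                     "similarity": round(score, 3)}
                )
    return candidates


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=128)
    vault = [LikenessRecord("creator_123", "singing_voice", reference)]
    # Simulate an upload whose voice embedding closely resembles the reference.
    upload = [("singing_voice", reference + rng.normal(scale=0.05, size=128))]
    print(flag_possible_replicas(upload, vault))
```

In practice the hard problems sit upstream of this loop: producing embeddings robust to edits and compression, distinguishing authorized uses from impersonation, and keeping review queues manageable at YouTube's upload volume.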

Existing measures: YouTube has already implemented several policies and tools to address AI-generated content on its platform.

  • Music labels currently have the ability to request removal of AI content that simulates artists’ voices
  • The platform requires creators to disclose when their content contains AI-generated elements
  • These existing policies provide a foundation for the expanded protection system being developed

Future implications: The development of AI detection tools marks a significant step in protecting intellectual property rights in the digital age, though questions remain about implementation and effectiveness.

  • The success of these tools could set a precedent for how other platforms handle AI-generated content
  • The phased rollout approach will allow YouTube to refine the system before broader implementation
  • The growing sophistication of AI technology may lead to an ongoing arms race between detection tools and increasingly realistic AI-generated content