
YouTube steps up efforts to combat AI deepfakes with new removal policy, allowing individuals to request takedowns of unauthorized AI-generated content depicting them.

Key details of the updated policy: YouTube has expanded its privacy request process to address the rise of AI-generated content that mimics individuals without their consent:

  • Affected individuals can now request the removal of AI-generated content that realistically depicts them through YouTube’s privacy request process.
  • To qualify for removal, the content must depict a realistic altered or synthetic version of the individual’s likeness.
  • After a complaint is received, the uploader has two days to remove the likeness by editing the video or taking it down entirely; if they do not, YouTube will review the complaint and decide whether it has merit.

Balancing privacy concerns with creative expression: The decision to remove AI-generated content depends on various factors, ensuring a balance between protecting individual privacy and allowing for legitimate use cases:

  • Videos that disclose their AI origins, qualify as parody or satire, or involve public figures engaged in criminal activity or endorsements may not be subject to removal.
  • Privacy complaints are separate from Community Guidelines strikes, though repeated privacy violations could lead to user bans.

Broader context of platforms grappling with AI content: YouTube’s updated policy is part of ongoing efforts by social media platforms to address the challenges posed by synthetic media and AI-generated content:

  • The rise of AI-generated content, including deepfakes, has led to privacy concerns and potential misuse on platforms like YouTube.
  • In March, YouTube introduced tools for creators to disclose when their content is made with synthetic media, and it is separately piloting a crowdsourced notes feature to flag misleading AI content.

Implications for content creators and public figures: The new policy affects both groups in different ways:

  • Content creators must be more cautious when using AI-generated content depicting individuals and ensure they have the necessary permissions or fall within the allowed use cases.
  • Public figures may have more difficulty getting AI-generated content removed, especially if it involves criminal activity or endorsements, highlighting the challenges in balancing privacy and public interest.

As AI-generated content continues to evolve, platforms like YouTube will need to continuously adapt their policies and tools to strike a balance between protecting individual privacy rights and fostering creative expression in the age of synthetic media.
