
The Take It Down Act marks a pivotal federal response to the proliferation of AI-generated explicit imagery, creating the first nationwide protections against non-consensual deepfakes. The bipartisan legislation, which establishes clear criminal penalties and platform responsibilities, follows high-profile cases in which victims ranging from celebrities to high school students had their faces superimposed onto nude bodies. This rare moment of congressional unity illustrates how certain AI harms can transcend political divisions, particularly when they target vulnerable individuals.

The big picture: President Trump is set to sign the Take It Down Act on Monday, establishing federal protections against non-consensual explicit images regardless of whether they’re authentic or AI-generated.

  • The law makes sharing such images illegal and requires platforms to remove them within 48 hours of notification.
  • This legislation represents one of the first federal laws specifically addressing AI-generated content harms as the technology rapidly advances.

Why this matters: Prior to this law, protections for adult victims of explicit deepfakes varied widely by state, creating an inconsistent patchwork of accountability.

  • Federal law previously only prohibited AI-generated explicit images of children, leaving adult victims with limited recourse.
  • The new legislation provides law enforcement with clear guidance on how to prosecute these violations.

Notable support: The Take It Down Act passed nearly unanimously through Congress with only two House representatives dissenting, demonstrating rare bipartisan consensus.

  • More than 100 organizations endorsed the legislation, including nonprofits and major tech companies like Meta, TikTok, and Google.
  • The bill was first introduced last summer by Republican Senator Ted Cruz and Democratic Senator Amy Klobuchar.

The catalyst: The legislation gained momentum after Texas high schooler Elliston Berry became a victim when a classmate used AI to create and share a fake nude image of her on Snapchat.

  • “Every day I’ve had to live with the fear of these photos getting brought up or resurfacing,” Berry told CNN last year.
  • Berry expressed relief that the legislation would ensure perpetrators face consequences.

Platform policies: Some technology companies had already implemented measures to address this issue before the federal mandate.

  • Google, Meta, and Snapchat have existing systems where users can request removal of explicit images.
  • Apple and Google have worked to remove from their app stores and search results AI services that turn clothed photos into fake nude images.

What they’re saying: Advocates see the legislation as a clear-cut case for regulating harmful AI applications.

  • “AI is new to a lot of us and so I think we’re still figuring out what is helpful to society, what is harmful to society, but (non-consensual) intimate deepfakes are such a clear harm with no benefit,” said Ilana Beller of Public Citizen.
  • Imran Ahmed, CEO of the Center for Countering Digital Hate, stated the law “finally compels social media bros to do their jobs and protect women from highly intimate and invasive breaches of their rights.”
