
A bipartisan bill aims to combat AI deepfakes by amending Section 230 protections for tech platforms that fail to address the issue, potentially signaling a new approach to regulating online harms.

Key provisions of the Intimate Privacy Protection Act: The proposed legislation, introduced by Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA), targets cyberstalking, intimate privacy violations, and digital forgeries:

  • The bill would amend Section 230 of the Communications Act of 1934, removing legal immunity for platforms that fail to combat these harms.
  • It establishes a “duty of care” for platforms, requiring them to have a reasonable process for addressing the specified issues, including measures to prevent privacy violations, a clear reporting mechanism, and a 24-hour removal process.

Bipartisan concern over AI deepfakes: Both Auchincloss and Hinson emphasized that tech platforms should not be able to use Section 230 as a shield against responsibility for the spread of malicious deepfakes and digital forgeries:

  • Auchincloss stated, “Congress must prevent these corporations from evading responsibility over the sickening spread of malicious deepfakes and digital forgeries on their platforms.”
  • Hinson added, “Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations.”

Growing momentum to combat intimate AI deepfakes: Lawmakers and companies appear motivated to address the issue of sexually explicit AI deepfakes.

Potential shift in Section 230 reform efforts: The Intimate Privacy Protection Act’s inclusion of a “duty of care” mirrors the approach used in the Kids Online Safety Act, which is expected to pass the Senate with overwhelming support:

  • This suggests that establishing a duty of care for platforms is emerging as a favored way to create new protections online.
  • Historically, Republicans and Democrats have struggled to agree on how Section 230 should be changed; FOSTA-SESTA, which removed Section 230 immunity for content that facilitates sex trafficking, is a notable exception.

Broader implications for AI regulation and online harms: The bipartisan support for the Intimate Privacy Protection Act highlights the growing concern over the potential misuse of AI-generated content and the need for platforms to take responsibility in addressing these issues:

  • The bill’s focus on amending Section 230 protections could set a precedent for future legislation targeting online harms, potentially shifting the balance of liability between platforms and users.
  • As AI technologies continue to advance, policymakers will need to grapple with the challenges of regulating their use and mitigating potential harms, while also fostering innovation and protecting free speech.
  • The success of this bill and similar efforts may depend on striking the right balance between holding platforms accountable and preserving the benefits of Section 230 protections for the broader internet ecosystem.
