Bipartisan Bill Aims to Carve Out AI Deepfakes from Section 230

A bipartisan bill aims to combat AI deepfakes by amending Section 230 protections for tech platforms that fail to address the issue, potentially signaling a new approach to regulating online harms.

Key provisions of the Intimate Privacy Protection Act: The proposed legislation, introduced by Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA), targets cyberstalking, intimate privacy violations, and digital forgeries:

  • The bill would amend Section 230 of the Communications Act of 1934, removing legal immunity for platforms that fail to combat these harms.
  • It establishes a “duty of care” for platforms, requiring them to have a reasonable process for addressing the specified issues, including measures to prevent privacy violations, a clear reporting mechanism, and a 24-hour removal process.

Bipartisan concern over AI deepfakes: Both Auchincloss and Hinson emphasized that tech platforms should not be able to use Section 230 as a shield against responsibility for the spread of malicious deepfakes and digital forgeries on their platforms:

  • Auchincloss stated, “Congress must prevent these corporations from evading responsibility over the sickening spread of malicious deepfakes and digital forgeries on their platforms.”
  • Hinson added, “Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations.”

Growing momentum to combat intimate AI deepfakes: Lawmakers and companies alike appear increasingly motivated to address sexually explicit AI deepfakes.

Potential shift in Section 230 reform efforts: The Intimate Privacy Protection Act’s inclusion of a “duty of care” mirrors the approach used in the Kids Online Safety Act, which is expected to pass the Senate with overwhelming support:

  • This suggests that establishing a duty of care for platforms is becoming a popular mechanism for creating new protections on the internet.
  • Historically, Republicans and Democrats have struggled to agree on how Section 230 should be changed, with FOSTA-SESTA (carving out sex trafficking charges from Section 230 protection) being a notable exception.

Broader implications for AI regulation and online harms: The bipartisan support for the Intimate Privacy Protection Act highlights the growing concern over the potential misuse of AI-generated content and the need for platforms to take responsibility in addressing these issues:

  • The bill’s focus on amending Section 230 protections could set a precedent for future legislation targeting online harms, potentially shifting the balance of liability between platforms and users.
  • As AI technologies continue to advance, policymakers will need to grapple with the challenges of regulating their use and mitigating potential harms, while also fostering innovation and protecting free speech.
  • The success of this bill and similar efforts may depend on striking the right balance between holding platforms accountable and preserving the benefits of Section 230 protections for the broader internet ecosystem.
