Bipartisan Bill Aims to Carve Out AI Deepfakes from Section 230

A bipartisan bill aims to combat AI deepfakes by amending Section 230 protections for tech platforms that fail to address the issue, potentially signaling a new approach to regulating online harms.

Key provisions of the Intimate Privacy Protection Act: The proposed legislation, introduced by Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA), targets cyberstalking, intimate privacy violations, and digital forgeries:

  • The bill would amend Section 230 of the Communications Act of 1934, removing legal immunity for platforms that fail to combat these harms.
  • It establishes a “duty of care” for platforms, requiring them to have a reasonable process for addressing the specified issues, including measures to prevent privacy violations, a clear reporting mechanism, and a 24-hour removal process.

Bipartisan concern over AI deepfakes: Both Auchincloss and Hinson emphasized that tech platforms should not be able to use Section 230 as a shield against responsibility for the spread of malicious deepfakes and digital forgeries on their platforms:

  • Auchincloss stated, “Congress must prevent these corporations from evading responsibility over the sickening spread of malicious deepfakes and digital forgeries on their platforms.”
  • Hinson added, “Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations.”

Growing momentum to combat intimate AI deepfakes: Lawmakers and companies alike appear increasingly motivated to address the spread of sexually explicit AI deepfakes.

Potential shift in Section 230 reform efforts: The Intimate Privacy Protection Act’s inclusion of a “duty of care” mirrors the approach used in the Kids Online Safety Act, which is expected to pass the Senate with overwhelming support:

  • This suggests that establishing a duty of care for platforms is emerging as a favored mechanism for creating new protections online.
  • Historically, Republicans and Democrats have struggled to agree on how Section 230 should be changed; FOSTA-SESTA, which carved out sex trafficking charges from Section 230 protection, is a notable exception.

Broader implications for AI regulation and online harms: The bipartisan support for the Intimate Privacy Protection Act highlights the growing concern over the potential misuse of AI-generated content and the need for platforms to take responsibility in addressing these issues:

  • The bill’s focus on amending Section 230 protections could set a precedent for future legislation targeting online harms, potentially shifting the balance of liability between platforms and users.
  • As AI technologies continue to advance, policymakers will need to grapple with the challenges of regulating their use and mitigating potential harms, while also fostering innovation and protecting free speech.
  • The success of this bill and similar efforts may depend on striking the right balance between holding platforms accountable and preserving the benefits of Section 230 protections for the broader internet ecosystem.
