Bipartisan Bill Aims to Carve Out AI Deepfakes from Section 230

A bipartisan bill aims to combat AI deepfakes by amending Section 230 protections for tech platforms that fail to address the issue, potentially signaling a new approach to regulating online harms.

Key provisions of the Intimate Privacy Protection Act: The proposed legislation, introduced by Reps. Jake Auchincloss (D-MA) and Ashley Hinson (R-IA), targets cyberstalking, intimate privacy violations, and digital forgeries:

  • The bill would amend Section 230 of the Communications Act of 1934, removing legal immunity for platforms that fail to combat these harms.
  • It establishes a “duty of care” for platforms, requiring them to have a reasonable process for addressing the specified issues, including measures to prevent privacy violations, a clear reporting mechanism, and a 24-hour removal process.

Bipartisan concern over AI deepfakes: Both Auchincloss and Hinson emphasized that tech platforms should not be able to use Section 230 as a shield against responsibility for malicious deepfakes and digital forgeries spreading on their services:

  • Auchincloss stated, “Congress must prevent these corporations from evading responsibility over the sickening spread of malicious deepfakes and digital forgeries on their platforms.”
  • Hinson added, “Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations.”

Growing momentum to combat intimate AI deepfakes: Lawmakers and companies alike appear increasingly motivated to address sexually explicit AI deepfakes.

Potential shift in Section 230 reform efforts: The Intimate Privacy Protection Act’s inclusion of a “duty of care” mirrors the approach used in the Kids Online Safety Act, which is expected to pass the Senate with overwhelming support:

  • This suggests that establishing a duty of care for platforms is emerging as a popular way to create new protections on the internet.
  • Historically, Republicans and Democrats have struggled to agree on how Section 230 should be changed, with FOSTA-SESTA (which removed Section 230 immunity for content that facilitates sex trafficking) standing as a notable exception.

Broader implications for AI regulation and online harms: The bipartisan support for the Intimate Privacy Protection Act highlights growing concern over the misuse of AI-generated content and the need for platforms to take responsibility for addressing these harms:

  • The bill’s focus on amending Section 230 protections could set a precedent for future legislation targeting online harms, potentially shifting the balance of liability between platforms and users.
  • As AI technologies continue to advance, policymakers will need to grapple with the challenges of regulating their use and mitigating potential harms, while also fostering innovation and protecting free speech.
  • The success of this bill and similar efforts may depend on striking the right balance between holding platforms accountable and preserving the benefits of Section 230 protections for the broader internet ecosystem.
