FCC Proposes AI Disclosure Rule for Robocalls, Aiming to Protect Consumers

The FCC chair has proposed a new rule requiring robocalls to disclose the use of artificial intelligence (AI), aiming to protect consumers and enable informed decisions regarding these automated calls.

Key elements of the proposed rule:

  • When obtaining a consumer's prior express consent, callers would need to disclose their intent to use AI-generated calls.
  • On each individual call, callers would be required to tell consumers that they are receiving an AI-generated call.
  • The rule would define "AI-generated calls," establishing guardrails around the use of this emerging technology in robocalls.

Rationale behind the proposal: FCC Chair Jessica Rosenworcel emphasized the need for these measures, stating that bad actors are already employing AI technology in robocalls to mislead consumers and spread misinformation. The proposed rule aims to empower consumers to avoid unwanted calls and make informed choices.

Next steps for the proposed rule:

  • The full commission will consider the proposal during their August meeting.
  • If adopted, the rule would build on the FCC's February 2024 ruling, which declared AI-generated voices in robocalls illegal and gave the agency the power to fine robocallers and to block calls from telephone carriers that facilitate illegal robocalls.

Broader implications: As AI technologies continue to advance and become more accessible, regulatory bodies like the FCC are taking proactive steps to ensure consumer protection and transparency in the face of potential misuse. This proposed rule represents an effort to strike a balance between technological innovation and safeguarding the public from deceptive or unwanted automated communications.
