AI In Political Advertising: State And Federal Regulations In Focus
AI’s growing role in political advertising: Recent advances in generative artificial intelligence (AI) have prompted lawmakers and regulators to address potential risks in political advertising, particularly as the 2024 election cycle approaches.
- State and federal efforts are underway to regulate the use of AI in political ads, with a focus on transparency and preventing the spread of misinformation.
- As of August 2024, 16 states have adopted laws governing AI-generated content in political advertising, while another 16 states have bills under consideration.
- The Federal Communications Commission (FCC) has proposed a new rule requiring television and radio broadcast stations to disclose AI-generated content in political ads.
State-level regulations: Most state laws focus on disclosure requirements rather than outright bans on AI-generated content in political advertising.
- The majority of states with new laws require clear and conspicuous disclosure when political ads include AI-generated or manipulated content.
- Some states impose disclosure requirements only during specific “electioneering” periods before primary or general elections.
- Minnesota stands out by prohibiting the circulation of unauthorized deepfakes of political candidates intended to influence elections or harm candidates.
- Many states allow candidates depicted in deceptive AI-generated content to seek injunctive relief or damages against ad sponsors.
FCC’s proposed rule: The Federal Communications Commission is taking steps to address AI-generated content in broadcast political advertising.
- The proposed rule would require television and radio stations to include standardized on-air disclosures for political ads containing AI-generated content.
- Broadcasters would be required to ask advertisers whether their ads use AI-generated content and to air a uniform disclosure message for ads that do.
- The FCC aims to inform consumers about AI-generated content without banning such advertising or judging its truthfulness.
- Due to the lengthy rulemaking process, this rule is unlikely to be in effect before the 2024 general election.
Challenges at the federal level: Not all efforts to regulate AI in political advertising have been successful.
- The Federal Election Commission (FEC) is expected to close a petition for new rules on AI-generated campaign ads without further action.
- Congressional attempts to pass legislation requiring disclaimers on AI-generated political ads have stalled.
- The proposed “NO FAKES Act,” which would create a federal right of publicity covering digital replicas, is unlikely to affect the upcoming election.
Social media and user-generated content: New regulations so far have limited reach over AI-generated political content spread through social media.
- Many state laws and proposed regulations focus on traditional political advertising, potentially leaving a gap in addressing AI-generated memes and user-created content on social media platforms.
- AI providers like Google, OpenAI, Meta, and Anthropic have implemented policies and technical barriers to restrict the creation of deceptive deepfakes.
- Some platforms, like X (formerly Twitter), take a more permissive approach to AI-generated content in the name of free speech.
Looking ahead: The evolving landscape of AI regulation in political advertising presents both opportunities and challenges.
- While new laws and regulations aim to increase transparency and combat misinformation, their effectiveness in addressing AI-generated content spread through social media and other online channels remains uncertain.
- The ongoing development of AI technologies and the approaching 2024 election cycle will likely continue to drive discussions around appropriate regulatory measures and industry best practices.
- Balancing free speech concerns with the need to protect voters from deceptive AI-generated content will remain a key challenge for lawmakers, regulators, and technology companies in the coming years.