AI Threatens Election Integrity, California AG Warns Tech Giants

California AG addresses AI and tech companies on election integrity: Rob Bonta, California’s Attorney General, has issued a warning to major technology and artificial intelligence companies regarding the potential spread of election misinformation through their platforms.

  • Bonta sent letters to the CEOs of Alphabet, Meta, Microsoft, OpenAI, Reddit, TikTok, X, and YouTube, emphasizing the importance of preventing voter intimidation, deception, and dissuasion.
  • The letters arrived the same week as the first televised debate between Vice President Kamala Harris and former President Donald Trump, underscoring the issue's urgency in the current election cycle.

Key legal considerations: The Attorney General’s letter outlines specific prohibitions under California’s election code that tech companies must be aware of and enforce on their platforms.

  • Companies are prohibited from disseminating intentionally false information about voter eligibility or polling locations.
  • Distributing materially deceptive media about a candidate within 60 days of an election is prohibited.
  • Voter intimidation and bribery to influence voting decisions are strictly forbidden under state election laws.

Tech companies’ role in information dissemination: Bonta emphasized the critical position these companies hold in shaping public opinion and providing election-related information.

  • The platforms are primary sources of news and guidance about elections for many Californians.
  • Tech companies are well-positioned to ensure users have access to accurate voting information.
  • Bonta urged the companies to train experts specifically to identify and combat voter deception on their platforms.

AI-related concerns: The letter highlights recent incidents where artificial intelligence has been used to interfere with election processes.

  • A notable example was an AI-generated version of President Joe Biden’s voice used to discourage voting in the New Hampshire Democratic primary.
  • Some AI image generators, like xAI’s Grok-2, have drawn criticism for their willingness to depict political figures in compromising situations.
  • Other AI companies, such as OpenAI, have implemented stricter controls on political content generation.

Platform-specific issues: The Attorney General’s letter addresses concerns about content moderation policies and resources across different platforms.

  • X (formerly Twitter) has faced scrutiny for reducing its content moderation teams following Elon Musk’s acquisition.
  • Alphabet and Meta have also been criticized for layoffs affecting their content moderation capabilities.
  • YouTube reversed a policy that removed content disputing the 2020 election results, now allowing some false claims to remain online.

Company responses: Several companies addressed in the letter have provided statements or referenced existing policies related to election integrity.

  • X acknowledged the Attorney General’s concerns and expressed willingness to continue communication on these challenges.
  • Meta pointed to its policy of labeling AI-generated content across its platforms.
  • Reddit highlighted its content policy prohibiting AI-generated misleading content and its use of AI to flag harmful material.

Legislative action: California lawmakers are also taking steps to address AI-driven political misinformation.

  • AB 2655, a bill awaiting Governor Gavin Newsom’s decision, would require large social media platforms to remove political deepfakes intended to mislead voters within a specified period before elections.

Broader implications: The Attorney General’s warning underscores the growing concern about the impact of AI and social media on election integrity.

  • As AI technology becomes more sophisticated, the potential for creating and spreading convincing misinformation increases.
  • The responsibility of tech companies in maintaining the integrity of public discourse and democratic processes is under intensifying scrutiny.
  • Balancing free speech with the need to combat deliberate misinformation remains a complex challenge for both lawmakers and tech platforms.
