
California AG addresses AI and tech companies on election integrity: Rob Bonta, California’s Attorney General, has issued a warning to major technology and artificial intelligence companies regarding the potential spread of election misinformation through their platforms.

  • Bonta sent letters to the CEOs of Alphabet, Meta, Microsoft, OpenAI, Reddit, TikTok, X, and YouTube, emphasizing the importance of preventing voter intimidation, deception, and dissuasion.
  • The timing of the letters coincides with the first televised debate between Vice President Kamala Harris and former President Donald Trump, highlighting the relevance of the issue in the current political climate.

Key legal considerations: The Attorney General’s letter outlines specific prohibitions under California’s election code that tech companies must be aware of and enforce on their platforms.

  • Companies are prohibited from disseminating intentionally false information about voter eligibility or polling locations.
  • Distributing misleading media about candidates is prohibited within 60 days of an election.
  • Voter intimidation and bribery to influence voting decisions are strictly forbidden under state election laws.

Tech companies’ role in information dissemination: Bonta emphasized the critical position these companies hold in shaping public opinion and providing election-related information.

  • The platforms are primary sources of news and guidance about elections for many Californians.
  • Tech companies are well-positioned to ensure users have access to accurate voting information.
  • Bonta urged the companies to train experts specifically to identify and combat voter deception on their platforms.

AI-related concerns: The letter highlights recent incidents where artificial intelligence has been used to interfere with election processes.

  • A notable example was an AI-generated version of President Joe Biden’s voice used to discourage voting in the New Hampshire Democratic primary.
  • Some AI image generators, like xAI’s Grok-2, have drawn criticism for their willingness to depict political figures in compromising situations.
  • Other AI companies, such as OpenAI, have implemented stricter controls on political content generation.

Platform-specific issues: The Attorney General’s letter addresses concerns about content moderation policies and resources across different platforms.

  • X (formerly Twitter) has faced scrutiny for reducing its content moderation teams following Elon Musk’s acquisition.
  • Alphabet and Meta have also been criticized for layoffs affecting their content moderation capabilities.
  • YouTube reversed a policy that removed content disputing the 2020 election results, now allowing some false claims to remain online.

Company responses: Several companies addressed in the letter have provided statements or referenced existing policies related to election integrity.

  • X acknowledged the Attorney General’s concerns and expressed willingness to continue communication on these challenges.
  • Meta pointed to its policy of labeling AI-generated content across its platforms.
  • Reddit highlighted its content policy prohibiting AI-generated misleading content and its use of AI to flag harmful material.

Legislative action: California lawmakers are also taking steps to address AI-driven political misinformation.

  • AB 2655, a bill awaiting Governor Gavin Newsom’s decision, would require large social media platforms to remove political deepfakes intended to mislead voters within a specified period before elections.

Broader implications: The Attorney General’s warning underscores the growing concern about the impact of AI and social media on election integrity.

  • As AI technology becomes more sophisticated, the potential for creating and spreading convincing misinformation increases.
  • The responsibility of tech companies in maintaining the integrity of public discourse and democratic processes is under intensifying scrutiny.
  • Balancing free speech with the need to combat deliberate misinformation remains a complex challenge for both lawmakers and tech platforms.
