
AI’s limited impact on European elections: Recent research suggests that AI-generated falsehoods and deepfakes had minimal effect on the results of the UK, French, and European Parliament elections in 2024.

  • Sam Stockwell, a researcher at the Alan Turing Institute, studied the three elections over a four-month period from May to August 2024.
  • The study identified only 16 cases of AI-enabled falsehoods or deepfakes that went viral during the UK general election and 11 cases in the EU and French elections combined.
  • None of these cases appeared to sway any election result decisively.

Reasons for AI’s ineffectiveness: The study reveals several factors contributing to the limited impact of AI-generated content on election outcomes.

  • Most people exposed to AI-generated disinformation already held beliefs aligned with the underlying message, such as concerns about high immigration levels.
  • Those who actively engaged with and amplified deepfake messages typically held preexisting views aligned with the content, so the material reinforced existing beliefs rather than swaying undecided voters.
  • Traditional election interference tactics, like using bots to flood comment sections and exploiting influencers to spread falsehoods, remained more effective than AI-generated content.

Current use of AI in disinformation: While AI tools were employed in some capacity, their use was limited and not significantly more effective than traditional methods.

  • Bad actors primarily used generative AI to rewrite news articles with their own spin or create additional online content for disinformation purposes.
  • Felix Simon, a researcher at the Reuters Institute for the Study of Journalism, notes that AI currently offers little advantage, as simpler methods of creating false or misleading information remain prevalent.

Challenges in assessing AI’s impact: Experts caution that it’s still difficult to draw firm conclusions about AI’s influence on elections at this stage.

  • Samuel Woolley, a disinformation expert at the University of Pittsburgh, points out the lack of sufficient data and the potential for less obvious, downstream impacts on civic engagement.
  • Stockwell acknowledges that early evidence suggests AI-generated content could be more effective for harassing politicians and sowing confusion than changing people’s opinions on a large scale.

Emerging concerns: The research highlights potential long-term risks associated with AI-generated content in the political sphere.

  • Politicians, including former UK Prime Minister Rishi Sunak, were targeted by AI deepfakes showing them promoting scams or admitting to financial corruption.
  • Female candidates faced nonconsensual sexual deepfake content intended to disparage and intimidate them.
  • The growing difficulty of distinguishing authentic from AI-generated content during elections raises concerns about the integrity of political processes.

Political exploitation of AI: Some politicians have begun to take advantage of the confusion surrounding AI-generated content.

  • In the European Parliament elections in France, political candidates shared AI-generated content amplifying anti-immigration narratives without disclosing its artificial origin.
  • Felix Simon warns that this covert engagement and lack of transparency by political actors may present a greater risk to the integrity of political processes than the use of AI by the general population or “bad actors.”

Looking ahead: While the immediate impact of AI on elections appears limited, researchers warn of potential future risks to democratic processes.

  • The ongoing harassment and targeting of politicians with AI-generated content could have a chilling effect on their willingness to participate in future elections and harm their well-being.
  • The increasing difficulty in distinguishing between real and AI-generated content may erode trust in political information and institutions over time.
  • As AI technology continues to advance, its potential to influence elections and political discourse may grow, necessitating ongoing vigilance and research into its effects on democratic processes.
