A new study from Google DeepMind reveals that the most common misuse of AI is creating political deepfakes to sway public opinion, raising concerns about the impact on elections and the spread of misinformation.
Key findings: The research, conducted in collaboration with Google’s Jigsaw unit, analyzed around 200 incidents of AI misuse and found that:
- Creating realistic fake images, videos, and audio of politicians and celebrities was the most prevalent form of misuse, nearly twice as common as the next most frequent category.
- Shaping public opinion was the primary goal, accounting for 27% of misuse cases, followed by financial gain, such as offering paid services to generate deepfakes or produce fake news articles.
- Most incidents involved easily accessible tools requiring minimal technical expertise, enabling a wider range of bad actors to misuse AI.
Implications for elections and democracy: The prevalence of political deepfakes is particularly concerning given their potential to influence voters and distort the collective understanding of sociopolitical reality:
- Deepfakes of global leaders, including UK Prime Minister Rishi Sunak, have appeared on various social media platforms in recent months, ahead of upcoming elections.
- Despite efforts by platforms to label or remove such content, there are fears that audiences may not recognize the fakes, and their dissemination could sway voters.
- Ardi Janjeva from The Alan Turing Institute emphasized the long-term risks to democracies posed by the distortion of publicly accessible information through AI-generated content.
Industry response and future steps: As major tech companies rush to release generative AI products to the public, they are beginning to monitor the flood of misinformation and harmful content created by their tools:
- OpenAI recently revealed that covert influence operations linked to Russia, China, Iran, and Israel had been using its tools to create and spread disinformation.
- Google DeepMind's research will inform how it improves its own model safety evaluations and is intended to shape how competitors and other stakeholders understand how AI-related harms materialize.
- The findings highlight the need for continued monitoring, research, and development of strategies to mitigate the misuse of AI, particularly in the context of political manipulation.
Analyzing deeper: While the study sheds light on the current landscape of AI misuse, it also underscores the difficulty of staying ahead of malicious actors as the technology grows more accessible and sophisticated. As generative AI tools become woven into more aspects of society, policymakers, tech companies, and the public will need to remain vigilant and proactive about the risks to democracy and the integrity of information. Continued research and collaboration among stakeholders will be essential to developing effective countermeasures and building a more resilient information ecosystem against AI-powered manipulation.