Public skepticism of AI in elections: A new survey reveals that most Americans are wary of using artificial intelligence tools for obtaining election-related information, highlighting concerns about the technology’s reliability and potential for misinformation.
- According to a poll conducted by The Associated Press-NORC Center for Public Affairs Research and USAFacts, roughly two-thirds of US adults have little to no confidence that AI chatbots or AI-assisted search results will provide reliable and factual information.
- The skepticism is particularly pronounced when it comes to high-stakes events such as elections, with many respondents doubting AI’s ability to deliver accurate information about voting processes and candidates.
- This wariness persists despite the increasing integration of AI-powered tools in personal and professional settings, suggesting a disconnect between adoption and trust in these technologies.
AI’s performance in election-related tasks: Recent evaluations have raised concerns about the accuracy and reliability of AI tools when handling basic election-related queries.
- Earlier this year, a gathering of election officials and AI researchers found that AI tools performed poorly on relatively simple questions, such as where to find the nearest polling place.
- Last month, several secretaries of state warned that the AI chatbot built into the social media platform X was spreading inaccurate election information, prompting the platform to adjust the tool so that it directs users to a federal government website for reliable voting information.
- These incidents underscore the challenges AI faces in providing accurate and trustworthy information about complex and sensitive topics like elections.
Public perception of AI’s impact on election information: The survey reveals a split in opinions regarding AI’s influence on access to accurate election information.
- Approximately 40% of Americans believe that the use of AI will make it more difficult to find factual information about the 2024 election.
- Another roughly 40% expect AI to make little difference, saying it will make finding accurate information neither easier nor harder.
- Only 16% of respondents believe that AI will make it easier to access accurate election information, highlighting a significant lack of confidence in the technology’s potential benefits in this context.
Concerns about AI-generated misinformation: The poll also sheds light on growing worries about the potential for AI to create and spread misleading content during election cycles.
- Some respondents, like 21-year-old Griffin Ryan, express particular concern about AI-generated deepfakes and AI-fueled bot accounts on social media potentially swaying voter opinions.
- There have already been instances of AI being used to create fake images of prominent candidates, reinforcing negative narratives or spreading false information.
- A notable example includes AI-generated robocalls imitating President Joe Biden’s voice to discourage voters from participating in New Hampshire’s January primary.
Preferred sources of election information: The survey highlights a preference for traditional and official sources of election information among many Americans.
- Some respondents, like 71-year-old Bevellie Harris, prefer obtaining election information from official government sources, such as voter pamphlets mailed to citizens before elections.
- Others rely on mainstream news outlets, including CNN, BBC, NPR, The New York Times, and The Wall Street Journal, for election-related news and information.
- This preference for established sources suggests a continued trust in traditional media and government communications for crucial civic information.
Broader implications for AI adoption and trust: The survey’s findings point to larger issues surrounding the integration of AI technologies in sensitive areas of public life.
- The gap between AI’s growing presence in daily life and the public’s lack of trust in its ability to handle high-stakes tasks such as providing election information highlights a significant challenge for AI developers and policymakers.
- As AI continues to advance, addressing concerns about reliability, accuracy, and the potential for misuse will be crucial for building public trust and ensuring responsible deployment of these technologies in critical domains like elections and civic engagement.