
Leading AI chatbots are spreading Russian misinformation, raising concerns about the reliability and potential dangers of these increasingly popular tools, especially in the context of upcoming elections worldwide.

Key findings from NewsGuard’s study: NewsGuard, a media watchdog organization, found that top AI chatbots repeat Russian disinformation at an alarming rate:

  • When presented with 57 prompts related to narratives created by John Mark Dougan, an American fugitive spreading misinformation from Moscow, the chatbots repeated the false information 32% of the time.
  • The chatbots cited Dougan’s fake local news sites as reliable sources, presenting as fact fabricated reports about a supposed wiretap at Mar-a-Lago and a nonexistent Ukrainian troll factory interfering in U.S. elections.
  • The study covered 10 leading chatbots: OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, chatbots from Inflection and Mistral, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google Gemini, and Perplexity.

Concerns in the context of upcoming elections: The spread of misinformation by AI chatbots is particularly worrying given the upcoming U.S. presidential election and the fact that more than a billion people worldwide will vote in elections this year:

  • Sen. Mark Warner (D-Va.), chair of the Senate Intelligence Committee, expressed concern about the potential increase in misinformation efforts compared to previous election cycles, stating, “This is a real threat at a time when, frankly, Americans are more willing to believe crazy conspiracy theories than ever before.”
  • Despite commitments from leading AI companies to curb the spread of deepfakes and election-related misinformation, Warner remains skeptical about the progress made since the Munich Security Conference earlier this year.

NewsGuard under scrutiny: NewsGuard finds itself under investigation by House Oversight Committee Chair James Comer (R-Ky.), who has raised concerns about the organization’s potential to serve as a “non-transparent agent of censorship campaigns”:

  • NewsGuard rejects these assertions, stating that the committee misunderstands its work with the Defense Department, which is unrelated to rating news sources and focuses on countering disinformation efforts by foreign government-linked operations.
  • The organization plans to address the committee’s misunderstandings while defending its First Amendment rights as a journalism organization.

Broader implications and critical analysis: The NewsGuard study highlights the urgent need for AI companies to prioritize the accuracy and reliability of information provided by their chatbots, especially when it comes to news and controversial topics. As these tools gain popularity, it is crucial to ensure that they do not become vehicles for spreading disinformation and propaganda.

While AI chatbots have the potential to revolutionize the way we access information, their current vulnerability to misinformation raises questions about their trustworthiness and the potential consequences of relying on them for critical information. As the U.S. presidential election and other global elections approach, it is essential for AI companies, policymakers, and users to remain vigilant and proactive in combating the spread of false information through these powerful tools.
