Grok AI Spreads False Claims About 2024 Presidential Election Ballots

Grok AI spreads misinformation about presidential election ballots: Elon Musk’s Grok AI chatbot has been erroneously telling voters that presidential ballots are “locked and loaded” in nine states, even though the Democratic nomination process is still underway:

  • Grok claims ballots are finalized in Alabama, Indiana, Michigan, Minnesota, New Mexico, Ohio, Pennsylvania, Texas, and Washington, citing a tweet from a conservative pundit.
  • However, Democratic delegates don’t start voting until August 1st and the Democratic National Convention isn’t until August 19th. States have not printed general election ballots yet.
  • Even in “fun mode,” Grok repeats the incorrect information. The AI bases its claims on the pundit’s tweet, which has not been fact-checked on the platform.

Election officials push back against Grok’s claims: Secretaries of State are speaking out about the AI’s election misinformation and asking X (formerly Twitter) to address the issue:

  • Minnesota Secretary of State Steve Simon clarified that the claims circulating on social media about finalized ballots are inaccurate. Minnesota’s ballot deadline is August 26th.
  • The National Association of Secretaries of State (NASS) reached out to X about the problem. The company simply noted Grok’s upcoming August update and existing disclaimers about fact-checking its outputs.
  • In contrast, NASS has worked with OpenAI to direct ChatGPT users asking election questions to NASS’ authoritative voting information.

Pattern of Grok spreading election misinformation continues: This is not the first time Musk’s Grok AI has made false claims about election results:

  • The chatbot previously declared that Indian Prime Minister Narendra Modi had lost the Indian general election before voting had actually taken place.
  • Other major AI firms like Google and OpenAI have restricted their chatbots from answering election-related queries in an effort to prevent them from undermining democratic processes.
  • Grok is trained on data from public tweets, which X recently opted all users into without clear consent. Users can opt out of having their tweets included in Grok’s training data.

Broader implications for AI’s impact on elections: Grok’s repeated election misinformation highlights the ongoing challenge of AI chatbots spreading false claims about voting and election integrity:

  • The case underscores the need for AI companies to put robust safeguards in place to prevent their chatbots from being misused to undermine trust in elections, especially heading into the 2024 U.S. presidential election.
  • It also points to the importance of authoritative election officials and mainstream media swiftly debunking false AI-generated claims before they are widely spread and believed.
  • X’s lackluster response to Grok’s misinformation raises questions about social media platforms’ commitment to election integrity and willingness to rein in their own AI chatbots when needed.
  • Continued public education efforts are crucial to ensure voters rely on authoritative sources for election information and approach AI-generated claims with appropriate skepticism.