AI hallucination bug spreads malware through “slopsquatting”

Hallucinations in AI-generated code are creating a new cybersecurity threat as criminals exploit them to distribute malware. Research has identified over 205,000 hallucinated package names generated by AI models, with smaller open-source models such as CodeLlama and Mistral being the worst offenders. These fictional software components give attackers an opening: by publishing malware under the same names, they can ensure malicious code is delivered whenever a programmer installs a non-existent package suggested by an AI assistant.

The big picture: AI code hallucinations have given rise to a new form of supply chain attack called “slopsquatting,” in which cybercriminals study the package names AI models tend to hallucinate and publish malware under those same names.

  • When an AI model hallucinates a non-existent software package and a developer tries to install it, an attacker who has pre-registered that name on a public registry can serve malware instead of an installation error.
  • The malicious code then gets integrated into the final software product, often undetected by developers who trust their AI coding assistants. A minimal sketch of the mechanism follows this list.
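
To make the install-time danger concrete, here is a minimal, hypothetical sketch of the setup.py an attacker could publish under a hallucinated name. The package name “fastjsonx” is invented purely for illustration, and the payload is replaced by a print statement; the point is that setuptools runs this code with the developer’s privileges the moment `pip install` executes, before any import happens.

```python
# Hypothetical slopsquatted package. "fastjsonx" is an invented name
# standing in for whatever name an AI model hallucinates.
from setuptools import setup
from setuptools.command.install import install

class PostInstall(install):
    def run(self):
        # An attacker's payload would go here; this sketch only prints
        # a marker to show that code executes during `pip install`.
        print("arbitrary code runs at install time")
        super().run()

setup(
    name="fastjsonx",  # hallucinated name the attacker registers
    version="0.0.1",
    cmdclass={"install": PostInstall},
)
```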

The technical vulnerability: Smaller open-source AI models used for local coding show particularly high hallucination rates when generating dependencies for software projects.

  • CodeLlama 7B demonstrated the worst performance with a 25% hallucination rate when generating code.
  • Other problematic models include Mistral 7B and OpenChat 7B, which frequently invent fictional package references. The sketch after this list shows one way such hallucination rates can be measured.
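
As a rough illustration, assuming the Python/PyPI ecosystem, a hallucination rate like those above can be estimated by extracting the package names a model suggests and counting how many are unregistered. The names in the list below are invented stand-ins for real model output.

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError:
        return False  # PyPI answers 404 for unregistered names

# Invented example list standing in for names extracted from model output.
suggested = ["requests", "numpy", "fastjsonx", "torchic-utils"]
missing = [name for name in suggested if not exists_on_pypi(name)]
print(f"hallucination rate: {len(missing) / len(suggested):.0%}, hallucinated: {missing}")
```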

Historical context: This technique builds upon earlier “typosquatting” attacks, where hackers created malware using misspelled versions of legitimate package names.

  • A notable example was the “electorn” malware package, which mimicked the popular Electron application framework.
  • Modern application development’s heavy reliance on downloaded components (dependencies) makes these attacks particularly effective.

Why this matters: AI coding tools automatically request dependencies during the coding process, creating a new attack vector that’s difficult to detect.

  • The rise of AI-assisted programming will likely increase these opportunistic attacks as more developers rely on automation.
  • The malware can be subtly integrated into applications, creating security risks for end users who have no visibility into the underlying code.

Where we go from here: Security researchers are developing countermeasures to address this emerging threat.

  • Efforts are focused on improving model fine-tuning to reduce hallucinations in the first place.
  • New package verification tools are being developed to catch hallucinated dependencies before code enters production; a hedged sketch of such a check follows.
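
As one illustration of what such a verification tool might do, here is a sketch that vets each pinned dependency in a requirements.txt against PyPI: it confirms the name is actually registered and flags packages registered only recently, since a freshly registered name that matches an AI suggestion is a classic slopsquatting signature. The 30-day threshold and the use of upload timestamps as a proxy for registration age are illustrative assumptions, not an established standard.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

FLAG_AGE_DAYS = 30  # assumed heuristic threshold, not an established standard

def vet_dependency(name: str) -> str:
    """Check that a dependency exists on PyPI and is not suspiciously new."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError:
        return "MISSING: possible hallucination, do not install"
    # The earliest upload across all releases approximates registration age;
    # a name registered only days ago that matches an AI suggestion is a red flag.
    uploads = [
        f["upload_time_iso_8601"].replace("Z", "+00:00")
        for files in meta.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        return "NO RELEASES: suspicious placeholder"
    first = min(datetime.fromisoformat(u) for u in uploads)
    age_days = (datetime.now(timezone.utc) - first).days
    return f"FLAG: registered only {age_days} days ago" if age_days < FLAG_AGE_DAYS else "ok"

# Vet every pinned dependency in a requirements file before it ships.
with open("requirements.txt") as reqs:
    for line in reqs:
        name = line.split("==")[0].strip()
        if name and not name.startswith("#"):
            print(f"{name}: {vet_dependency(name)}")
```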