
AI code hallucinations are creating a new cybersecurity threat as criminals exploit a weakness in AI coding assistants. Research has identified more than 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. These fictional software components give attackers an opening: by registering malware under the same names, they ensure malicious code is downloaded whenever programmers request the non-existent packages their AI assistants suggest.

The big picture: AI-generated code hallucinations have evolved into a sophisticated form of supply chain attack called “slopsquatting,” where cybercriminals study the package names AI models hallucinate and publish malware under those same names.

  • When an AI model hallucinates a non-existent software package and a developer requests it, the package registry serves the attacker’s malware instead of an error message.
  • The malicious code then becomes integrated into the final software product, often undetected by developers who trust their AI coding assistants.
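One practical defense against this install path is to screen AI-suggested dependencies before they are ever installed. Below is a minimal sketch of that idea; the allowlist is a hypothetical stand-in for an organization's vetted-package list, not a real tool's API:

```python
# Hypothetical allowlist of packages an organization has already vetted.
VETTED_PACKAGES = {"requests", "numpy", "flask"}

def flag_unvetted(suggested: list[str]) -> list[str]:
    """Return AI-suggested package names that are not pre-approved."""
    return [name for name in suggested if name.lower() not in VETTED_PACKAGES]

# An AI assistant might emit a plausible-sounding but non-existent package
# alongside real ones; the gate surfaces it before `pip install` runs.
suggestions = ["requests", "httpx-auth-helper"]
print(flag_unvetted(suggestions))  # → ['httpx-auth-helper']
```

The point of the gate is that a hallucinated name fails loudly at review time rather than silently resolving to whatever an attacker has registered under it.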

The technical vulnerability: Smaller open-source AI models used for local coding show particularly high hallucination rates when generating dependencies for software projects.

  • CodeLlama 7B demonstrated the worst performance with a 25% hallucination rate when generating code.
  • Other problematic models include Mistral 7B and OpenChat 7B, which frequently create fictional package references.

Historical context: This technique builds on earlier “typosquatting” attacks, in which hackers published malware under misspelled versions of legitimate package names.

  • A notable example was the “electorn” malware package, which mimicked the popular Electron application framework.
  • Modern application development’s heavy reliance on downloaded components (dependencies) makes these attacks particularly effective.

Why this matters: AI coding tools automatically request dependencies during the coding process, creating a new attack vector that’s difficult to detect.

  • The rise of AI-assisted programming will likely increase these opportunistic attacks as more developers rely on automation.
  • The malware can be subtly integrated into applications, creating security risks for end users who have no visibility into the underlying code.

Where we go from here: Security researchers are developing countermeasures to address this emerging threat.

  • Efforts are focused on improving model fine-tuning to reduce hallucinations in the first place.
  • New package verification tools are being developed to identify these hallucinations before code enters production.
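One shape such a verification tool can take is a pre-production check that parses a project's dependency list and flags any name absent from the registry. The sketch below assumes a locally cached snapshot of registry names (a tiny hypothetical sample here; a real tool would refresh it from the registry's index):

```python
# Hypothetical cached snapshot of names known to exist on the registry.
KNOWN_REGISTRY_NAMES = {"requests", "flask", "numpy", "pandas"}

def verify_requirements(requirements: str) -> list[str]:
    """Parse a requirements.txt-style string; return names not on the registry."""
    unknown = []
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Keep only the package name, stripping common version specifiers.
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip()
        if name.lower() not in KNOWN_REGISTRY_NAMES:
            unknown.append(name)
    return unknown

reqs = "requests==2.31.0\nflask>=2.0\nfastjson-utils  # hallucinated?\n"
print(verify_requirements(reqs))  # → ['fastjson-utils']
```

Run in CI, a check like this turns a hallucinated dependency into a failed build rather than a shipped vulnerability.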
