AI-powered coding assistants are creating a new cybersecurity threat as criminals exploit the package names these tools hallucinate. Research has identified over 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. These fictional software components give attackers an opening: by publishing malware under the hallucinated names on public registries, they ensure malicious code gets pulled in whenever programmers install the non-existent packages their AI assistants suggest.
The big picture: AI-generated code hallucinations have given rise to a new form of supply chain attack called “slopsquatting,” where cybercriminals study the package names AI models tend to hallucinate and publish malicious packages under those same names.
The technical vulnerability: Smaller open-source AI models used for local coding show particularly high hallucination rates when generating dependencies for software projects.
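To make the failure mode concrete, here is a minimal sketch of how a developer or researcher might screen AI-suggested dependency names against the public PyPI JSON API (a real endpoint; the helper function and the sample names are illustrative, not taken from the research):

```python
import urllib.request
import urllib.error

def exists_on_pypi(package_name: str) -> bool:
    """Return True if the package name resolves on the PyPI registry."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        # PyPI returns 404 for names that were never registered.
        return False

# Hypothetical names an AI assistant might suggest in generated code.
suggested = ["requests", "fastjson-utils", "numpy"]
for name in suggested:
    status = "registered" if exists_on_pypi(name) else "not on PyPI (possible hallucination)"
    print(f"{name}: {status}")
```

A check like this only catches names nobody has registered yet; once an attacker publishes a package under a hallucinated name, it passes an existence test, which is exactly what makes slopsquatting dangerous.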
Historical context: The technique builds on earlier “typosquatting” attacks, in which hackers published malicious packages under misspelled versions of legitimate package names.
Why this matters: AI coding tools automatically request dependencies during the coding process, creating an attack vector that is hard to detect because the hallucinated names look plausible and are often installed without close human review.
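One way a team can blunt that vector is to refuse any dependency that has not been explicitly approved, regardless of whether it exists upstream. Below is a minimal sketch of that idea for a Python project using a requirements.txt file; the approved set and the line parsing are simplified assumptions for illustration:

```python
import sys

# Hypothetical allowlist a team might maintain; any dependency an AI
# assistant introduces outside this set gets flagged for human review.
APPROVED_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def vet_requirements(path: str) -> list[str]:
    """Return dependency names from a requirements file that are not pre-approved."""
    unapproved = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep only the bare package name (drop extras and version pins).
            name = line.split("[")[0].split("==")[0].split(">=")[0].strip().lower()
            if name not in APPROVED_PACKAGES:
                unapproved.append(name)
    return unapproved

if __name__ == "__main__":
    flagged = vet_requirements(sys.argv[1])
    if flagged:
        print("Needs review before install:", ", ".join(flagged))
        sys.exit(1)
```

The design choice matters: an allowlist still blocks a slopsquatted package after an attacker registers it, whereas a simple registry existence check does not.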
Where we go from here: Security researchers are developing countermeasures to address this emerging threat.
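One plausible direction, sketched here purely as an illustration rather than a description of any specific research effort, is to score suggested packages on registry metadata such as age and release history before allowing an install; the thresholds below are arbitrary assumptions:

```python
import json
import urllib.request
from datetime import datetime, timezone

def package_risk_signals(package_name: str) -> dict:
    """Collect simple heuristics about a PyPI package: age and release count."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    first_upload = min(upload_times) if upload_times else None
    age_days = (datetime.now(timezone.utc) - first_upload).days if first_upload else 0
    return {
        "releases": len(data["releases"]),
        "age_days": age_days,
        # Flag very new packages with little release history as higher risk.
        "suspicious": age_days < 30 or len(data["releases"]) <= 1,
    }

print(package_risk_signals("requests"))
```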