'Slopsquatting': New Software Supply Chain Threat Emerges from AI Hallucinations, Putting CPG Industry Security at Risk
April 14, 2025
Recent research from a collaboration between the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma has identified a new software supply chain threat known as 'slopsquatting,' in which code-generating Large Language Models (LLMs) hallucinate fictitious package names that malicious actors can exploit.
The study found that nearly 20% of the packages recommended by LLMs do not exist, with open-source models hallucinating more frequently than their commercial counterparts.
Worse, 58% of hallucinated package names recurred across multiple runs, making them predictable enough for attackers to register in advance on public registries and wait for developers to install them.
While LLMs like Copilot and ChatGPT enhance productivity in software development, they also introduce new risks, necessitating a careful approach to package management.
Given that surveys indicate up to 97% of developers use GenAI tools, effective error mitigation is critical to guard against the risks posed by AI hallucinations.
To combat slopsquatting, developers are encouraged to verify package sources, employ automated tools to detect potential threats, and implement stringent tracking and verification of code dependencies.
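As a minimal illustration of such automated vetting, the Python sketch below checks AI-suggested names against PyPI's public JSON API (the candidate list here is hypothetical); a non-200 response means the project is not published on the registry:

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published project on PyPI (HTTP 200 from the JSON API)."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical AI-suggested dependencies to vet before installing.
suggested = ["requests", "definitely-not-a-real-package-xyz"]

for name in suggested:
    if package_exists_on_pypi(name):
        print(f"{name}: found on PyPI")
    else:
        print(f"{name}: NOT on PyPI -- possible hallucinated name")
```

Note that a name resolving on the registry is not proof of safety: a slopsquatter may already have registered it, so existence checks complement rather than replace human review.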
To reduce hallucinations in the first place, researchers suggest techniques such as Retrieval-Augmented Generation (RAG), self-refinement, and fine-tuning of LLMs.
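As a rough sketch of the RAG idea applied to this problem, one approach is to retrieve package names from a vetted internal index and constrain the model to them. The index, prompt format, and topic matching below are illustrative assumptions, not any specific product's API; the LLM call itself is omitted:

```python
# Hypothetical vetted index mapping task topics to approved package names.
KNOWN_PACKAGES = {
    "http client": ["requests", "httpx", "urllib3"],
    "data frames": ["pandas", "polars"],
}

def build_grounded_prompt(task: str) -> str:
    """Retrieve vetted package names relevant to the task and pin the model to them."""
    candidates = [pkg for topic, pkgs in KNOWN_PACKAGES.items() if topic in task for pkg in pkgs]
    return (
        f"Task: {task}\n"
        f"Only recommend packages from this vetted list: {', '.join(candidates)}.\n"
        "If none fit, say so rather than inventing a package name."
    )

print(build_grounded_prompt("write an http client wrapper"))
```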
Ultimately, the only reliable way to mitigate slopsquatting risks is manual verification of package names; assuming that AI-generated code snippets are safe invites significant vulnerabilities.
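A reviewer doing that manual check might pull basic provenance signals from the registry, such as release history and project links, before trusting a name. The sketch below (again assuming PyPI's JSON API) summarizes such signals for a human to judge:

```python
import requests

def pypi_review_summary(name: str) -> str:
    """Fetch registry metadata a human reviewer might weigh before trusting a package."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return f"{name}: not on PyPI (possible hallucinated name)"
    data = resp.json()
    releases = data.get("releases", {})
    uploads = [f["upload_time_iso_8601"] for files in releases.values() for f in files]
    first_seen = min(uploads) if uploads else "no uploaded files"
    home = data["info"].get("home_page") or "none listed"
    return f"{name}: {len(releases)} release(s), first upload {first_seen}, homepage: {home}"

print(pypi_review_summary("requests"))
```

A brand-new project with a single release and no linked homepage deserves far more scrutiny than a long-established one.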
For executives in the Consumer Packaged Goods (CPG) industry, slopsquatting presents significant financial, reputational, and regulatory risks, especially amid ongoing tariff challenges and declining consumer trust.
As CPG companies grapple with high tariffs and reduced consumer spending, slopsquatting could further exacerbate their vulnerabilities and operational disruptions.
Experts therefore recommend that CPG leaders implement dependency verification policies (one concrete form is sketched below), enhance developer training, and prepare robust incident response plans.
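One concrete shape such a dependency verification policy can take is pip's hash-checking mode, which rejects any artifact whose digest has not been pre-approved; the digest shown here is a placeholder, not a real hash:

```
# requirements.txt pinned with artifact hashes (the digest shown is a placeholder)
flask==3.0.3 --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# Install in hash-checking mode; pip rejects anything unpinned or mismatched:
#   pip install --require-hashes -r requirements.txt
```

Under such a policy, a hallucinated or newly squatted package can never slip into a build unnoticed, because every dependency must be explicitly reviewed and pinned first.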
As the software development landscape evolves, the balance between innovation through automation and the responsibility to secure its outputs becomes increasingly critical.
Summary based on 12 sources
Sources

TechRadar pro • Apr 14, 2025
"Slopsquatting" attacks are using AI-hallucinated names resembling popular libraries to spread malware
CSO Online • Apr 14, 2025
AI hallucinations lead to a new cyber threat: Slopsquatting
BleepingComputer • Apr 11, 2025
AI-hallucinated code dependencies become new supply chain risk
SecurityWeek • Apr 14, 2025
AI Hallucinations Create a New Software Supply Chain Threat