LLMjacking: Hackers Exploit Cloud AI Models with Stolen Credentials
May 10, 2024
A new cyberattack scheme, dubbed 'LLMjacking', targets cloud-hosted large language model (LLM) services using stolen credentials.
Attackers gained initial access by exploiting a vulnerability in the Laravel Framework, harvesting cloud credentials that were then used to reach cloud-hosted LLM services.
The operation relies on Python scripts to validate stolen keys against LLM services and on reverse proxies to broker access to compromised accounts without exposing the underlying credentials.
The goal is to monetize access to the LLMs, imposing substantial consumption costs on the victim organization and potentially disrupting its operations.
Mitigations include enabling detailed cloud logging, monitoring for unusual model-invocation activity (as sketched below), and rigorous vulnerability management.
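The monitoring step above can be approximated with a short script against AWS CloudTrail, which records Bedrock InvokeModel API calls. The sketch below is illustrative only and is not taken from the researchers' tooling; it assumes boto3 credentials with cloudtrail:LookupEvents permission, a us-east-1 region, and a hypothetical EXPECTED_PRINCIPALS allowlist of callers known to use Bedrock legitimately.

# Minimal sketch: query CloudTrail for recent Bedrock InvokeModel calls and
# flag invocations from unexpected IAM principals.
import json
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical allowlist of principals expected to invoke Bedrock models.
EXPECTED_PRINCIPALS = {"bedrock-app-role"}

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)  # look back over the last day

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        caller = event.get("Username", "unknown")
        source_ip = detail.get("sourceIPAddress", "unknown")
        if caller not in EXPECTED_PRINCIPALS:
            print(f"Unexpected InvokeModel call by {caller} from {source_ip} "
                  f"at {event['EventTime']}")

Note that CloudTrail records the API call but not the prompt or response contents; capturing those requires enabling Bedrock model invocation logging separately.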
Summary based on 2 sources
Sources

The Hacker News • May 10, 2024
Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models
Infosecurity Magazine • May 9, 2024
New 'LLMjacking' Attack Exploits Stolen Cloud Credentials