
LLMjacking Expands: DeepSeek Becomes Latest Target

The rise of LLMjacking, a cyber threat targeting large language models (LLMs), continues to gain momentum, with attackers rapidly adapting to emerging AI models. The latest target is DeepSeek, a Chinese-developed LLM that gained widespread attention after the release of its advanced models, DeepSeek-V3 and DeepSeek-R1. According to new research from the Sysdig Threat Research Team (TRT), cybercriminals wasted no time integrating stolen DeepSeek API credentials into their operations, using the unauthorized access to run up costs for legitimate account holders while exploiting the AI resources for themselves.

The Growing Threat of LLMjacking

First identified by Sysdig TRT in May 2024, LLMjacking involves the theft of cloud-based AI credentials, allowing attackers to operate AI models at the expense of legitimate users. The financial burden on victims is substantial, with unauthorized usage costing cloud account owners tens of thousands of dollars in a matter of days.

Since September 2024, LLMjacking attacks have surged, prompting increased scrutiny from cybersecurity experts as well as legal action. Microsoft, for instance, recently filed a lawsuit against criminals who used stolen credentials to exploit its generative AI services, including DALL-E.

DeepSeek Becomes a Prime Target

DeepSeek’s growing popularity made it an inevitable target. Just days after the release of DeepSeek-V3 in December 2024, attackers integrated the model into unauthorized OpenAI Reverse Proxy (ORP) instances hosted on platforms like Hugging Face. By January 2025, the same bad actors had already implemented DeepSeek-R1, further demonstrating how swiftly they incorporate new AI technologies into illicit operations.

Sysdig TRT identified over 55 DeepSeek API keys being misused within these ORP environments. These proxy servers sit between users and the AI services as intermediaries, letting cybercriminals share stolen credentials while masking where requests actually originate, which makes it difficult to trace an attack back to its source.

The Business of LLMjacking: Proxies and Underground Markets

Cybercriminals have turned LLMjacking into a lucrative business. Stolen credentials are sold via dark web marketplaces, with some ORP services offering monthly access tokens for as little as $30. One such proxy, hosted on vip[.]jewproxy[.]tech, resets its statistics periodically but has logged millions of token requests, costing compromised account owners nearly $50,000 in unauthorized usage over just four days.

The demand for stolen AI access has fueled an underground economy where cybercriminals advertise services, obfuscate proxy links, and create invite-only communities on platforms like 4chan and Discord. These communities not only share access to illicit AI services but also exchange tools and techniques to further exploit cloud resources.

Credential Theft and Verification Tools

The foundation of LLMjacking is stolen cloud credentials. Attackers acquire these by exploiting vulnerabilities in web applications, such as those built on the Laravel framework, or by scraping exposed keys from public code repositories. Once obtained, they use automated scripts to verify the credentials and categorize them by access level and spending limits.

Sysdig identified multiple credential-checking tools used in LLMjacking operations, including:

  • AWSGetValid.py – Verifies AWS credentials.
  • OAI dragoChecker.py – Tests OpenAI API keys and assesses access tiers.
  • AZUREdragoChecker.py – Checks Azure AI service availability.

These scripts streamline the process of identifying valuable credentials, allowing attackers to systematically exploit AI resources across multiple cloud providers, including OpenAI, AWS, Azure, and Google AI.
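
To illustrate the kind of check these tools automate, the sketch below probes whether an OpenAI-style API key is still live by calling the public /v1/models endpoint. This is not the attackers' actual code, only a minimal example of the validity test described above; defenders can run the same probe to confirm that a leaked key of their own has actually been revoked.

    # Minimal sketch of an API key validity probe, assuming the
    # widely used "requests" library is installed.
    import requests

    def key_is_live(api_key: str) -> bool:
        # A 200 response means the key authenticates;
        # a 401 means it is revoked or invalid.
        resp = requests.get(
            "https://api.openai.com/v1/models",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        return resp.status_code == 200

Checkers like those listed above chain this kind of probe across thousands of harvested keys, then sort the live ones by access tier and spending limits before resale or use.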

The Future of LLMjacking: Evolving Tactics and Defense Strategies

The rapid adaptation of attackers highlights the growing sophistication of LLMjacking operations. ORP tools are being customized with enhanced privacy features, such as stealth logging and hidden authentication layers, to evade detection. Some proxies even require users to disable CSS in their browsers to reveal hidden content, making it harder for cybersecurity teams to track illicit activity.

To counteract these threats, organizations must adopt stringent security practices:

  • Secure Access Keys – Avoid hardcoding credentials and use secret management tools like AWS Secrets Manager or Azure Key Vault (see the sketch after this list).
  • Monitor Cloud Usage – Use anomaly detection to identify suspicious AI service activity.
  • Enforce Least Privilege Access – Limit permissions for AI API keys to minimize exposure in case of a breach.
  • Regular Key Rotation – Frequently update and replace access keys to reduce the risk of long-term exploitation.
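
As an example of the first recommendation, the sketch below replaces a hardcoded key with a runtime lookup from AWS Secrets Manager via boto3. The secret name prod/llm/api-key is hypothetical; any identifier your organization uses would work the same way.

    # Hedged sketch: fetch an LLM API key from AWS Secrets Manager
    # at runtime instead of embedding it in source code.
    # The secret name below is hypothetical.
    import boto3

    def get_llm_api_key() -> str:
        client = boto3.client("secretsmanager")
        secret = client.get_secret_value(SecretId="prod/llm/api-key")
        # SecretString holds the plaintext value stored for this secret.
        return secret["SecretString"]

Because the key never appears in code or configuration files, rotating it (the fourth recommendation above) becomes a one-step change in Secrets Manager rather than a redeployment.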

Conclusion

As demand for AI services skyrockets, LLMjacking is evolving into a major cybersecurity concern. With new models like DeepSeek being integrated into proxy networks within days of release, organizations must remain vigilant in securing their AI resources. Failure to protect cloud credentials can lead to staggering financial losses, reinforcing the need for proactive security measures in the AI-driven digital landscape.

© Scoop Media
