LLMjacking: The $100K-Per-Day Attack Draining Enterprise AI Budgets
LLMjacking is a rapidly growing attack where hackers steal cloud credentials to abuse LLM APIs — racking up over $100,000/day in charges. Learn how Operation Bizarre Bazaar exposed a criminal supply chain and what you can do to protect your AI infrastructure.
LLMjacking — the unauthorized use of large language models through stolen cloud credentials and API keys — has rapidly become one of the most financially devastating cloud attacks of 2025–2026. Think of it as cryptojacking's far more expensive cousin: instead of stealing computing power for cryptocurrency mining, attackers steal access to models like Claude Opus, GPT-4, and DeepSeek, running up bills that can exceed $100,000 per day while victims foot the bill. First identified by the Sysdig Threat Research Team (TRT), LLMjacking has since evolved from isolated incidents into an organized criminal industry.
Key Stat
In early 2026, Operation Bizarre Bazaar recorded 35,000+ attack sessions in just 40 days, with daily costs to victims exceeding $100,000 when flagship models like Claude Opus are targeted. With models like Claude Opus 4.6 now priced at $25 per million output tokens, the cost ceiling for victims has only increased.
How LLMjacking Works
LLMjacking follows a multi-stage attack chain that exploits stolen cloud credentials to gain free access to expensive AI services:
Stage 1: Credential Theft
Attackers obtain cloud credentials through a variety of methods. Common vectors include:
- Exposed `.env` files and hardcoded API keys in public repositories
- Compromised CI/CD pipelines leaking cloud secrets
- Phishing attacks targeting cloud administrator accounts
- Exploiting misconfigured cloud storage buckets
- Scanning for unauthenticated AI endpoints (Ollama on port 11434, OpenAI-compatible APIs on port 8000)
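The same exposure patterns attackers scan for can be checked from the defender's side. Below is a minimal sketch that flags internet-facing, unauthenticated LLM endpoints in an asset inventory — the inventory format and field names are hypothetical placeholders, so adapt them to whatever asset data you actually have:

```python
# Flag LLM endpoints matching the exposure patterns attackers scan for.
# The inventory format below is hypothetical -- adapt to your asset data.

RISKY_PORTS = {11434: "Ollama", 8000: "OpenAI-compatible API"}

def flag_exposed(inventory):
    """Return endpoints that are network-reachable without authentication."""
    findings = []
    for ep in inventory:
        public = ep["bind"] not in ("127.0.0.1", "localhost", "::1")
        if public and not ep.get("auth_enabled", False):
            service = RISKY_PORTS.get(ep["port"], "unknown service")
            findings.append(f'{ep["host"]}:{ep["port"]} ({service}) is exposed without auth')
    return findings

inventory = [
    {"host": "ml-dev-1", "bind": "0.0.0.0",   "port": 11434, "auth_enabled": False},
    {"host": "api-gw",   "bind": "0.0.0.0",   "port": 8000,  "auth_enabled": True},
    {"host": "laptop",   "bind": "127.0.0.1", "port": 11434, "auth_enabled": False},
]

for finding in flag_exposed(inventory):
    print(finding)
```

Only the first host trips the check: it listens on all interfaces with no auth, exactly the configuration Bizarre Bazaar's scanners hunted for.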
Stage 2: Validation and Enumeration
Once credentials are obtained, attackers run automated scripts to verify access across 10+ LLM services simultaneously, including:
- AWS Bedrock (Anthropic Claude, Amazon Titan)
- Azure OpenAI Service
- Google Cloud Vertex AI
- OpenAI API directly
- Anthropic API directly
- Mistral, AI21 Labs, OpenRouter, and ElevenLabs
Critically, Sysdig researchers found that these scripts also query logging settings — attackers actively check whether CloudTrail or equivalent logging is enabled, attempting to operate undetected.
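That validation behavior is itself a detection opportunity. As a sketch (using simplified two-field event dicts, not full CloudTrail records), a defender can look for the same source both probing logging configuration and attempting model invocations:

```python
# Scan simplified CloudTrail-style events for the validation pattern Sysdig
# describes: model-invocation attempts paired with logging-config queries.
# Event dicts are trimmed to two fields for illustration.

LOGGING_RECON = {"GetTrailStatus", "DescribeTrails", "StopLogging", "DeleteTrail"}
MODEL_CALLS = {"InvokeModel", "InvokeModelWithResponseStream", "Converse"}

def suspicious_sources(events):
    """Return source IPs that both probed logging config and invoked models."""
    probed, invoked = set(), set()
    for e in events:
        if e["eventName"] in LOGGING_RECON:
            probed.add(e["sourceIPAddress"])
        if e["eventName"] in MODEL_CALLS:
            invoked.add(e["sourceIPAddress"])
    return probed & invoked

events = [
    {"eventName": "GetTrailStatus", "sourceIPAddress": "203.0.113.9"},
    {"eventName": "InvokeModel",    "sourceIPAddress": "203.0.113.9"},
    {"eventName": "InvokeModel",    "sourceIPAddress": "10.0.0.5"},
]
print(suspicious_sources(events))  # sources worth investigating
```

Legitimate applications rarely need to inspect CloudTrail status before calling a model, so the intersection of those two sets is a high-signal, low-noise alert.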
Stage 3: Reverse Proxy Deployment
Stolen credentials are fed into OpenAI Reverse Proxies (ORPs) — intermediary servers that mask unauthorized access. These ORPs serve as storefronts where attackers resell access to the stolen LLM services. Sysdig discovered ORPs containing dozens of stolen API keys from multiple providers, with total token usage across observed proxies exceeding two billion tokens.
Stage 4: Monetization
A black market has emerged where attackers sell stolen LLM access through Discord and Telegram communities for as little as $30 per month. Buyers include individuals banned from LLM services and entities in sanctioned countries seeking to evade restrictions. Sysdig's analysis shows that 80% of prompts through compromised proxies are in English, with Korean (10%) the next most common and Russian also well represented.
Attack Model Preferences
Attackers don't just use whatever's available — they actively enable the most expensive models. Researchers have documented cases where attackers used compromised AWS credentials to enable models that were previously disabled on the victim's account, specifically targeting high-cost flagship models. Disabled models in cloud environments should never be considered secure.
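One way to make "disabled" actually mean disabled is an explicit IAM deny on everything outside an approved model list. The sketch below builds such a policy as a Python dict — the allowed model ID is a placeholder, and the deny-by-default pattern via `NotResource` is one approach, not the only one:

```python
import json

# Sketch: explicit-deny IAM statement so "disabled" models stay unusable
# even if an attacker re-enables them. The allowed model list is a placeholder.

ALLOWED_MODELS = ["anthropic.claude-3-haiku-20240307-v1:0"]  # models you actually use

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllUnapprovedBedrockModels",
            "Effect": "Deny",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "NotResource": [
                f"arn:aws:bedrock:*::foundation-model/{m}" for m in ALLOWED_MODELS
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Because an explicit deny overrides any allow in IAM policy evaluation, re-enabling a model in the Bedrock console does not make it invocable under this statement.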
Operation Bizarre Bazaar: The First Large-Scale LLMjacking Campaign
Between December 2025 and January 2026, Pillar Security Research uncovered Operation Bizarre Bazaar — the first attributed, large-scale LLMjacking campaign with a fully commercial monetization pipeline. Their honeypots captured 35,000+ attack sessions over 40 days, averaging 972 attacks per day against exposed AI infrastructure.
The Criminal Supply Chain
What made Bizarre Bazaar notable was its three-tier criminal structure:
- Scanner Infrastructure: Automated systems using Shodan and Censys to systematically discover exposed LLM endpoints and MCP (Model Context Protocol) servers
- Validation Team: Infrastructure linked to silver.inc tested discovered endpoints to verify access to usable models
- Commercial Marketplace: silver.inc resold unauthorized access to 30+ LLM providers at discounted rates via Discord and Telegram, accepting cryptocurrency and PayPal
The threat actor, identified as "Hecker" (also known as Sakuya and LiveGamer101), operated bulletproof hosting infrastructure in the Netherlands. Attacks typically began within hours of new endpoints appearing in internet scans.
Common Targets
The operation exploited common misconfigurations including:
- Unauthenticated Ollama endpoints on port 11434
- OpenAI-compatible APIs running on port 8000 without auth
- MCP servers without access controls
- Production chatbots deployed without authentication layers
The DeepSeek Targeting: Speed of Exploitation
The speed at which LLMjackers incorporate new models is alarming. When DeepSeek-R1 launched in January 2025, it was being exploited through stolen API keys the very next day. The pattern has continued throughout 2025 and into 2026 — every major model release is targeted almost immediately.
Researchers discovered over a dozen proxy servers populated with stolen DeepSeek API keys alongside credentials from OpenAI, AWS, and Azure — suggesting attackers maintain portfolios of compromised credentials across all major providers. With the release of models like Claude Opus 4.6 and GPT-5, the incentive for attackers has only grown as these premium models command higher API prices.
DeepSeek's Own Security Incident
In a separate incident in January 2025, Wiz security researchers discovered that DeepSeek itself had left a ClickHouse database publicly accessible — exposing over one million log entries containing plaintext chat histories, API keys, secret tokens, and backend operational details. The database was accessible without authentication on two exposed hosts, allowing anyone to execute arbitrary SQL queries. DeepSeek secured the database after Wiz's disclosure.
The Real Cost: By the Numbers
The financial impact of LLMjacking is staggering and varies by the models being abused:
| Metric | Details |
|---|---|
| Daily cost (flagship models) | $100,000+ per day (Sysdig research) |
| Single ORP instance cost | ~$50,000 in just 4.5 days of operation |
| Total tokens observed | 2+ billion across monitored ORPs |
| Bizarre Bazaar attack sessions | 35,000+ in 40 days (avg. 972/day) |
| Time to exploit new model releases | Within 24 hours (DeepSeek-R1, Jan 2025) |
| LLM providers targeted | 30+ (Operation Bizarre Bazaar, Jan 2026) |
| Black market price for stolen access | ~$30/month per user |
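The headline numbers are easy to sanity-check. Using the $25-per-million output-token price quoted above, and simplifying by assuming all spend goes to output tokens (which understates token volume, since input tokens are cheaper):

```python
# Back-of-envelope: how many output tokens does $100K/day of abuse imply
# at Opus-class pricing? Assumes all spend is output tokens (a simplification).

PRICE_PER_M_OUTPUT = 25.0   # USD per million output tokens (Opus-class)
daily_bill = 100_000.0      # USD

tokens_per_day = daily_bill / PRICE_PER_M_OUTPUT * 1_000_000
print(f"{tokens_per_day:,.0f} output tokens/day")
```

That works out to roughly 4 billion output tokens per day — which puts the 2+ billion tokens Sysdig observed across ORPs into perspective as entirely plausible abuse volume.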
Why LLMjacking Is Uniquely Dangerous
LLMjacking differs from traditional cloud abuse in several critical ways:
1. Immediate, Massive Financial Impact
Unlike cryptojacking, where costs accumulate gradually, LLM API calls are expensive by nature. A single compromised account running a flagship model like Claude Opus or GPT-5 can generate six-figure bills within 24 hours — often before the organization notices the anomaly.
2. No Resource Footprint on Victim Systems
Traditional attacks like cryptojacking cause noticeable CPU/GPU spikes. LLMjacking happens entirely through API calls — there's no malware to detect, no unusual process activity, and no resource consumption on your infrastructure. The attack is invisible except in billing dashboards.
3. Sanctions Evasion and Legal Liability
Sysdig found that a significant portion of LLMjacking is motivated by sanctions evasion — entities in countries like Russia using stolen U.S. cloud credentials to access LLMs they're legally prohibited from using. This creates potential compliance and legal liability for the credential owner, beyond just the financial loss.
4. Rapid Model Adoption by Attackers
As the DeepSeek incidents show, attackers incorporate new models within hours of release. Researchers have also observed that newer cloud APIs can be abused shortly after launch — and some API calls don't automatically appear in CloudTrail or equivalent audit logs, creating dangerous blind spots for defenders.
How to Defend Against LLMjacking
Protecting your AI infrastructure from LLMjacking requires a layered approach:
1. Credential Security
- Never hardcode API keys in source code or configuration files — use secret management services (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault)
- Rotate credentials regularly and immediately revoke any potentially compromised keys
- Use short-lived credentials (STS temporary credentials, service account tokens) instead of long-lived API keys
- Scan repositories for accidentally committed secrets using tools like git-secrets, TruffleHog, or GitHub's secret scanning
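To make the last point concrete, here is a minimal scanner in the spirit of git-secrets or TruffleHog. The two patterns are deliberately simplified examples, nowhere near an exhaustive ruleset — use the dedicated tools for real coverage:

```python
import re

# Minimal secret scanner in the spirit of git-secrets / TruffleHog.
# Patterns are simplified illustrations, not an exhaustive ruleset.

PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret)\b\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan(text):
    """Return (rule name, matched snippet) pairs that look like secrets."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nAPI_KEY = "sk-abcdefghijklmnopqrstuv"'
for rule, snippet in scan(sample):
    print(rule, "->", snippet)
```

Running a check like this in a pre-commit hook or CI stage catches the exact `.env`-file and hardcoded-key leaks that feed Stage 1 of the attack chain.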
2. Access Controls and IAM
- Apply least-privilege principles — only enable the specific LLM models your application needs
- Don't assume disabled models are secure — implement IAM policies that explicitly deny access to unused models
- Use granular IAM roles with separate permissions for model invocation, model management, and logging configuration
- Implement IP allowlisting for API access where possible
3. Monitoring and Alerting
- Enable comprehensive logging — ensure CloudTrail (AWS), Azure Monitor, or GCP Cloud Audit Logs capture all LLM API invocations
- Set billing alerts with aggressive thresholds — a sudden spike in LLM API costs is the primary indicator of LLMjacking
- Monitor for unusual access patterns — new IP addresses, off-hours usage, unusual model selections, or geographic anomalies
- Alert on logging configuration changes — attackers actively attempt to disable monitoring before launching abuse
- Track model enablement changes — any unexpected model activation should trigger an immediate investigation
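Because LLMjacking spend jumps by orders of magnitude rather than creeping up, even a crude baseline rule is effective. A minimal sketch — the seven-day window, 3x multiplier, and dollar floor are illustrative defaults, not recommendations:

```python
# Minimal billing-spike rule: alert when today's LLM spend exceeds a
# multiple of the recent baseline. Window/multiplier/floor are illustrative.

def spend_alert(daily_spend, today, window=7, multiplier=3.0, floor=50.0):
    """Alert if `today` exceeds `multiplier` x the trailing-average spend.
    `floor` suppresses alerts on trivially small accounts."""
    recent = daily_spend[-window:]
    baseline = sum(recent) / len(recent)
    return today > max(multiplier * baseline, floor)

history = [120.0, 95.0, 110.0, 130.0, 105.0, 98.0, 115.0]  # USD/day, a normal week
print(spend_alert(history, today=140.0))    # normal variation
print(spend_alert(history, today=4_800.0))  # LLMjacking-scale spike
```

A team spending ~$110/day that suddenly bills $4,800 is either shipping a new feature or being jacked; either way the page is worth it, and at $100K/day rates every hour of detection latency is real money.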
4. Network and Endpoint Security
- Never expose LLM endpoints directly to the internet — always place them behind authenticated API gateways
- If running local models (Ollama, vLLM), bind to localhost only and require authentication for any network-accessible endpoints
- Audit your MCP servers for authentication — Operation Bizarre Bazaar specifically targeted unauthenticated MCP endpoints
- Use VPC endpoints for cloud LLM services to avoid exposing traffic to the public internet
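The localhost-binding advice can be audited mechanically. The sketch below checks `HOST:PORT`-style bind settings (`OLLAMA_HOST` is the real environment variable Ollama's server reads; the helper function and sample values are illustrative):

```python
# Quick check that local model servers are not bound to all interfaces.
# OLLAMA_HOST is Ollama's real bind setting; this helper is an illustration.

UNSAFE_BINDS = {"0.0.0.0", "::", ""}

def is_unsafe_bind(value):
    """True if a HOST:PORT setting would listen on every network interface."""
    host = value.rsplit(":", 1)[0] if ":" in value else value
    return host.strip("[]") in UNSAFE_BINDS

print(is_unsafe_bind("0.0.0.0:11434"))    # all interfaces: exposed
print(is_unsafe_bind("127.0.0.1:11434"))  # loopback only: fine
```

Running a check like this against your deployment configs (or container env vars) catches the exact Ollama and vLLM misconfigurations that Bizarre Bazaar's scanners found within hours of exposure.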
5. Incident Response Planning
- Have a credential revocation runbook ready — speed matters when bills accumulate at $100K/day
- Set hard spending limits on cloud AI services where available
- Maintain an inventory of all LLM API keys and cloud credentials with clear ownership
- Practice response drills — the faster you detect and respond, the less financial damage
Microsoft Takes Legal Action
The threat has become serious enough that Microsoft filed a lawsuit against cybercriminals who stole credentials to abuse DALL-E and other GenAI services — signaling that major providers are treating LLMjacking as organized crime rather than mere nuisance abuse.
AI Security Platform
Comprehensive AI security requires protection at every layer. While LLMjacking targets infrastructure, promptguardrails complements your defenses by securing the application layer.
The Bottom Line
LLMjacking represents a fundamental shift in cloud security threats. The combination of high per-request costs, invisible attack footprints, and a rapidly maturing criminal marketplace makes it one of the most financially dangerous cloud attacks today.
The discovery of Operation Bizarre Bazaar in early 2026 proves this isn't a theoretical risk — it's an active, organized, and commercial criminal operation. Every organization deploying LLMs needs to treat credential security, endpoint authentication, and usage monitoring as top priorities.
As Sysdig's research demonstrated: attackers will find your exposed endpoints, they will test your credentials across every provider, and they will start billing you — often before you even know they're there.
Sources and Further Reading
- Sysdig TRT: LLMjacking: Stolen Cloud Credentials Used in New AI Attack
- Sysdig TRT: The Growing Dangers of LLMjacking
- Sysdig TRT: LLMjacking Targets DeepSeek (2025)
- Pillar Security: Operation Bizarre Bazaar (Jan 2026)
- Dark Reading: LLM Hijackers Quickly Incorporate DeepSeek API Keys
- Wiz: DeepSeek Database Exposure (Jan 2025)