
Vercel Got Breached Through an AI Tool. The AI Tool Got Breached Through Roblox Cheats.

LDS Team · Let's Data Science
In February, a Context.ai employee searched for Roblox auto-farm scripts and installed Lumma Stealer. Two months later, the credentials stolen from that one laptop were used to pivot into Vercel's Google Workspace and extract environment variables from a limited set of customer projects.

Sometime in February 2026, a support employee at a small AI startup called Context.ai went looking for a cheat script. Specifically, the employee was searching for what security forensics firm Hudson Rock later described as "Roblox 'auto-farm' scripts and executors," small programs that let players automate grinding in the online game.

The executables the employee downloaded carried a second passenger: Lumma Stealer, a commodity infostealer malware that vacuums browser-stored credentials, session tokens, and OAuth cookies off an infected machine within seconds. The employee, according to the logs, had sensitive access privileges at Context.ai. Among the credentials that walked out the door that day were the keys to Context.ai's Google Workspace, its Supabase environment, its Datadog logins, and access to Authkit.

Two months later, on Sunday, April 19, 2026, at 11:04 AM Pacific, the deployment platform Vercel published an indicator of compromise. Vercel's environment variables for a limited subset of customer projects had been accessed. The starting point of the attack, Vercel confirmed the next day, was a Context.ai OAuth token that connected one of its own employees' Google Workspace accounts to Context.ai's AI Office Suite.

The story, when it was pieced together across the Vercel statement, Context.ai's security update, and forensic reporting from Hudson Rock, was almost too neat. A Roblox cheat at one company, downloaded onto one laptop, ultimately exposed production credentials at another.

The OAuth Chain That Connected Them

Vercel and Context.ai had no commercial relationship. What they had in common was a single Vercel employee.

That employee, at some point after Context.ai's AI Office Suite launched in June 2025, signed up for the tool using their enterprise Vercel Google Workspace credentials. In the sign-up flow, they granted the standard bundle of OAuth permissions that AI office-productivity tools routinely request: read and write access to Gmail, Docs, Drive, and Calendar. Context.ai described Vercel's configuration in its own April 21 security update this way: "Vercel's internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel's enterprise Google Workspace."
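The permission grab at the center of this is visible in the consent URL such a tool constructs during sign-up. The sketch below is illustrative — the client ID and redirect URI are placeholders — but the scope identifiers are Google's real ones for full Gmail, Drive, Docs, and Calendar access:

```typescript
// Placeholder client ID and redirect URI; the scopes are Google's real
// identifiers for full Gmail, Drive, Docs, and Calendar access.
const params = new URLSearchParams({
  client_id: "EXAMPLE_CLIENT_ID.apps.googleusercontent.com",
  redirect_uri: "https://example-office-suite.invalid/oauth/callback",
  response_type: "code",
  access_type: "offline", // requests a long-lived refresh token
  scope: [
    "https://mail.google.com/",                       // full Gmail access
    "https://www.googleapis.com/auth/drive",          // full Drive access
    "https://www.googleapis.com/auth/documents",      // Docs read/write
    "https://www.googleapis.com/auth/calendar",       // Calendar read/write
  ].join(" "),
});

const consentUrl =
  "https://accounts.google.com/o/oauth2/v2/auth?" + params.toString();
console.log(consentUrl);
```

One click on "Allow" at that screen is all the sign-up flow requires, and the refresh token it mints keeps working until someone explicitly revokes it.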

Once Context.ai's own Google Workspace was compromised in February, the attackers did not need to re-exploit anything to reach Vercel. They had the keys. All they needed was the token trail from Context.ai's support systems showing which Vercel employee had linked an enterprise account, and the OAuth token that employee had handed over. From there, the attackers impersonated the Vercel employee's Google account and moved sideways into Vercel's internal systems.

Inside, they enumerated environment variables across customer projects. Environment variables in Vercel are the secrets developers ship with their code: database connection strings, API keys, third-party service credentials. Vercel encrypts variables that developers flag as "sensitive." It does not encrypt the rest.

Vercel's chief executive, Guillermo Rauch, explained the gap on X: "We do have a capability however to designate environment variables as 'non-sensitive'. Unfortunately, the attacker got further access through their enumeration." In other words, what Vercel classified as non-sensitive, the attackers decided was worth reading.
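Vercel's distinction between sensitive and non-sensitive variables suggests one immediate audit: scan a project's variables for names that look like secrets but were never flagged sensitive. A minimal sketch, assuming a simplified record shape rather than Vercel's actual API schema:

```typescript
// Illustrative shape; Vercel's real API schema may differ.
interface EnvVar {
  key: string;
  type: "plain" | "sensitive"; // "sensitive" vars are encrypted at rest
}

// Name patterns that usually indicate a secret.
const SECRET_HINT = /(KEY|SECRET|TOKEN|PASSWORD|CREDENTIAL|DSN)/i;

// Flag variables that look like secrets but were not marked sensitive.
function flagMiscategorized(vars: EnvVar[]): string[] {
  return vars
    .filter((v) => v.type === "plain" && SECRET_HINT.test(v.key))
    .map((v) => v.key);
}

const sample: EnvVar[] = [
  { key: "DATABASE_URL", type: "sensitive" },
  { key: "STRIPE_API_KEY", type: "plain" }, // readable to an attacker
  { key: "NEXT_PUBLIC_SITE_NAME", type: "plain" },
];

console.log(flagMiscategorized(sample)); // → [ 'STRIPE_API_KEY' ]
```

A name heuristic like this is noisy, but it surfaces exactly the class of variable the attackers were able to enumerate: credentials a developer stored as plain because nothing forced the sensitive flag.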

The Timeline of a Two-Month Pivot

JUNE 2025
Context.ai launches its AI Office Suite
A new consumer-grade AI productivity tool goes live, supporting Google Workspace OAuth sign-in. Enterprise employees at several companies, including at least one at Vercel, begin using it with their corporate accounts.
FEBRUARY 2026
Context.ai employee compromised by Lumma Stealer
A Context.ai employee with sensitive access searches for Roblox auto-farm scripts and executors. One of the downloaded files contains Lumma Stealer. Google Workspace credentials, Supabase keys, Datadog logins, and Authkit access are exfiltrated.
MARCH 2026
Context.ai detects unauthorized AWS access
Context.ai discovers the intrusion, engages CrowdStrike for forensics, closes its AWS environment, and deprecates the consumer AI Office Suite product.
MARCH 27, 2026
Context.ai removes its Chrome extension
The company pulls its extension from the Chrome Web Store. Outside researchers note the removal but do not yet connect it to a breach.
APRIL 19, 2026
Vercel publishes its first indicator of compromise
At 11:04 AM Pacific, Vercel posts the compromised Google OAuth application ID and confirms internal systems were accessed through a third-party AI tool. Context.ai publishes its own security statement the same day.
APRIL 20, 2026
Vercel publicly attributes the breach to Context.ai
TechCrunch confirms the breach; Vercel notifies affected customers directly and recommends immediate rotation of non-sensitive environment variables. Guillermo Rauch posts on X.
APRIL 22, 2026
Vercel releases full investigation findings
At 7:58 PM Pacific, Vercel confirms Mandiant, Microsoft, GitHub, npm, and Socket are involved. No npm packages published by Vercel were compromised. Next.js, Turbopack, and customer source code were not tampered with.

What Was Compromised, and What Was Not

Vercel's public communication has been unusually specific about scope. The company's April 2026 security bulletin states that what the attackers obtained was limited to non-sensitive environment variables across a subset of customer projects. Sensitive variables, which are encrypted at rest, showed no evidence of access.

More importantly for the open-source world, Vercel confirmed that no npm packages published by Vercel have been compromised. That matters because the larger fear, the instant the breach became public, was that a malicious package would be pushed to npm under Vercel's account and executed by millions of Next.js projects downstream. That did not happen. Next.js and Turbopack, the two projects Vercel maintains that the broader JavaScript ecosystem depends on most, were not tampered with.

The attackers also did not touch customer source code, deployments, or production builds. The blast radius, while meaningful, was narrower than the initial headlines suggested.

What did get exfiltrated, per ShinyHunters-branded posts on a cybercriminal forum, included 580 employee records containing names, Vercel email addresses, account status, and activity timestamps. A ransom of $2 million was demanded. Austin Larsen, with Google's Threat Intelligence Group, later assessed the ShinyHunters attribution as likely coming from "an imposter attempting to use an established name."

The OAuth Gap Nobody Monitors

The Vercel breach has become a reference case in a longer-running argument inside security teams: that the weakest link in a modern enterprise is not an unpatched server or a misconfigured firewall, but the accumulated OAuth grants employees quietly hand to third-party tools over months and years.

Jaime Blasco, the chief technology officer of Nudge Security, was the analyst who initially identified Context.ai as the source of the Vercel compromise. The pattern he describes is familiar to anyone who has audited a corporate Google Workspace in the last two years. An employee signs up for a productivity tool. The tool asks for broad permissions. The employee clicks accept. The OAuth grant now gives that tool, and anyone who compromises that tool, the same access as the employee.

For most companies, this shadow AI exposure is not inventoried anywhere. It does not appear in the security team's asset list. It does not trigger alerts. And it outlives the employee's own tenure, since OAuth tokens rarely expire without explicit revocation. LDS covered a related supply-chain failure mode last month, when a North Korea-linked group compromised the Axios npm package and reached a similar class of developer credentials from a different direction.

The practical question for an engineering team reading this: how many third-party AI tools have current OAuth grants into your Google Workspace, Microsoft 365, or GitHub organization right now? The honest answer at most companies is that nobody knows. That is the core structural problem the Vercel breach exposes.
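Answering that question starts with exporting the grants and triaging them by scope. The sketch below assumes a simplified record roughly like what the Google Workspace Admin SDK Directory API's `tokens.list` call returns per user; the field names and the flagging heuristic are illustrative, not a complete audit:

```typescript
// Simplified grant record; field names are hedged approximations of the
// Admin SDK Directory API tokens.list response.
interface OAuthGrant {
  displayText: string; // the third-party app's name
  scopes: string[];
}

// Scopes that give an app roughly the same reach as the employee.
const BROAD_SCOPES = [
  "https://mail.google.com/",
  "https://www.googleapis.com/auth/drive",
  "https://www.googleapis.com/auth/calendar",
];

// Return the names of apps holding at least one broad scope.
function flagBroadGrants(grants: OAuthGrant[]): string[] {
  return grants
    .filter((g) => g.scopes.some((s) => BROAD_SCOPES.includes(s)))
    .map((g) => g.displayText);
}

const grants: OAuthGrant[] = [
  {
    displayText: "AI Office Suite",
    scopes: [
      "https://mail.google.com/",
      "https://www.googleapis.com/auth/drive",
    ],
  },
  { displayText: "Read-only profile app", scopes: ["openid", "email"] },
];

console.log(flagBroadGrants(grants)); // → [ 'AI Office Suite' ]
```

Running something like this across every user, on a schedule, is the difference between an inventory and a guess: the broad-scope grants are the ones that turn a stolen laptop at a vendor into lateral movement at your company.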

For ML and data engineering teams specifically, the risk profile is sharper than for general knowledge workers. AI tools that claim deep integration with a company's data, notebooks, or pipelines are the ones most likely to request the broadest OAuth scopes, because the value proposition depends on that access. Every such tool is a potential Context.ai. The LiteLLM backdoor that LDS covered in late March was a different attack vector, a malicious package rather than an OAuth token, but the underlying principle is the same: AI infrastructure sits astride the credentials that matter most to developers.

The Counterargument That This Is Not Really an AI Problem

Not everyone agrees that Context.ai being an AI tool is the load-bearing detail. The argument against treating this as an "AI breach" runs like this: the compromise started with commodity malware (Lumma Stealer) on an employee laptop, involved standard OAuth token theft, and exploited a configuration choice (broad Google Workspace permissions) that has been a known risk for years. The fact that Context.ai happened to be an AI productivity tool is incidental. The same attack would have worked against any consumer-grade SaaS tool an employee had granted "Allow All" OAuth permissions to.

There is merit to that view. The specific attack chain here could have happened in 2022 with a calendar-integration startup. The Roblox cheat vector is a well-known Lumma Stealer distribution method that predates the current AI boom by years.

The counter-counter-argument is that AI productivity tools have changed the risk profile in two specific ways. First, they are proliferating faster than traditional SaaS because employees find them personally useful and adopt them without IT review. Second, the value proposition of AI tools almost always requires broader permissions than a comparable non-AI tool, because the AI claims to reason across your email, your docs, and your calendar simultaneously. That combination, faster proliferation and broader permissions, is what makes the category different.

Both sides of this debate agree on the practical conclusion: if your company does not have an inventory of third-party OAuth grants into its identity provider, the Vercel breach is a preview of what happens next.

The Bottom Line

The supply chain in software security used to mean packages, libraries, and build systems. The Vercel incident extends that definition. The supply chain also includes the OAuth grants your employees have already given to tools you may not know exist. A single Roblox cheat, downloaded by a single support employee at a small AI startup eight weeks earlier, produced a breach notification at one of the largest deployment platforms in the JavaScript ecosystem.

The technical remediation for affected Vercel customers is straightforward: rotate every non-sensitive environment variable, audit which variables should have been marked sensitive in the first place, and turn on multi-factor authentication everywhere it is not already enforced. Vercel has said as much and is working through that remediation directly with affected customers.

The harder remediation is cultural. Every company now has a shadow AI footprint. The first step is looking at it.

As Guillermo Rauch described the attacker afterward: they were "highly sophisticated" and their work was "significantly accelerated by AI." The attackers, in other words, were using the same category of tool that opened the door.

