Fake Gemini npm Package Steals AI Tool Tokens
A malicious npm package published March 20, 2026 — gemini-ai-checker, under the account gemini-check — posed as a Google Gemini token verifier and contained a hidden payload that harvested authentication tokens, files, and credentials from developer environments. The trojanized package targeted developer workflows that integrate AI coding assistants and IDE plugins, exfiltrating secrets tied to Claude, Cursor, Windsurf, PearAI, and other tools. The package's README was copied from an unrelated library (chai-await-async), a mismatch that should have raised red flags, and the payload was loaded dynamically from a Vercel-hosted endpoint. This is a supply-chain compromise aimed at developers using JavaScript tooling and AI integrations: practitioners should audit dependencies, pin trusted packages, and monitor for unexpected network calls from build-time or CLI helpers.
What happened
On March 20, 2026, a threat actor published gemini-ai-checker to the npm registry under the account gemini-check. The package masqueraded as a Google Gemini token checker but embedded a malicious, dynamically loaded payload. Analysis shows the package harvested authentication tokens, files, and credentials from developer environments; victims include users of AI coding tools and assistants such as Claude, Cursor, Windsurf, and PearAI.
Technical context
This is a classic supply-chain/typosquatting-style compromise focused on the developer-to-AI-tooling surface. Attackers increasingly weaponize small, legitimate-seeming utility packages that run during developer workflows (CLI tools, build scripts, editor plugins) to access local files, environment variables and runtime tokens. Dynamic payload hosting (here, a Vercel endpoint) reduces static-analysis signals in the npm package itself and enables rapid payload changes.
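The dynamic-loading pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the actual malware code: the shipped package contains only an innocuous-looking loader, and the real logic arrives as plain text from a remote host at run time, then gets executed with `Function()`/`eval()`. A static scan of the npm tarball therefore sees no malicious strings. Here the "remote" body is a local string so the sketch runs offline.

```javascript
// Hypothetical sketch of dynamic payload loading (assumed pattern, not
// code recovered from gemini-ai-checker). In a real attack this text
// would be fetched from an attacker-controlled endpoint (e.g. the
// Vercel-hosted URL in this campaign) and could read files, environment
// variables, and stored tokens.
const remoteBody = `
  // Attacker-supplied code executes with the full privileges of the
  // developer's Node.js process.
  return Object.keys(process.env).length;
`;

// new Function() compiles attacker-supplied text into executable code;
// nothing malicious exists on disk until this moment.
const payload = new Function(remoteBody);
const envVarCount = payload();
console.log("payload saw", envVarCount, "environment variables");
```

Because the hosted body can change at any time, two installs of the same package version can behave differently, which is why reviewing the published tarball alone is insufficient.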
Key details from sources
The package’s README was copied verbatim from an unrelated library (chai-await-async), creating a content mismatch that serves as a detectable heuristic. Code analysis and reporting indicate the malicious behavior included contacting a remote endpoint to offload stolen tokens and secrets. The campaign intentionally targeted integrations and tokens used by AI coding assistants and related services — a high-value target because those tokens can grant persistent API access.
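The README mismatch can be turned into a cheap automated check. The sketch below is a hypothetical heuristic (the function name and logic are my own, not from the report): flag a package whose README never mentions any token of its own name, the exact symptom seen when gemini-ai-checker shipped with a README copied from chai-await-async.

```javascript
// Hypothetical heuristic (not from the published analysis): a genuine
// README almost always names its own package; a copied one often does not.
function readmeMentionsPackage(pkgName, readmeText) {
  // Strip any npm scope (e.g. "@scope/") and split the name into tokens.
  const tokens = pkgName.replace(/^@[^/]+\//, "").split(/[-_]/).filter(Boolean);
  // Compare against whole words in the README to avoid substring noise
  // (e.g. "ai" matching inside "chai").
  const words = new Set(readmeText.toLowerCase().split(/\W+/));
  return tokens.some((t) => words.has(t.toLowerCase()));
}

// A README copied from an unrelated project typically fails the check:
console.log(readmeMentionsPackage(
  "gemini-ai-checker",
  "chai-await-async helpers for testing"
)); // false — none of "gemini", "ai", "checker" appear as words

// A genuine README passes:
console.log(readmeMentionsPackage(
  "left-pad",
  "left-pad pads strings to a given length"
)); // true
```

A check like this is noisy on its own, but combined with signals such as publisher age and download history it is a useful pre-install gate in CI.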
Why practitioners should care
If your CI, local developer machines, or editor tooling installs third-party npm utilities, a trojan like gemini-ai-checker can silently leak credentials and API tokens that grant access to sensitive AI systems and data. Tokens for LLM-based assistants often carry broad access; their theft can lead to fraudulent API usage, data exfiltration, or downstream compromise of projects and customers.
What to watch and do next
Audit and pin dependencies, require lockfiles and strict ownership checks for small utility packages, review READMEs and package provenance, and scan for runtime network calls from build-time/CLI packages. Rotate tokens exposed to developer machines and enable short-lived credentials and granular scopes for AI tools. Monitor npm reports and threat feeds for additional typosquatting or dynamic-payload campaigns.
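The pinning advice above is easy to enforce mechanically. A minimal sketch, assuming a standard package.json layout (the function name is my own): flag any dependency declared with a range specifier such as `^` or `~`, since ranges allow a later, possibly trojanized, release to be pulled in silently.

```javascript
// Hypothetical lint check (not an npm built-in): list dependencies in a
// package.json object that are not pinned to an exact semver version.
function unpinnedDeps(pkg) {
  const deps = { ...(pkg.dependencies || {}), ...(pkg.devDependencies || {}) };
  return Object.entries(deps)
    // "1.3.0" is pinned; "^2.0.0", "~1.2.0", "*" etc. are ranges.
    .filter(([, spec]) => !/^\d+\.\d+\.\d+$/.test(spec))
    .map(([name]) => name);
}

const example = {
  dependencies: { "left-pad": "1.3.0", "some-util": "^2.0.0" },
};
console.log(unpinnedDeps(example)); // flags "some-util" only
```

Pinning alone does not stop an already-compromised version, so pair it with lockfile-only installs and rotation of any tokens that developer machines were exposed to.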
Scoring Rationale
This is a notable supply-chain attack that directly affects developer workflows and tokens for AI tools — a material risk practitioners must address. It isn’t a broad industry-defining event but is highly relevant for engineering teams using npm and AI integrations.
