PyTorch Lightning has more than 31,000 stars on GitHub. It is the framework most machine learning researchers reach for when they want to train a model without writing the training loop themselves. On Wednesday, April 30, 2026, two new versions appeared on the Python Package Index: 2.6.2 and 2.6.3. Both contained a hidden directory named _runtime. Inside was a downloader, an obfuscated payload, and the beginning of one of the fastest-detected supply chain attacks of 2026.
Socket's automated AI scanner flagged the packages as malicious 18 minutes after they were published. Twenty-four minutes after that, the maintainers had quarantined them. The total window during which a compromised version of PyTorch Lightning was live on PyPI: 42 minutes.
That number masks a much larger story. The same threat group, TeamPCP, was simultaneously running an extension of the Mini Shai-Hulud campaign that has chewed through SAP packages on npm, Intercom's official client, and now Lightning. The attack targets exactly the credentials that data science teams keep loose: GitHub tokens, npm tokens, SSH keys, cloud environment variables, Kubernetes configs, Vault secrets, Docker credentials, and .env files. Anything imported alongside a compromised package was a target. And the malware did not rely on the package being run. It executed on import.
The 11MB Payload That Ran on Import
The malicious package included a Python script called start.py. It was small. What it did was not.
start.py downloaded the Bun JavaScript runtime onto the developer's machine and used it to execute an 11MB obfuscated payload named router_runtime.js. According to Socket's reverse-engineering team, the payload contained 463 references to tokens and authentication, 703 to process and env, and 336 to repositories. Once the file ran, it walked every common location on the machine where a credential might live. It then validated every GitHub token it found against api.github.com/user to confirm the token was active, picked the ones with write access, and used them to inject a worm-like payload into up to 50 branches across every repository the token could reach.
The poisoned commits were authored by an identity hardcoded in the payload to impersonate Anthropic's Claude Code. In a git log, an infected repo would show commits that looked exactly like legitimate AI-assisted edits. That, more than the credential theft itself, is the part that makes this attack a problem for years.
A second propagation vector ran inside the npm ecosystem. The malware modified the developer's local npm packages, added a postinstall hook to package.json, bumped the patch version, and repacked the tarball. The next time the developer ran npm publish, the tampered version went live. Anyone downstream who installed it ran the same payload.
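A team can hunt for this tampering locally. The following is a hedged sketch, not a complete detector, that walks a directory tree and surfaces any install-time lifecycle script declared in a package.json:

```python
import json
from pathlib import Path

# npm runs these lifecycle scripts automatically at install time.
SUSPECT_HOOKS = ("preinstall", "install", "postinstall")

def suspicious_hooks(package_json: Path) -> dict:
    """Return any install-time lifecycle scripts declared in one package.json."""
    scripts = json.loads(package_json.read_text()).get("scripts", {})
    return {k: v for k, v in scripts.items() if k in SUSPECT_HOOKS}

def scan(root: Path) -> dict:
    """Map each package.json under root to its install-time hooks, if any."""
    findings = {}
    for pj in root.rglob("package.json"):
        hooks = suspicious_hooks(pj)
        if hooks:
            findings[str(pj)] = hooks
    return findings
```

Run it against your own projects (excluding node_modules, which will be noisy) and diff the findings against hooks you know you added; an unexpected postinstall in a package you publish is the exact signature described above.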
The 42-Minute Window
The chronology is what made this incident different from the LiteLLM backdoor, which lived on PyPI for weeks before anyone noticed.
The Lightning team has not confirmed the exact root cause publicly. The current working theory among the security firms tracking the incident is that an attacker gained access to credentials with publish permissions on PyPI, allowing tampered builds to be pushed straight to the registry without any change to the GitHub source repository. The tags on GitHub still match the previous clean release. The bytes on PyPI did not.
The Credentials That Were Up for Grabs
The malware harvested credentials from a comprehensive list of locations developers leave open by default. For machine learning teams, the list reads like an inventory of everything that runs a production training pipeline.
| Credential type | Where the malware looks |
|---|---|
| GitHub tokens | ~/.gitconfig, env vars, GitHub CLI cache |
| npm tokens | ~/.npmrc, env vars |
| SSH keys | ~/.ssh/ |
| AWS credentials | ~/.aws/credentials, env vars |
| Kubernetes configs | ~/.kube/config |
| HashiCorp Vault | env vars, token files |
| Docker credentials | ~/.docker/config.json |
| .env files | every working directory traversed |
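The on-disk half of that table can be audited in a few lines. Here is a sketch that reports which of those harvest-able locations exist under a home directory (environment variables, CLI caches, and .env files in project trees would need a separate pass):

```python
from pathlib import Path

# On-disk locations from the table above, relative to the home directory.
CREDENTIAL_PATHS = [
    ".gitconfig",
    ".npmrc",
    ".ssh",
    ".aws/credentials",
    ".kube/config",
    ".docker/config.json",
]

def exposed_credentials(home: Path) -> list[str]:
    """Return the harvest-able credential files/dirs that actually exist under home."""
    return [str(home / p) for p in CREDENTIAL_PATHS if (home / p).exists()]

if __name__ == "__main__":
    for path in exposed_credentials(Path.home()):
        print("on disk:", path)
```

Every path this prints is something the payload would have read on import; the rotation list in the advisory is, in effect, this output.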
Exfiltration ran through zero.masscan[.]cloud:443/v1/telemetry, encrypted in transit. If that endpoint failed, the malware fell back to creating a public GitHub repository on the developer's account with the description "A Mini Shai-Hulud has Appeared," using the token it had already validated. Intercom's compromise traced back to exactly this fallback path, riding the dependency chain from pyannote-audio to Lightning to the developer's local environment to the npm tokens that finally let TeamPCP push a poisoned intercom-client release to npm and a matching intercom package to Packagist.
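That fallback leaves a visible artifact: a public repository with that exact description on the victim's account. A small sketch, assuming a repo listing already fetched from GitHub's GET /user/repos API, that filters for the marker:

```python
MARKER = "A Mini Shai-Hulud has Appeared"

def fallback_repos(repos: list[dict]) -> list[str]:
    """Given repo objects as returned by GitHub's GET /user/repos API,
    return the full names of public repos carrying the exfiltration marker."""
    return [
        r["full_name"]
        for r in repos
        if not r.get("private", True) and (r.get("description") or "") == MARKER
    ]
```

An organization-wide sweep of repo descriptions for this string is one of the cheapest checks in the whole incident.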
That is the part of this campaign that makes it more than another supply chain headline. One compromised dependency became the entry point to two other ecosystems. The same router_runtime.js payload now runs across npm, PyPI, and Packagist. Socket's analysts called the tempo "deliberate and sustained rather than opportunistic."
The Other Side of the 42-Minute Story
The fast detection saved Lightning from becoming the next Axios npm hijack, where a North Korean group sat inside a package downloaded 100 million times a week. The speed cuts both ways. Inside that 42-minute window, every CI pipeline that pulled lightning with a wide version specifier got the malicious version. Every developer running pip install lightning to refresh got it. Every Docker build that specified lightning>=2.6,<3 rebuilt with the poisoned code.
Lightning AI's own post-mortem framed this as a community win. The maintainers thanked the open source contributors who reported the issue in GitHub Issue #21689 within minutes of publication. Socket's framing was different: 18 minutes is excellent, but 18 minutes is still long enough for an automated CI cluster to grab the package thousands of times. Any team that ran a build during that window is now a team deciding which tokens to rotate.
A counterpoint from the security community: the attack worked because the credentials it stole were sitting on disk waiting to be stolen. GitHub tokens with full repo write access. npm tokens that ship straight to a personal publish queue. AWS keys persisted to disk because a developer wanted to skip the SSO refresh. None of these are required to get work done. All of them are common because the alternative is friction, and friction loses to convenience inside an ML team grinding on a model deadline.
The Practitioner Action List
If your environment ran pip install lightning between approximately 19:00 UTC on April 30 and the PyPI quarantine, treat the machine as compromised until you have rotated. The Lightning advisory, Socket's writeup, and the GitHub Issue #21689 thread on Lightning-AI/pytorch-lightning agree on the steps:
- Block lightning 2.6.2 and 2.6.3 in your dependency manager, then downgrade to 2.6.1
- Rotate every credential the development environment had access to: GitHub PATs, npm tokens, SSH keys, cloud provider keys, Vault tokens, Docker registry credentials, anything in a .env file
- Audit GitHub commits authored by an identity impersonating Anthropic's Claude Code since April 30; the malware writes commits across up to 50 branches per repo, and the worm preserves no record of which file it overwrote
- Check transitive dependencies like pyannote-audio that pull Lightning, and any internal package that re-exports it
- Scan for _runtime, start.py, router_runtime.js, and Bun installs in every active virtualenv and conda env
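The last step can be scripted. Here is a sketch that walks a directory tree for the three indicator filenames named above; keep in mind that start.py is a common, legitimate filename, so treat hits as leads to inspect rather than confirmed infections:

```python
from pathlib import Path

# Indicator names from the advisory: the hidden runtime directory,
# the dropper script, and the obfuscated payload.
IOC_NAMES = {"_runtime", "start.py", "router_runtime.js"}

def find_iocs(root: Path) -> list[str]:
    """Walk root and return every path whose final component matches an indicator name."""
    return sorted(str(p) for p in root.rglob("*") if p.name in IOC_NAMES)
```

Point it at each virtualenv and conda env directory in turn, for example find_iocs(Path("~/.venvs/proj/lib").expanduser()).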
For teams that publish their own packages on npm or PyPI, also check that no postinstall hook has been added to your package.json and that your most recent published versions match the bytes in your repository's release tags.
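The bytes-match check is a straight digest comparison. Below is a sketch that stream-hashes a locally built artifact so it can be compared against the sha256 digest the registry reports; PyPI exposes per-file digests through its JSON API at pypi.org/pypi/&lt;name&gt;/&lt;version&gt;/json (fetching that response is left outside the sketch):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream-hash a distribution file so large artifacts don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_registry(local_artifact: Path, registry_digest: str) -> bool:
    """True if the bytes you built from the release tag match the bytes the registry serves."""
    return sha256_of(local_artifact) == registry_digest.lower()
```

A mismatch here is exactly the Lightning failure mode: clean tags on GitHub, different bytes on the registry.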
If any developer or CI environment in your organization installed lightning during the 42-minute window on April 30, treat every credential that environment could touch as compromised. Rotate first; investigate second. The malware does not log what it stole.
The Bottom Line
Supply chain attacks on AI packages are no longer a curiosity. They are a campaign. The same threat group that hit LiteLLM, Checkmarx, Bitwarden, Telnyx, Aqua Security Trivy, the SAP npm cluster, Intercom, and now PyTorch Lightning has been running for two solid weeks with the same tradecraft: hidden runtime directory, Bun-based payload launch, obfuscated router_runtime.js, credential harvesting, GitHub propagation. The targets keep changing. The mechanism does not.
Forty-two minutes is a remarkable response time. It is also the new floor: the window inside which a team has to assume an attacker can extract value. If a CI runner pulls a dependency once a minute, 42 minutes is enough to compromise every machine in the cluster. If a junior engineer leaves a long-lived GitHub PAT in their shell history, 42 minutes is enough for it to end up in 50 branches across every repo they can write to.
As Socket's analysts put it: "After two solid weeks of virtually nonstop attacks, the tempo looks deliberate and sustained rather than opportunistic." The next compromised package will look exactly like this one. The only thing teams can change is what is sitting on the laptop when the import runs.
Sources
- PyTorch Lightning and Intercom-client Hit in Supply Chain Attacks to Steal Credentials (April 30, 2026)
- lightning PyPI Package Compromised in Supply Chain Attack (April 30, 2026)
- How the PyTorch Lightning Community Discovered a Supply Chain Attack and Fixed it in 42 Minutes (April 30, 2026)
- Possible supply chain attack on version 2.6.3, Issue #21689 (April 30, 2026)
- Shai-Hulud Themed Malware Found in the PyTorch Lightning AI Training Library (April 30, 2026)
- Popular PyTorch Lightning Package Compromised by Mini Shai-Hulud (April 30, 2026)