
The Axios Hack Reached OpenAI's macOS Signing Pipeline. Every Old App Expires May 8.

LDS Team
Let's Data Science
8 min
On March 31, North Korean operators pushed a poisoned version of the most popular JavaScript HTTP library to npm. Eleven days later, OpenAI disclosed that its macOS code-signing pipeline had pulled the malicious package and run it with access to the certificate that signs ChatGPT Desktop, Codex, Codex CLI, and Atlas. The company is treating the certificate as compromised. Every user on an older app build loses support on May 8.

The malicious version of Axios spent three hours and eight minutes on the npm registry on March 31. In that window, a GitHub Actions job inside OpenAI ran npm install, pulled Axios 1.14.1 because a floating version tag told it to, and executed the package's postinstall hook. The job was not a test runner or a build for a low-trust service. It was the workflow that signs OpenAI's macOS applications.

On April 11, OpenAI published a notice titled "Our response to the Axios developer tool compromise." The post confirmed the specific CI job that touched the poisoned package had access to the Apple Developer ID certificate and the notarization material used to sign ChatGPT Desktop, Codex, Codex CLI, and Atlas. The company's analysis concluded the certificate probably was not exfiltrated. It is rotating the certificate anyway.

For users, the practical effect is a hard deadline. Starting May 8, 2026, macOS will stop honoring the revoked certificate. Any OpenAI desktop app installed from a build signed with the old key will stop receiving updates and will eventually fail to launch cleanly on current macOS releases.

The Attack Path Was a Floating Version Tag

OpenAI's post-incident writeup pins the root cause on a single configuration error. In its own words: "The root cause of this incident was a misconfiguration in the GitHub Actions workflow, which we have addressed. Specifically, the action in question used a floating tag, as opposed to a specific commit hash, and did not have a configured minimumReleaseAge for new packages."

Translated: the workflow asked npm for the current version of Axios instead of pinning a known-good commit, and it did not wait any amount of time before trusting a newly published release. When North Korea's UNC1069 group pushed the poisoned axios@1.14.1 at 12:21 AM UTC on March 31, the next scheduled OpenAI build grabbed it.
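The difference between the two configurations is small enough to fit in a few lines. A sketch with hypothetical action and package names (the SHA below is illustrative, not a real commit):

```yaml
# Hypothetical GitHub Actions fragment illustrating pinning.
steps:
  # Risky: a floating tag moves when the upstream repo or registry is compromised.
  # - uses: some-org/setup-tool@v4

  # Safer: a full commit SHA is immutable; automated tooling can still propose bumps.
  - uses: some-org/setup-tool@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f

  # Safer: `npm ci` installs exactly what package-lock.json records,
  # instead of re-resolving ranges like ^1.14.0 at build time.
  - run: npm ci
```

With a committed lockfile and `npm ci`, a newly published axios@1.14.1 is never pulled until a human (or a reviewed bot PR) updates the lock.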

That single missing guardrail is the one every security team is re-auditing right now. The LDS breakdown of the original Axios attack, which traced how a three-hour publishing window reached roughly 3% of all cloud environments, details the payload. The same SILKBELL dropper and the WAVESHAPER.V2 backdoor that landed on developer laptops also landed inside OpenAI's signing job.

What a Compromised Signing Certificate Actually Means

Code-signing certificates are the trust anchor macOS uses to decide whether a binary is genuinely from OpenAI or a convincing fake. Gatekeeper checks the signature before an app runs. The Apple notarization service checks it before a build can be distributed at all.

If the certificate had been exfiltrated, an attacker could have produced a malicious binary, signed it with OpenAI's own key, and shipped it with Gatekeeper approval. Users opening "ChatGPT Desktop" from a malicious website would see no warning. The most damaging possible outcome of a supply-chain attack, as Socket's research team put it, is the distribution of trusted, signed malware at scale.

OpenAI's investigation concluded that outcome probably did not happen. The certificate was injected into the CI job in a way that the malicious payload, executing in the same workflow, could not reach before the job completed. The company hedged anyway. The revocation plus rotation is the safe move even when the forensic evidence looks clean.
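That ordering can be a deliberate design rather than a lucky accident. A sketch of the pattern, with hypothetical job, artifact, and secret names: third-party code runs only in a build job, and the certificate is exposed only in a separate signing job that installs nothing and runs no package lifecycle scripts.

```yaml
# Hypothetical two-job workflow; names are illustrative (pin actions by SHA in practice).
jobs:
  build:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build        # third-party code executes here, with no secrets
      - uses: actions/upload-artifact@v4
        with: { name: app-bundle, path: dist/ }

  sign:
    needs: build
    runs-on: macos-latest
    steps:
      - uses: actions/download-artifact@v4
        with: { name: app-bundle }
      # The certificate exists only in this job, which runs no npm scripts at all.
      - run: codesign --sign "$SIGNING_IDENTITY" --timestamp app-bundle/MyApp.app
        env:
          SIGNING_IDENTITY: ${{ secrets.APPLE_DEVELOPER_ID }}
```

Under this split, a poisoned postinstall hook in the build job has nothing to steal: the secret never enters its environment.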

The Apps and Versions Users Must Move To

Every OpenAI macOS desktop product was re-signed with the rotated certificate. Users who installed any of these apps before the disclosure need to be on at least these builds by May 8:

Application       Minimum Safe Version
ChatGPT Desktop   1.2026.071
Codex App         26.406.40811
Codex CLI         0.119.0
Atlas             1.2026.84.2

The in-app updater pulls the new signed versions automatically for users who keep auto-update enabled. For managed fleets, the same builds are available through the standard distribution channels. OpenAI is not publishing new features alongside the rotation; these releases exist purely to migrate users off the revoked certificate.

After May 8, the old builds will not self-update. They will still open on current macOS versions, but any security update OpenAI ships afterward will fail to install cleanly, and a future macOS release could refuse to launch them entirely. The company advised organizations to audit their device fleets for outdated OpenAI apps and push the new builds through MDM before the deadline.
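For a fleet audit, the version comparison is easy to script. A minimal sketch using `sort -V`; the installed version here is a stand-in value, since on a real managed Mac you would read it from the app bundle (e.g. the `CFBundleShortVersionString` key in the app's Info.plist):

```shell
# Hypothetical fleet-audit helper. version_ge A B succeeds when A >= B
# under version-sort ordering (GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

installed="1.2026.071"   # stand-in; read from the app bundle on a real device
minimum="1.2026.071"     # minimum safe ChatGPT Desktop build from the table above

if version_ge "$installed" "$minimum"; then
  echo "OK: $installed"
else
  echo "OUTDATED: $installed (needs >= $minimum)"
fi
```

Run against each app in the table, the same check flags every device that must be updated before the deadline.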

North Korea Targeted the Software Supply Chain, Not Crypto

UNC1069 is the group Google Threat Intelligence Group has tracked since at least 2018. Microsoft tracks the same cluster as Sapphire Sleet. Other vendors know it as BlueNoroff, Stardust Chollima, and CryptoCore. The group is attributed to the North Korean government and has historically focused on cryptocurrency theft.

The Axios operation is a different shape. Poisoning the most popular JavaScript HTTP client, waiting for CI pipelines to eat the payload, and hunting for code-signing material is an infrastructure attack, not a wallet grab. Security analysts at Hackread noted the group appears to have expanded its targeting to "high-value signing keys and credentials that are usually unreachable through direct attacks."

OpenAI's CI pipeline was exactly that kind of target. So were the pipelines at every other company that ran a vulnerable build in the three-hour window. Wiz's scans found the poisoned Axios in roughly 3% of cloud environments it inspected. OpenAI is the most visible name to disclose a direct impact. It is almost certainly not the only one.

The Counterargument: This Is a Gatekeeper Win, Not a Failure

A fair reading of the incident is that the defenses worked. The malicious package was detected within six minutes by Socket's automated scanner, removed from npm within three hours, and the specific way OpenAI's CI job sequences certificate injection appears to have prevented exfiltration. No user data was accessed. No malicious binary was signed. The broken window closed before anyone could climb through it.

OpenAI's decision to revoke anyway is the opposite of a cover-up. It is also the opposite of the response many companies default to, which is silence until a disclosure is forced. The public writeup names the exact misconfiguration, publishes the minimum safe versions, and gives a firm deadline. Security researchers at Socket and ReversingLabs both called the response textbook.

The uncomfortable part is what the near miss implies. If the job sequencing had been slightly different, or the payload had been designed to wait for certificate injection instead of executing immediately, the outcome changes from "precautionary rotation" to "signed North Korean malware on every Mac that auto-updated ChatGPT." The defense worked this time because of how a GitHub Actions job happened to be ordered. That is not a safety margin anyone should rely on twice.

What Every Team Building on npm Needs to Do This Week

The specific lessons OpenAI wrote down apply to every organization that ships signed software:

  • Pin every dependency to a commit hash in CI, not a floating tag. The tag moves when a maintainer is compromised. The commit does not.
  • Configure minimumReleaseAge on package installers. A 24-hour buffer on new releases would have caught the Axios payload before any CI job pulled it.
  • Isolate code-signing material from general-purpose build steps. Inject certificates at the last possible moment, in a job that runs no third-party code.
  • Audit macOS app versions on managed fleets right now. If you deploy ChatGPT Desktop, Codex, Codex CLI, or Atlas through MDM, you have until May 8.
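The cooldown in the second bullet is a one-line setting where the package manager supports it. pnpm, for example, accepts a `minimumReleaseAge` option (a sketch, assuming a pnpm release that supports the setting; the value is in minutes):

```yaml
# pnpm-workspace.yaml — assumes a pnpm version with minimumReleaseAge support
minimumReleaseAge: 1440   # ignore package versions published less than 24 hours ago
```

A 24-hour buffer comfortably outlasts the three hours and eight minutes the poisoned Axios spent on the registry: the release would have been pulled from npm long before any build was allowed to install it.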

For broader context on how fragile the AI infrastructure supply chain has become, the LiteLLM backdoor that handed attackers control of a package with 95 million monthly downloads is the reference case. Both incidents end the same way: a single compromised package, a single CI job, global blast radius.

The Bottom Line

OpenAI found a North Korean backdoor inside the job that signs its Mac apps, concluded the certificate probably survived, and revoked it anyway. Every user of ChatGPT Desktop, Codex, Codex CLI, or Atlas on macOS has until May 8 to move to a build signed with the new key. The company will not issue the warning twice.

The broader signal is the one security teams should not need a second incident to internalize. The group that ran this operation did not want OpenAI's user data. It wanted OpenAI's signing key, because a signing key is a way to deliver malware that Gatekeeper treats as trustworthy. Every frontier lab, every major developer tool, every CI/CD pipeline that pulls from npm on a floating tag is on the same target list.

The window closed in three hours on March 31. The next one will be shorter.

