OpenAI Rotates macOS Certificates, Urges App Updates
In a blog post, OpenAI disclosed a supply-chain compromise in which a malicious third-party npm package executed inside a macOS app-signing GitHub Actions workflow on March 31, 2026, prompting the company to rotate its macOS code-signing certificates. OpenAI wrote "we found no evidence that OpenAI user data was accessed, that our systems or intellectual property was compromised, or that our software was altered" (OpenAI blog). The company identified the impacted macOS apps as ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and listed the earliest releases signed with the rotated certificate (OpenAI). Reporting differs on the library involved: OpenAI's post references a compromised Axios package, while other outlets reported malicious packages in the TanStack/npm ecosystem (Forbes; PCMag; AppleInsider). AppleInsider reports that Apple's Gatekeeper protections will stop trusting apps signed with the older certificates after June 12, 2026, making updates mandatory for affected macOS users (AppleInsider).
What happened
In a blog post dated April 10, 2026, OpenAI disclosed a software supply-chain incident in which a GitHub Actions workflow used in its macOS app-signing process downloaded and executed a malicious third-party package on March 31, 2026. OpenAI wrote, "we found no evidence that OpenAI user data was accessed, that our systems or intellectual property was compromised, or that our software was altered." The company identified the impacted macOS applications as ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and listed the earliest releases signed with the rotated certificate (OpenAI). OpenAI said the workflow had access to a certificate and notarization material used for signing macOS applications and, out of an abundance of caution, rotated the certificate (OpenAI).
Technical details
Per OpenAI's disclosure, the workflow executed a malicious version of the third-party Axios npm library (OpenAI). Other coverage attributed the incident to malicious packages published in the TanStack/npm ecosystem, reporting that two employee devices installed malicious TanStack packages and that limited unauthorized access was subsequently observed in internal source-code repositories holding signing material (PCMag; AppleInsider). Forbes and security reporting cited a GitHub account compromise and short-lived malicious Axios versions that were removed within hours (Forbes).
Investigation findings (reported)
OpenAI reported that its analysis concluded the signing certificate present in the job was "likely not successfully exfiltrated" but that the company would treat the certificate as compromised and revoke and rotate it as a precaution (OpenAI). PCMag reported that investigators detected activity consistent with credential-focused exfiltration in a limited subset of internal repositories to which the impacted employees had access (PCMag). OpenAI provided a list of affected release builds it identified as signed with the rotated certificate (OpenAI).
Industry context
Editorial analysis: Supply-chain compromises in package ecosystems like npm continue to be an effective avenue for attackers to reach build and signing tooling. Companies that discover potential exposure of signing material commonly rotate certificates and push mandatory updates to limit the window in which an attacker could misuse signing keys. For practitioners, the incident reinforces the importance of restricting CI/CD job access, auditing artifact notarization paths, and monitoring for unexpected package changes in dependency trees.
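One lightweight way to monitor for unexpected package changes in a dependency tree is to diff successive npm lockfiles in CI and flag any new or version-changed entries for review. A minimal sketch, assuming the `package-lock.json` v2/v3 format with a top-level `"packages"` map; the package names and versions below are illustrative fixtures, not versions tied to this incident:

```python
import json

def lockfile_diff(old_lock: dict, new_lock: dict) -> dict:
    """Compare the 'packages' maps of two npm lockfiles and report
    dependencies that were added or whose version changed."""
    old_pkgs = old_lock.get("packages", {})
    new_pkgs = new_lock.get("packages", {})
    changes = {}
    for name, meta in new_pkgs.items():
        old_meta = old_pkgs.get(name)
        if old_meta is None:
            changes[name] = ("added", meta.get("version"))
        elif old_meta.get("version") != meta.get("version"):
            changes[name] = (old_meta.get("version"), meta.get("version"))
    return changes

# Illustrative lockfile fragments (hypothetical versions):
old_lock = json.loads('{"packages": {"node_modules/axios": {"version": "1.6.0"}}}')
new_lock = json.loads(
    '{"packages": {"node_modules/axios": {"version": "1.6.1"},'
    ' "node_modules/left-pad": {"version": "1.3.0"}}}'
)
print(lockfile_diff(old_lock, new_lock))
```

In practice a CI gate like this would fail the build (or require a human approval) whenever the diff is non-empty, shrinking the window in which a short-lived malicious version can reach build and signing jobs.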
Context and significance
Editorial analysis: This incident is notable because code-signing material can enable an attacker to distribute binaries that bypass OS-level trust checks such as Apple's Gatekeeper. AppleInsider reports that Apple's protections will stop trusting apps signed with the older certificates after June 12, 2026, which creates a hard deadline for macOS clients to move to re-signed binaries (AppleInsider). The operational impact is concentrated on desktop macOS users of the affected OpenAI apps; OpenAI and multiple security outlets emphasize there is no reported evidence of customer data exfiltration (OpenAI; Forbes; PCMag).
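For users on affected machines, the practical question is whether an installed app still passes signature and Gatekeeper checks. A minimal sketch using Apple's standard `codesign` and `spctl` tools; the app path is a hypothetical example, and on non-macOS hosts the check is simply skipped:

```shell
# Hypothetical install path; substitute the app you want to check.
APP="/Applications/ChatGPT.app"

if ! command -v codesign >/dev/null 2>&1; then
  # codesign/spctl only exist on macOS; skip elsewhere.
  RESULT="skipped: codesign unavailable (not macOS)"
elif codesign --verify --deep --strict "$APP" 2>/dev/null \
     && spctl --assess --type execute "$APP" 2>/dev/null; then
  RESULT="trusted"
else
  RESULT="untrusted: update to a re-signed build"
fi
echo "$RESULT"
```

Once Gatekeeper stops trusting the older certificates, builds signed with them should fail the `spctl --assess` step, which is why updating to re-signed binaries before the deadline matters.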
What to watch
Observers will track whether post-rotation notarization and revocation logs show any anomalous signing activity, and whether downstream projects that pull affected npm packages publish indicators of compromise. Because sources differ on which npm packages were involved (Axios in OpenAI's post versus TanStack packages in other outlets), forensic reconciliation of the supply-chain timeline remains an open item. Security teams should watch for follow-up technical writeups from OpenAI, and for any third-party advisories from package maintainers or GitHub about account compromises or malicious package uploads.
Scoring Rationale
A supply-chain compromise that touches code-signing certificates for widely used desktop AI apps is a notable security event for practitioners. The immediate operational risk is elevated by a hard deadline from platform trust controls, and the incident highlights persistent dependency-chain risks.