OpenAI Codex reaches 90M installs in one week

CryptoBriefing reports that OpenAI Codex recorded 90 million installs in a single week, a weekly-download figure rather than a lifetime total. The outlet attributes the surge to the rollout of GPT-5.5, which it says expanded Codex's context capabilities to a 400K-token window for the tool and API support up to 1,000,000 tokens, and reduced token usage by roughly 40% versus the prior model. CryptoBriefing also reports API pricing of $5 per million input tokens and $30 per million output tokens, and cites performance gains described as a 35x lower cost per million tokens on NVIDIA systems. The article notes Codex surpassed Anthropic's Claude Code in weekly momentum during the period covered.
What happened
CryptoBriefing reports that OpenAI Codex achieved 90 million installs in a single week, framing that figure as weekly downloads rather than cumulative installs. The same report ties the surge to the rollout of GPT-5.5, which it says gave Codex a 400K-token context window for the tool and API support for contexts up to 1,000,000 tokens. CryptoBriefing also reports that GPT-5.5 uses approximately 40% fewer tokens per task than the previous model, and that API pricing for the new model is $5 per million input tokens and $30 per million output tokens.
Technical details
CryptoBriefing reports that GPT-5.5 improvements produced what the article describes as a 35x lower cost per million tokens on NVIDIA systems, and that more than 10,000 NVIDIA employees are reportedly seeing faster token generation and improved coding efficiency. The piece emphasizes the practical significance of larger context windows, arguing that a 1,000,000-token API window reduces the need for aggressive code chunking in long-lived software projects.
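The chunking point can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative only: the project size and output reserve are assumptions, not figures from the report; only the 400K and 1,000,000 window sizes come from the article.

```python
import math

def chunks_needed(project_tokens: int, context_window: int,
                  reserve_for_output: int = 8_000) -> int:
    """Number of context-sized chunks needed to cover a project,
    reserving part of the window for the model's response."""
    usable = context_window - reserve_for_output
    return math.ceil(project_tokens / usable)

# Hypothetical mid-sized codebase of 2.5M tokens (an assumption).
project = 2_500_000

for window in (128_000, 400_000, 1_000_000):
    print(f"{window:>9}-token window -> "
          f"{chunks_needed(project, window)} chunks")
```

Under these assumptions the same codebase drops from 21 chunks at a 128K window to 7 at 400K and 3 at 1,000,000, which is the workflow difference the article is gesturing at.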
Industry context
Editorial analysis: Rapid weekly uptake at this scale, combined with materially larger context windows and reported token-efficiency gains, shifts the economics of developer-facing AI. Companies and projects that rely on AI-assisted coding face different cost and workflow tradeoffs when models can process substantially more code in a single context and consume fewer tokens per task. Public reporting also frames this moment as a competitive win for Codex relative to Anthropic's Claude Code, given the reported overtake in weekly momentum.
What to watch
Editorial analysis: Observers should watch for independent benchmarks that confirm the reported 40% token-efficiency gain and the claimed 35x cost improvement on GPU platforms, since those metrics materially affect adoption and operating costs. Also track whether third-party developer communities actually adopt the longer-context workflows enabled by GPT-5.5, and whether competing vendors ship comparable large-context or token-efficiency improvements. Finally, monitor how API pricing at $5/$30 per million tokens influences self-hosting versus managed-API decisions among engineering teams.
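The pricing and efficiency claims combine into a simple per-task cost estimate. The sketch below uses the $5/$30 per-million-token prices and the ~40% token reduction reported in the article; the per-task token counts are illustrative assumptions, not measured workloads.

```python
# Reported API pricing: $5 per 1M input tokens, $30 per 1M output tokens.
INPUT_PRICE = 5.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 30.00 / 1_000_000  # USD per output token

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough API cost in USD for a single coding task."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Hypothetical task: 200K tokens of code context in, 10K tokens out.
baseline = task_cost(200_000, 10_000)

# Apply the reported ~40% reduction in tokens per task.
reduced = task_cost(int(200_000 * 0.6), int(10_000 * 0.6))

print(f"baseline: ${baseline:.2f}, with 40% fewer tokens: ${reduced:.2f}")
```

For this assumed task shape, the per-task cost falls from $1.30 to $0.78, which is the kind of arithmetic engineering teams will run when weighing managed-API usage against self-hosting.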
Scoring Rationale
The reported one-week spike and the combination of much larger context windows plus claimed token-efficiency and pricing changes are notable for engineering teams and platform architects. Independent verification of the performance and cost claims will determine how broadly this shifts tool selection and deployment economics.