OpenAI Launches $100 Pro Plan for Codex

What happened
OpenAI introduced a new ChatGPT Pro subscription tier at $100/month, positioned for Codex users who need more capacity than the $20/month ChatGPT Plus plan provides. The company continues to offer the higher-capacity $200/month Pro tier. The new $100 tier unlocks 5× the usage limits of Plus (and temporarily 10× Codex usage through May 31), while the $200 tier offers 20× the Plus limits. Both Pro tiers include the same core capabilities: exclusive Pro models, advanced features, and unlimited access to OpenAI’s Instant and Thinking model variants.
Technical context
Codex — OpenAI’s code-generation family and agentic coding tooling — is the explicit target. The company recently shipped a Codex Mac app to support developer workflows, and cited adoption metrics: Codex now has over 3 million weekly users, a 5× increase in three months and roughly 70% month-over-month usage growth. The pricing tiers are presented as usage-quota/throughput differentiators rather than model-difference tiers; OpenAI emphasizes higher limits and continuous-run capability for heavy parallel workflows at the $200 level.
Key details
- Pricing: Plus remains $20/month; Pro $100 and Pro $200 are the new higher-capacity options.
- Usage: Pro $100 = 5× Plus limits (10× Codex temporarily); Pro $200 = 20× Plus limits.
- Features: Both Pro tiers provide the same core feature set, exclusive models, and unlimited Instant/Thinking model access.
- Adoption: Codex >3M weekly users, 5× growth in three months, 70% MoM growth.
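The tiers differ in capacity, not model access, so the practical comparison is price per unit of Plus-equivalent quota. A minimal sketch of that arithmetic, using only the multiples stated above (the tier names and dictionary structure are illustrative, not an OpenAI API):

```python
# Price per Plus-equivalent usage unit, from the published multiples.
# These figures come from the announcement; nothing here queries OpenAI.
TIERS = {
    "Plus":    {"price": 20,  "multiple": 1},
    "Pro 100": {"price": 100, "multiple": 5},   # temporarily 10x for Codex through May 31
    "Pro 200": {"price": 200, "multiple": 20},
}

def cost_per_multiple(tier: str) -> float:
    """Dollars per month per 1x of Plus-level usage capacity."""
    t = TIERS[tier]
    return t["price"] / t["multiple"]

for name in TIERS:
    print(f"{name}: ${cost_per_multiple(name):.2f} per Plus-equivalent unit")
```

On these numbers, Pro $200 is the cheapest per unit of capacity ($10 vs. $20), and the temporary 10× Codex boost briefly brings the $100 tier to the same effective rate for Codex workloads.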
Why practitioners should care
This is a pragmatic capacity-and-cost pivot for teams using code-generation at scale. The $100 tier creates a mid-market option for projects that exceed Plus-level throughput but don’t require the top-tier concurrency and continuous-run guarantees. For engineering managers and ML platform owners, the change affects cost modeling, quota governance, and evaluation of whether to continue consuming hosted Codex services or to invest in alternative deployment strategies. The temporary 10× Codex boost through May 31 is a useful window for workload migration or burst testing.
What to watch
Monitor the quota definitions and rate-limit semantics that OpenAI publishes (per-minute vs. daily vs. concurrency), enterprise/volume discounts, and whether API parity extends these Pro limits. Track whether the two Pro tiers converge in name and positioning, and how teams instrument cost and guardrails for code-generation usage.
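The distinction between per-minute, daily, and concurrency limits matters for client-side instrumentation. A minimal guardrail sketch combining two of those semantics — a sliding-window per-minute cap and a max-in-flight cap — under assumed placeholder numbers (these are not published OpenAI limits, and `UsageGuard` is a hypothetical helper, not part of any SDK):

```python
import threading
import time
from collections import deque

class UsageGuard:
    """Client-side guardrail illustrating two distinct limit semantics:
    a per-minute request cap (sliding window) and a concurrency cap.
    Limit values are placeholders, not published OpenAI quotas."""

    def __init__(self, per_minute: int = 60, max_concurrent: int = 4):
        self.per_minute = per_minute
        self.window = deque()                     # timestamps of recent requests
        self.sem = threading.Semaphore(max_concurrent)
        self.lock = threading.Lock()

    def acquire(self) -> None:
        self.sem.acquire()                        # blocks while too many requests are in flight
        with self.lock:
            now = time.monotonic()
            while self.window and now - self.window[0] > 60:
                self.window.popleft()             # drop entries older than 60 s
            if len(self.window) >= self.per_minute:
                self.sem.release()
                raise RuntimeError("per-minute budget exhausted")
            self.window.append(now)

    def release(self) -> None:
        self.sem.release()                        # call after the request completes
```

Wrapping each code-generation call in `acquire()`/`release()` keeps a team inside its quota regardless of which semantics the provider enforces, and makes burst tests (such as during the temporary 10× window) reproducible.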
Scoring Rationale
The launch matters to developers and ML engineering teams using code-generation: it changes cost-to-scale and offers a practical mid-tier for scaling workloads. It’s not a foundational research breakthrough, but it materially affects production cost and capacity planning.



