llm-openai-via-codex Enables OpenAI Access via Codex Subscription
llm-openai-via-codex is a small plugin that lets developers use an existing Codex subscription to call OpenAI models through the llm CLI. Published on GitHub by simonw, the package installs into an environment that already runs the llm tool and exposes Codex-backed model identifiers such as openai-codex/gpt-5.5. Basic commands include llm install llm-openai-via-codex, llm models -q openai-codex, and llm -m openai-codex/gpt-5.5 'Your prompt'. The repo includes test instructions using uv run pytest and examples for local development. This is a pragmatic integration for teams with Codex subscriptions who want to reuse that access to experiment with OpenAI endpoints from the familiar llm workflow.
What happened
llm-openai-via-codex is an open-source plugin published to GitHub that enables access to OpenAI models through the llm CLI using an existing Codex subscription. The repository, authored by simonw, exposes model names like openai-codex/gpt-5.5, so you can list and invoke OpenAI models provisioned through that subscription.
Technical details
The plugin integrates into the llm tooling stack. Install it into the same environment as llm with llm install llm-openai-via-codex. To enumerate the available models, use llm models -q openai-codex; to call a model, run llm -m openai-codex/gpt-5.5 'Your prompt'. The repository also includes development and test instructions: run the test suite with uv run pytest, and exercise the in-development plugin with uv run llm -m openai-codex/gpt-5.5 'Talk to me in Swedish'.
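The commands above can be sketched as a short shell session. This assumes the llm CLI is already installed and an active Codex subscription is configured; the commands and the openai-codex/gpt-5.5 model id are taken from the repository's examples:

```shell
# Install the plugin into the same environment as the llm CLI
llm install llm-openai-via-codex

# List the Codex-backed models the plugin exposes
llm models -q openai-codex

# Invoke a model through your Codex subscription
llm -m openai-codex/gpt-5.5 'Your prompt'

# During local development, run the checkout's plugin and tests via uv
uv run llm -m openai-codex/gpt-5.5 'Talk to me in Swedish'
uv run pytest
```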
Features and capabilities
- Reuses an existing Codex subscription to surface OpenAI model endpoints under openai-codex/* identifiers
- CLI-first experience for developers already using the llm tool, requiring minimal configuration
- Local development and test guidance, with uv-based commands and pytest for validation
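To illustrate the namespacing pattern behind openai-codex/* identifiers, here is a minimal stand-alone sketch. This is an analogy only, not the plugin's actual code: a real llm plugin registers models through llm's plugin hooks, and the backend here is a hypothetical stub rather than a call to OpenAI via Codex auth.

```python
# Illustrative sketch of prefix-namespaced model routing, analogous to
# how a plugin can expose provider-backed models under openai-codex/*.
from typing import Callable, Dict, List

# Registry mapping fully qualified model ids to backend callables
MODEL_REGISTRY: Dict[str, Callable[[str], str]] = {}

def register_model(model_id: str, backend: Callable[[str], str]) -> None:
    """Register a model under a provider-prefixed identifier."""
    MODEL_REGISTRY[model_id] = backend

def list_models(query: str) -> List[str]:
    """Return registered model ids matching a query, like `llm models -q`."""
    return sorted(m for m in MODEL_REGISTRY if query in m)

def prompt(model_id: str, text: str) -> str:
    """Dispatch a prompt to the backend registered for model_id."""
    try:
        backend = MODEL_REGISTRY[model_id]
    except KeyError:
        raise ValueError(f"Unknown model: {model_id}")
    return backend(text)

# Hypothetical stub backend; the real plugin would route the request
# to OpenAI using Codex-subscription credentials.
register_model("openai-codex/gpt-5.5", lambda text: f"[gpt-5.5] {text!r}")
```

Keeping the provider prefix in the model id lets one CLI front several providers without credential or naming collisions, which is the design choice the plugin's openai-codex/* identifiers reflect.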
Context and significance
This is a pragmatic interoperability plugin, not a new model or major platform change. It lowers friction for teams that have Codex access but prefer to interact through the llm CLI rather than direct OpenAI SDK calls. For rapid experimentation and toolchain consolidation it can reduce the overhead of managing separate credentials and adapter code. It also reflects a broader pattern of lightweight adapters and plugins that map proprietary provider access into common developer tools.
What to watch
Adoption depends on how many teams maintain active Codex subscriptions and prefer llm as their interface. Check for updates to authentication semantics, model availability, and permission boundaries as OpenAI evolves billing and access controls.
Scoring Rationale
This is a practical tooling release that benefits developers who already use Codex and the `llm` CLI. It is not a model or infrastructure milestone, so it rates as a solid, practitioner-relevant tools update, with a freshness adjustment applied because it is a very recent release.