Anthropic Gave Its Cyber AI to 40 Companies. OpenAI Just Gave It to Thousands.

LDS Team
Let's Data Science
8 min
On April 14, OpenAI released GPT-5.4-Cyber, a fine-tuned model that lowers its refusal boundaries for security work and can reverse engineer binaries without source code. Anthropic's rival model, Claude Mythos Preview, was handed to roughly 40 hand-picked institutions. OpenAI is opening the door to any verified defender who wants in.

Exactly one week after Anthropic invited a small list of Apple, Microsoft, and Amazon security teams to try Claude Mythos Preview, OpenAI published a blog post titled "Trusted access for the next era of cyber defense" and released its own answer.

The model is called GPT-5.4-Cyber. It is a version of GPT-5.4 that OpenAI describes as "cyber-permissive," meaning its refusal boundary for legitimate security research has been pulled down. It is the first OpenAI model with public support for binary reverse engineering, a capability that lets a defender point the model at a compiled executable and have it map out the program's logic, find vulnerabilities, and flag malware behavior without ever seeing the source code.

The launch is not a product release in the consumer sense. You cannot buy GPT-5.4-Cyber. You cannot even log in and ask for it. What OpenAI announced is a program, the expanded Trusted Access for Cyber initiative, and a philosophy: the company does not want to be the one deciding who is allowed to defend their own network.

The Two Labs Chose Opposite Deployment Strategies

Anthropic's Mythos Preview, introduced on April 7, was an exercise in controlled release. The company picked about 40 institutional partners, told reporters the model was too capable to hand out more widely, and argued that a tightly curated set of customers was the only responsible way to ship a system that could, in its words, find "thousands" of zero-day vulnerabilities across every major operating system and browser.

OpenAI looked at the same capability tier and reached a different conclusion. Its blog post states the position directly: "We don't think it's practical or appropriate to centrally decide who gets to defend themselves."

The Trusted Access for Cyber program, which OpenAI launched in February 2026 alongside a $10 million cybersecurity grant fund, is now scaling to "thousands of verified individual defenders and hundreds of teams responsible for defending critical software." Individual researchers verify their identity at chatgpt.com/cyber. Enterprise customers request access through an OpenAI representative. Once approved at the highest tier, they can prompt GPT-5.4-Cyber directly.

That is two orders of magnitude more people than Anthropic has granted Mythos access to.

What Binary Reverse Engineering Actually Unlocks

Most large language models refuse when asked to disassemble a binary or analyze exploit code. The refusal is not a technical limit; it is a product of the safety training applied after pretraining. Cyber-permissive variants loosen that training in narrow ways for vetted users.

Binary reverse engineering is the specific capability that matters here. A defender analyzing a suspected ransomware sample, a malware researcher studying an implant pulled off a compromised server, and a vulnerability engineer fuzzing a closed-source networking appliance all need to work backwards from compiled machine code. Until now, that work sat in tools like Ghidra, IDA Pro, and Binary Ninja, supplemented by human intuition. An LLM that can read a stripped binary and narrate its behavior compresses hours of reversing into minutes.
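The first pass of that workflow, static triage, gives a feel for what gets handed to a model. The sketch below is illustrative only: it collects the cheap static features a reverser inspects first (a hash and printable strings) and packages them as an analysis prompt. The prompt format is an assumption, and no real OpenAI endpoint or model API is shown.

```python
import hashlib
import re

def triage_binary(data: bytes, min_len: int = 6) -> dict:
    """Collect basic static features: a hash for identification and
    printable ASCII strings (length >= min_len) for behavioral hints."""
    strings = re.findall(rb"[ -~]{%d,}" % min_len, data)
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "size": len(data),
        "strings": [s.decode("ascii") for s in strings[:50]],
    }

def build_prompt(report: dict) -> str:
    """Package the static features as a prompt for a cyber-permissive
    model. The wording is hypothetical, not OpenAI's actual interface."""
    return (
        f"Analyze this binary (sha256 {report['sha256']}, "
        f"{report['size']} bytes).\nNotable strings:\n"
        + "\n".join(report["strings"])
        + "\nSummarize likely behavior and flag suspicious indicators."
    )

# A toy "binary": an ELF magic, padding, an API name, and a C2-style URL.
sample = (b"\x7fELF" + b"\x00" * 16
          + b"CreateRemoteThread\x00http://198.51.100.7/gate.php\x00")
report = triage_binary(sample)
prompt = build_prompt(report)
```

In practice the strings pass would be one of many feature extractors (disassembly, imports, section entropy) feeding the model; it is shown here because it is the simplest step that already surfaces suspicious indicators like the injected-thread API and the hard-coded URL.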

OpenAI says GPT-5.4-Cyber can do that. The company also says its companion tool, Codex Security, has already contributed to fixes for over 3,000 critical and high-severity vulnerabilities since entering private beta six months ago. Codex Security runs against source code; GPT-5.4-Cyber extends the same pattern matching to binaries where source is not available.

For a senior security engineer, that is the difference between auditing software you wrote and auditing software someone shipped you.

Access Is Tiered, and the Top Tier Is Narrow

OpenAI's public documentation describes a multi-level structure inside Trusted Access for Cyber. Not every approved researcher gets GPT-5.4-Cyber. The model sits behind an additional review beyond the base TAC verification, and participants currently enrolled can apply separately to the higher tier.

The practical path for a security team looks like this:

1. Identity verification: an individual or enterprise completes KYC at chatgpt.com/cyber or through an OpenAI representative.
2. Base TAC approval: the user gains access to Codex Security and standard cyber tooling.
3. Higher-tier application: approved users apply for expanded model access.
4. GPT-5.4-Cyber unlock: qualifying researchers and teams receive the cyber-permissive model.
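The flow reduces to a simple gating rule: each approval level unions in more tooling on top of the level below it. The sketch below models that rule; the tier names and tool identifiers are illustrative, not OpenAI's actual access-control implementation.

```python
from enum import IntEnum

class TacTier(IntEnum):
    """Illustrative approval levels in the Trusted Access for Cyber flow."""
    UNVERIFIED = 0   # identity check not yet completed
    BASE = 1         # base TAC approval: Codex Security and standard tooling
    CYBER_MODEL = 2  # higher-tier review cleared: GPT-5.4-Cyber unlocked

def allowed_tools(tier: TacTier) -> set:
    """Return the tool set for a tier; each tier inherits everything below."""
    tools = set()
    if tier >= TacTier.BASE:
        tools.add("codex-security")
    if tier >= TacTier.CYBER_MODEL:
        tools.add("gpt-5.4-cyber")
    return tools
```

Under this model, a base-tier user sees only Codex Security; the permissive model appears only after the separate higher-tier review clears.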

OpenAI has not published the exact criteria for that final unlock. The company said it will lean on "iterative deployment with safety updates" and the program's identity verification, and that it expects the current safeguards to hold even as more powerful permissive models follow.

The Other Side: Access at Scale Is Still a Loaded Gun

The obvious objection to OpenAI's approach is that a "verified security researcher" is a broad category, and malware authors have made careers out of claiming to be one.

Anthropic's rollout reflected that concern. The company argued that Mythos was too capable to release widely, and that institutional partnerships allow it to monitor misuse in ways a KYC check cannot. Analysts at CyberScoop noted that OpenAI's scaled approach puts the burden on identity verification and usage monitoring, neither of which has a long track record at this scale for dual-use AI.

There is also the question of downstream leakage. A verified defender at a Fortune 500 bank has different threat exposure than an independent researcher working from a coffee shop laptop. Both could be approved. Both could be phished. A session token for a model that can reverse engineer arbitrary binaries is, from an attacker's point of view, worth stealing.

OpenAI's counter is that the alternative is worse. The company's stated position is that a small circle of institutional partners cannot keep pace with the actual attack surface, and that defenders outnumber attackers only if the tools reach them. "Our goal is to make these tools as widely available as possible while preventing misuse," the company wrote.

Both labs are making the same bet on capability. They disagree on who deserves the key.

What Practitioners Should Actually Do

For a security team reading this, the near-term implications are concrete:

  • If you run a SOC or a vulnerability research team, apply for Trusted Access for Cyber now. The base tier grants Codex Security, which is valuable even before the higher-tier review clears.
  • If you maintain an open-source project with security surface, the $10 million grant fund and the Codex Security track are the immediate channels. OpenAI has used the program to patch projects it depends on.
  • If you build tooling around AI for security, binary reverse engineering is now a first-party capability at a frontier lab. Products that wrapped GPT-4-class models with jailbreaks to simulate this are about to be undercut by an official, audited alternative.
  • If you work in AI safety, the deployment delta between Anthropic and OpenAI on the same capability is the most concrete natural experiment the field has had. Whichever strategy shows fewer real-world harms in the next twelve months will shape how every subsequent dual-use model gets released.

For broader context on how the cyber AI race arrived here, see the LDS coverage of Anthropic's Mythos Preview and Project Glasswing, and of the LiteLLM supply chain attack that exposed how fragile the security scanner layer can be; both frame the problem these models are trying to solve.

The Bottom Line

Two labs, one capability, two philosophies. Anthropic believes the safest way to ship a model that can find zero days is to ship it to the fewest hands possible. OpenAI believes the safest way is to ship it to the most verified hands possible. They cannot both be right, and the next twelve months of incident reports, misuse studies, and vulnerability disclosures will decide which side of that argument the industry takes seriously.

The argument matters beyond cybersecurity. Every frontier lab is about to face the same choice for models that can design proteins, write exploits, or automate social engineering. Whichever deployment strategy proves defensible on GPT-5.4-Cyber and Mythos will become the template.

OpenAI's bet, in its own words: "We don't think it's practical or appropriate to centrally decide who gets to defend themselves." The next test is whether "verified" is a high enough wall.
