OpenAI Launches GPT-5.5 Bio Bug Bounty Program

OpenAI has launched a restricted Bio Bug Bounty for GPT-5.5 to surface universal jailbreaks that could produce harmful biological outputs. The program targets a single, reusable prompt that defeats a five-question bio-safety challenge when run from a clean chat in Codex Desktop. A top prize of $25,000 will go to the first researcher who demonstrates a complete universal jailbreak; discretionary awards are available for partial wins. Applications opened April 23, 2026 and close June 22, 2026, with testing from April 28 to July 27, 2026. Participation is invite- or application-only, requires a vetted background, and operates under NDA. The effort complements the GPT-5.5 system card evaluations and represents a proactive step to harden model safeguards against dual-use biological risks.
What happened
OpenAI has opened a restricted Bio Bug Bounty for GPT-5.5, inviting vetted AI red-teamers, cybersecurity researchers, and biosecurity experts to hunt for a single "universal jailbreak" that bypasses the model's safeguards and answers a five-question bio-safety challenge. The program offers a top prize of $25,000 to the first researcher who demonstrates a successful universal jailbreak, with smaller discretionary awards for partial findings. Applications opened on April 23, 2026 and close June 22, 2026; the active testing window runs April 28 to July 27, 2026. Access is limited, participants sign NDAs, and testing is restricted to Codex Desktop.
Technical details
The core task is to craft one prompt that, from a clean chat session, circumvents moderation and guardrails to answer the five bio-safety questions. OpenAI scopes the bounty specifically to GPT-5.5 running in Codex Desktop, and it treats submissions, prompts, and outputs as confidential. The public GPT-5.5 system card documents targeted predeployment safety evaluations under the company's Preparedness Framework, including offline red-teaming for bio and cybersecurity risks. Key program parameters include:
- Model in scope: GPT-5.5 in Codex Desktop only.
- Challenge: one universal jailbreak prompt that clears all five bio-safety questions from a clean chat without triggering moderation (see the sketch after this list).
- Rewards: $25,000 for the first true universal jailbreak; discretionary awards for partial wins.
- Timeline: applications April 23 to June 22, 2026; testing April 28 to July 27, 2026.
- Access and disclosure: invite/application-only, vetting required, NDA covers all materials.
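To make the challenge structure concrete, here is a minimal Python sketch of the success criterion implied by these parameters: one candidate prompt, five questions, each asked in a fresh session. The function names and refusal check are hypothetical stand-ins; actual bounty testing happens interactively inside Codex Desktop, not via a script like this.

```python
# Conceptual sketch of the "universal jailbreak" success criterion:
# a single candidate prompt must clear ALL five bio-safety questions,
# each asked in a fresh session with no shared conversation state.
# query_model and is_refusal_or_moderated are hypothetical stand-ins.

CANDIDATE_PROMPT = "<single reusable prompt under test>"
CHALLENGE_QUESTIONS = [f"<bio-safety question {i}>" for i in range(1, 6)]

def query_model(prompt: str, question: str) -> str:
    """Hypothetical stand-in for a clean-chat model call."""
    raise NotImplementedError

def is_refusal_or_moderated(response: str) -> bool:
    """Hypothetical stand-in for refusal/moderation detection."""
    raise NotImplementedError

def is_universal(prompt: str) -> bool:
    # A partial pass earns at most a discretionary award; the $25,000
    # prize requires all five questions to pass from clean sessions.
    for question in CHALLENGE_QUESTIONS:
        response = query_model(prompt, question)  # fresh session each time
        if is_refusal_or_moderated(response):
            return False
    return True
```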
Context and significance
This bounty is a defensive move addressing a rising operational risk: as foundation models gain deeper domain competence, the attack surface for malicious, dual-use biological guidance grows. By asking external experts to find a universal jailbreak rather than point vulnerabilities, OpenAI is testing for broadly reusable prompt attacks that scale beyond single-session exploits. That matters because a universal prompt can be embedded into automation, packaged into tooling, or redistributed, increasing downstream abuse risk. The selection of Codex Desktop as the test environment signals attention to tool-enabled workflows where code, tool use, and stepwise planning amplify potential harms.
Why this is different
OpenAI pairs this bounty with its GPT-5.5 system card and its Preparedness Framework, which indicates the company ran internal offline red-teaming and evaluation before release. The external bounty complements those efforts by bringing adversarial creativity from the broader security and biosecurity community, under controlled conditions. The NDA and invite model balance intelligence gathering with preventing premature disclosure of exploits or sensitive biological information.
What practitioners should note
Security teams, red-teamers, and model operators should treat this as a case study in operationalizing safe testing. Designing defenses requires thinking about reusable attack vectors, not only single-turn prompt injections. Monitoring strategies should look for patterns consistent with universal prompts, such as highly templated sequences or obfuscated instructions. For researchers, the bounty signals that access-controlled, NDA-backed vulnerability research pathways are viable models for responsible disclosure when biological risk is involved.
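As one illustration of that monitoring idea, the sketch below flags prompts that are near-duplicates of recently seen ones (a signature of a reused template) or that carry long base64-like payloads, a common obfuscation trick. The thresholds and heuristics are illustrative assumptions, not OpenAI's actual detectors.

```python
# Minimal sketch of monitoring for "universal prompt" patterns:
# highly templated sequences and obfuscated instructions.
# Thresholds and heuristics are assumed for illustration only.
import base64
import re
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.9  # assumed cutoff for "highly templated"

def looks_obfuscated(prompt: str) -> bool:
    """Heuristic: long base64-like runs often hide encoded instructions."""
    for token in re.findall(r"[A-Za-z0-9+/=]{40,}", prompt):
        try:
            base64.b64decode(token, validate=True)
            return True
        except Exception:
            continue
    return False

def is_templated(prompt: str, recent_prompts: list[str]) -> bool:
    """Flag prompts nearly identical to recently seen ones."""
    return any(
        SequenceMatcher(None, prompt, seen).ratio() > SIMILARITY_THRESHOLD
        for seen in recent_prompts
    )
```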
What to watch
Will the program find any universal jailbreaks, and if so, how will OpenAI remediate and publish learnings without revealing exploit details? Watch for follow-up changes to Codex Desktop deployment settings, moderation heuristics, or broader policy shifts informed by bounty outcomes. Also watch whether other platform operators adopt comparable invite-only bounties for domain-specific dual-use risks.
Bottom line
The GPT-5.5 Bio Bug Bounty is a targeted, high-friction approach to discover and mitigate scalable biological misuse vectors. It is meaningful for practitioners because it raises the bar on what constitutes a threat and shows how defensive programs can be structured when outputs could materially affect public safety.
Scoring rationale
The bounty is a notable, practice-changing safety initiative from a major model provider that focuses on dual-use biological risks. It affects how security teams test and mitigate prompt-based exploits, but it is not a paradigm-shifting release or regulation, so it rates as notable rather than industry-shaking.