OpenAI Adds Advanced Account Security to ChatGPT

The Next Web and BigGo report that OpenAI launched Advanced Account Security, an opt-in protection for ChatGPT and Codex accounts. Per reporting, the feature replaces passwords with passkeys or physical hardware security keys, requires users to register two credentials (two passkeys, two hardware keys, or one of each), and disables email and SMS account recovery, making accounts unrecoverable if both credentials and the issued recovery key are lost (The Next Web; BigGo). Both outlets also report that OpenAI partnered with Yubico to sell co-branded YubiKey two-packs for $68. The feature automatically opts protected accounts out of model-training data collection, and The Next Web reports it will be mandatory for participants in the Trusted Access for Cyber program starting June 1, 2026.
What happened
According to The Next Web and BigGo, OpenAI announced Advanced Account Security, an opt-in protection mode for ChatGPT and Codex accounts that eliminates traditional passwords and conventional recovery channels. Reporting states the feature requires users to register two credentials, chosen from device-stored passkeys, FIDO2-compatible hardware tokens such as YubiKey, or a combination of both. The Next Web and BigGo report that once enabled, password-based login is permanently disabled, email and SMS recovery are blocked, and OpenAI support cannot restore access if both credentials are lost; a recovery key is issued during setup and loss of that key renders the account unrecoverable. The Next Web reports a co-branded bundle of YubiKey two-packs will be available for $68, and both outlets report the feature is available to all users, including free-tier accounts. BigGo and The Next Web report that participation in OpenAI's Trusted Access for Cyber program will require Advanced Account Security starting June 1, 2026.
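The policy described above (exactly two registered credentials before activation, no password or email/SMS recovery afterward, and a single recovery key issued at setup) can be sketched as a small state model. The class and method names below are illustrative, not OpenAI's implementation; this is a minimal sketch of the reported rules, assuming the recovery key is a random token shown once at enablement.

```python
import secrets

class AdvancedSecurityAccount:
    """Toy model of the reported policy: two registered credentials
    (passkey or hardware key), no password/email/SMS recovery, and a
    single recovery key issued at setup. Illustrative only."""

    def __init__(self):
        self.credentials = set()
        self.recovery_key = None
        self.enabled = False

    def register(self, credential_id: str, kind: str):
        # Reporting says credentials may be passkeys, hardware keys, or a mix.
        assert kind in {"passkey", "hardware_key"}
        self.credentials.add(credential_id)

    def enable(self) -> str:
        # Per reporting, two credentials must be registered before enabling.
        if len(self.credentials) < 2:
            raise ValueError("two credentials required before enabling")
        self.enabled = True  # password, email, and SMS recovery now disabled
        self.recovery_key = secrets.token_hex(16)
        return self.recovery_key  # shown once; the user must store it offline

    def can_sign_in(self, credential_id: str) -> bool:
        return self.enabled and credential_id in self.credentials

    def recover(self, key: str) -> bool:
        # Support cannot restore access; only the exact recovery key works.
        return self.enabled and key == self.recovery_key
```

Losing both credentials and the recovery key leaves no path back into this model, which mirrors the unrecoverability trade-off the outlets describe.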
Technical details
The Next Web and BigGo describe the feature as using cryptographic, FIDO2-style authentication: each credential generates a unique key pair whose private half remains on the user's device or hardware token, so there are no passwords, one-time codes, or recovery emails for attackers to intercept. The Next Web notes the design trade-off is explicit: account recovery is blocked to reduce the social-engineering attack surface. Reporting also says the feature shortens session durations, issues alerts for each access, and, by default for protected accounts, opts conversations out of model-training datasets.
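The challenge-response flow behind this style of authentication can be sketched with textbook RSA. Real passkeys use standardized WebAuthn/CTAP messages and hardened key storage, and the tiny key below is deliberately insecure; the point of the sketch is only that the server stores the public key, sends a fresh random challenge per login, and the private key never leaves the authenticator.

```python
import hashlib
import secrets

# Toy RSA key pair (textbook-sized, insecure) standing in for the
# credential's key pair created at enrollment. The server stores only
# the public part (n, e); the private exponent d stays on the "device".
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent, never transmitted

def sign(challenge: bytes) -> int:
    """Device side: sign a hash of the server's random challenge."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature using only the public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(32)  # fresh per login; prevents replay
signature = sign(challenge)
```

Because each login signs a fresh challenge, a captured signature is useless for a later session, which is why this design resists the phishing and interception vectors the coverage mentions.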
Editorial analysis - technical context: As companies shift sensitive workflows into conversational AI accounts, those accounts become more valuable targets for attackers. Industry-pattern observations: adopting passkeys and FIDO2 hardware keys is a standard mitigation against credential phishing and SIM-swapping, because private keys do not transit networks and cannot be replayed by attackers. For practitioners, the combination of two registered credentials plus a single non-recoverable recovery key is a classic high-assurance model used in high-security banking and government deployments; it trades recoverability for stronger protection against account takeover.
Context and significance
Industry context
Public reporting frames this announcement as part of a broader trend where major consumer-facing AI services add enterprise-grade identity protections for high-risk users. The Next Web highlights the target audience cited in coverage as journalists, dissidents, researchers, and elected officials, while BigGo and other outlets link the change to rising phishing and credential-exposure incidents involving chat accounts. From a data-governance perspective, the automatic opt-out of model-training for protected accounts alters the platform's data collection surface for a defined user subset, as reported by The Next Web and BigGo.
Editorial analysis: For security teams and platform engineers, the practical implications include changes to account lifecycle management, support workflows, and onboarding. Industry-pattern observations: organizations that enforce hardware-backed authentication typically need documented enrollment procedures, secure offline handling of recovery material, and clear user education because lost credentials often lead to unrecoverable accounts. Observers should note that disabling support-assisted recovery reduces social-engineering vectors but increases operational friction and potential help-desk escalations outside the vendor's control.
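One common pattern for the "secure offline handling of recovery material" mentioned above is to show the raw recovery key to the user exactly once and persist only a salted, stretched hash server-side, so a database leak cannot expose usable keys. This is a generic sketch of that pattern, not OpenAI's scheme; the key format and iteration count are illustrative choices.

```python
import hashlib
import hmac
import secrets

def issue_recovery_key() -> tuple[str, bytes, bytes]:
    """Generate a recovery key; return (key shown to the user once,
    salt, digest to store). The raw key is never persisted."""
    key = "-".join(secrets.token_hex(4) for _ in range(4))  # illustrative format
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", key.encode(), salt, 100_000)
    return key, salt, digest

def check_recovery_key(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the stretched hash and compare in constant time."""
    cand = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(cand, digest)
```

Pairing this with documented enrollment steps and offline storage of the printed key (e.g. in a safe) is the kind of procedure organizations typically need once support-assisted recovery is off the table.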
What to watch
- Adoption metrics among high-risk users and enterprise customers, as reported by OpenAI or independent studies.
- Pricing and availability of FIDO2 hardware bundles beyond the initial $68 co-branded offering reported by The Next Web.
- Any formal guidance or integration patterns for identity providers and single-sign-on flows, including federation with enterprise directory services.
- Community and researcher feedback on how the default opt-out from model-training for protected accounts is implemented and audited.
Editorial analysis: Observers following the sector will watch whether other major AI platforms adopt comparable mandatory options for high-risk programs, and how incident response and support models evolve to accommodate unrecoverable-account risk without expanding social-engineering exposure.
Scoring Rationale
The change is a notable security development for AI platform users, especially high-risk individuals, and it has practical implications for identity management and data governance. It is not a frontier-model event, so its impact is substantial but not industry-shaking.
