Greg Brockman Recounts 72 Hours That Nearly Killed OpenAI

Greg Brockman, co-founder and president of OpenAI, gives a first-hand account of the internal crisis that followed Sam Altman's firing and nearly ended the organization. Brockman describes 72 hours of board calls, his resignation, the creation of a contingency entity dubbed "Phoenix" at Altman's house, and the decisive moment triggered by Ilya Sutskever's public reaction. He traces OpenAI's strategic shift from a pure nonprofit to a compute-constrained hybrid structure, recalls the Napa offsite that set a decade-long three-step technical plan, and addresses product trade-offs such as removing visible reasoning traces from ChatGPT. The conversation frames governance fragility, access to compute, and operational resilience as central determinants of who will build and control future AGI.
What happened
Greg Brockman, co-founder and president of OpenAI, walked through the near-collapse that unfolded inside the company in the 72 hours after the board fired Sam Altman. He describes receiving the board call, resigning the same day, and helping design a contingency entity named "Phoenix" the next morning at Altman's house. Ilya Sutskever's public response then shifted the course of events and forced a rapid organizational reset.
Technical details
Brockman credits the original Napa offsite with producing the three-step technical plan OpenAI has followed for a decade, and explains why the organization abandoned a pure nonprofit structure. He confirms that an increasing fraction of OpenAI's codebase is generated by AI, saying it is "hard to know what percent is not". He also explains the reasoning behind product choices such as removing visible reasoning traces from ChatGPT.
- Breakthrough moments at OpenAI and the origins of the three-step plan
- The governance shock around Sam Altman's firing and Brockman's immediate resignation
- The ad hoc formation of "Phoenix" as a continuity plan
- The role of Ilya Sutskever's tweet in determining outcomes
- Strategic trade-offs like hiding reasoning traces and compute-driven access to AGI
Context and significance
This narrative exposes how governance, board dynamics, and compute economics can become single points of failure for leading AI labs. The episode links company survival to rapid contingency planning, hybrid corporate structures, and product decisions driven by safety and competitive pressure. For practitioners, it highlights how operational design choices (who controls compute, how models are deployed, what transparency is sacrificed for safety) shape model capabilities and the broader trajectory toward AGI.
What to watch
Governance reform at major AI labs, industry standards for contingency planning, and how compute allocation and product transparency evolve as core levers in the race for AGI.
Scoring Rationale
First-hand revelations about an organizational near-collapse at OpenAI matter for everyone building or depending on frontier models. The episode illuminates governance, continuity planning, and compute centralization risks that directly affect model development and deployment decisions.