Mira Murati Testifies Sam Altman Misled Her

Mira Murati, OpenAI's former CTO, testified under oath that CEO Sam Altman misled her about safety review for a new model, according to The Verge. In a video deposition shown during the Musk v. Altman trial, Murati said Altman claimed OpenAI's legal team had determined the model did not need review by the company's deployment safety board; asked whether Altman was telling the truth, she replied, "No." She also testified that she checked with Jason Kwon, now chief strategy officer, found "misalignment" between Kwon's and Altman's accounts, and ensured the model went through the board, per The Verge. MIT Technology Review frames the testimony as part of a broader week-one record in the trial between Elon Musk and Sam Altman over OpenAI's restructuring.
What happened
Mira Murati, OpenAI's former CTO, testified under oath that Sam Altman misled her about whether a new model needed deployment-board review, according to The Verge. The testimony was shown as a video deposition during the Musk v. Altman trial. In the deposition, Murati said Altman had told her OpenAI's legal department determined the model did not need review; asked whether Altman was telling the truth, she answered, "No." Murati said she checked with Jason Kwon, found "misalignment" between what Kwon and Altman had said, and took steps to ensure the model went through the deployment safety board. She also described managerial friction, saying her criticism "is completely management related" and that she had "an incredibly hard job" in a complex organisation, per The Verge.
Editorial analysis - technical context
Industry-pattern observations: executive disagreements about whether a model must pass formal safety review are a common flashpoint in large AI organisations as products move from research prototypes toward deployment. Such disputes typically revolve around risk tolerance, contractual obligations, and differing interpretations of internal review thresholds. For practitioners, these dynamics often create friction around release gating, incident-response playbooks, and cross-functional signoff processes.
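The release-gating and signoff dynamics described above can be sketched as a simple auditable check. Everything here (the role names, the `ReleaseRequest` structure, the `gate_release` helper) is a hypothetical illustration of the general pattern, not any lab's actual process:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a release gate that refuses deployment unless every
# required sign-off (e.g. a deployment safety board) is recorded with a
# named approver, so the decision trail is auditable. Illustrative only.

REQUIRED_SIGNOFFS = {"safety_board", "legal", "security"}


@dataclass
class ReleaseRequest:
    model_name: str
    signoffs: dict = field(default_factory=dict)  # role -> approver id

    def record_signoff(self, role: str, approver: str) -> None:
        self.signoffs[role] = approver

    def missing_signoffs(self) -> set:
        return REQUIRED_SIGNOFFS - self.signoffs.keys()


def gate_release(request: ReleaseRequest) -> bool:
    """Raise if any required sign-off is absent; otherwise allow deployment."""
    missing = request.missing_signoffs()
    if missing:
        raise PermissionError(
            f"Cannot deploy {request.model_name}: missing sign-offs {sorted(missing)}"
        )
    return True
```

The design choice this illustrates is that the gate fails closed: a model cannot ship on a verbal assurance that review is unnecessary, because the absence of a recorded approver blocks the release by default.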
Context and significance
Industry context
MIT Technology Review frames the testimony as part of a larger courtroom contest in which Elon Musk alleges OpenAI's founders converted a nonprofit commitment into a for-profit structure. The trial record, including high-profile depositions, is being reported as potentially consequential for OpenAI's governance narrative and public perception, per MIT Technology Review. For practitioners and vendors, prominent litigation that surfaces internal disagreements about safety procedures can increase scrutiny of release controls and compliance documentation across the sector.
What to watch
For observers: monitor subsequent courtroom filings and testimonies for corroborating documentary evidence (emails, memos, legal signoffs) that show how deployment decisions were documented. For practitioners: watch whether industry reporting spurs renewed emphasis on formal deployment boards, auditable checklists, and legal signoff practices across major labs. For investors and partners: watch whether the trial record affects governance perceptions that could influence collaborations or contractual risk assessments.
Scoring rationale
The testimony is notable because it publicly documents executive disagreement about safety review procedures at a major AI lab, which matters to governance, compliance, and partner risk assessments. It is not a technical breakthrough; its impact sits in the 'notable' governance and business domain.