In June 2025, about a dozen OpenAI employees sat with the same set of conversations open on their screens. The user on the other end of the chats was 17 years old. The conversations described scenarios involving gun violence in vivid, repetitive, escalating detail. OpenAI's automated systems had flagged the account as showing what the company's own internal language called "imminent risk of serious harm to others." The dozen staffers were debating one question: should the company tell the Royal Canadian Mounted Police?
Some said yes. Leadership said no. The account was banned. No call was made.
Eight months later, on February 10, 2026, that same user walked into Tumbler Ridge Secondary School in northern British Columbia and opened fire. Six people inside the school died: a 39-year-old educator, three 12-year-old female students, and two male students aged 12 and 13. Twenty-seven others were injured. Before driving to the school, the shooter, 18-year-old Jesse Van Rootselaar, had killed their mother and 11-year-old half-brother at home. It was the deadliest school shooting in Canada since the École Polytechnique massacre of 1989.
On April 23, 2026, Sam Altman wrote the community of Tumbler Ridge a letter. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," he wrote. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."
British Columbia Premier David Eby read the letter and posted it publicly, appending a verdict of his own: "the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."
What the Internal Threshold Actually Required
The phrase at the center of this story is "imminent and credible risk." Until last week, that was the bar OpenAI's internal guidelines required a flagged conversation to clear before the company would notify law enforcement. As CBC News reported, the standard required a user to discuss "the target, the means, and the timing" of a planned act of violence before any human reviewer was authorized to escalate.
The June 2025 conversations did not clear that bar. They described scenarios. They did not name a school, a date, or a rifle make. By OpenAI's own admission, the internal staff who reviewed the chats concluded the threshold had not been met. The account was banned, and the shooter, then 17, simply created a second ChatGPT account. OpenAI did not discover that second account until after the February shooting, when the shooter's name became public.
According to The Globe and Mail, OpenAI met with British Columbia officials the day after the February 10 shooting and did not mention the previously flagged account. The province learned about the account from outside reporting, not from OpenAI. Provincial officials have called that omission unacceptable.
The Letter That Took Two Months to Arrive
Altman's apology was the company's first formal acknowledgment of the case at the highest level. The letter, dated April 23 and made public by Premier Eby on April 24, apologized, expressed condolences, and described changes to OpenAI's reporting policies.
"My heart remains with the victims, their families, all the members of the community, and the province of British Columbia." — Sam Altman, OpenAI CEO (Letter to Tumbler Ridge community, April 23, 2026)
What the letter did not do was commit the company to operational changes that could be independently verified. Altman did not name an audit body or public reporting mechanism. He did not provide a number for how many similar accounts the company has flagged in the past year, how many were referred to law enforcement, or how many were banned without referral.
The Policy Changes OpenAI Made After the Shooting
Ann O'Leary, OpenAI's Vice President of Global Policy, briefed Canadian federal ministers on what has changed since February. She summarized the new approach in a single line later confirmed to Global News: the account banned in June 2025, she said, "would have been referred to police" under the new policy.
The specific changes:
- The reporting threshold no longer requires a user to discuss "the target, the means, and the timing" of planned violence. A pattern of detailed scenarios involving violence is now sufficient to escalate (sketched in code below).
- Mental health and behavioral experts now help OpenAI's trust-and-safety team assess flagged conversations.
- A direct point of contact has been established between OpenAI and the RCMP for cases where Canadian users may pose a risk.
- The lower threshold applies to all jurisdictions, not just Canada.
Each of these is a significant change. Each was adopted only after a school shooting.
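None of OpenAI's internal tooling is public, so the sketch below is strictly hypothetical (every name and threshold in it is invented), but it captures the structural difference between the two policies: the old test was conjunctive over a single conversation, the new one cumulative over an account's history.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One automated flag on a conversation (hypothetical schema)."""
    names_target: bool    # identifies a specific person or place
    names_means: bool     # identifies a weapon or method
    names_timing: bool    # identifies a date or window
    detail_score: float   # 0.0-1.0: how vivid and specific the scenario is

def old_policy_escalates(flag: Flag) -> bool:
    """Pre-2026 bar: "imminent and credible" -- the user must discuss
    the target, the means, AND the timing to clear it."""
    return flag.names_target and flag.names_means and flag.names_timing

def new_policy_escalates(history: list[Flag],
                         min_detailed_flags: int = 3,    # assumed value
                         detail_threshold: float = 0.8,  # assumed value
                         ) -> bool:
    """Post-Tumbler Ridge bar, as described publicly: a pattern of
    detailed violent scenarios suffices, with no named target, method,
    or date required. "Escalate" here means routing to human reviewers
    and mental-health experts, not an automatic police call."""
    detailed = [f for f in history if f.detail_score >= detail_threshold]
    return len(detailed) >= min_detailed_flags
```

On this sketch, the June 2025 account fails the first test (scenarios, but no school, no date, no rifle make) and passes the second. That gap is the entire substance of O'Leary's "would have been referred to police" claim.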
Why This Is a Practitioner Story, Not a Compliance Story
The temptation when reading a case like this is to file it under "ethics" and move on. Trust-and-safety teams at every other large language model company will not have that option. They are now on notice that the threshold OpenAI used until April 2026 is the threshold a province of Canada considers grossly insufficient. The same threshold likely lives, in some form, inside the policy documents of Anthropic, Google, Meta, and every smaller LLM provider with a consumer product.
Three concrete things change for ML practitioners and platform teams:
- The "imminent threat" standard is no longer defensible as the default. Any company arguing that user content must name a target, a method, and a date before triggering escalation will face the Tumbler Ridge precedent in the next incident inquiry. Internal policy documents need to be rewritten.
- Pattern detection is a regulatory expectation, not a feature. OpenAI's new policy explicitly accepts that a pattern of detailed violent scenarios is enough to justify a police referral. Moderation pipelines need pattern-based escalation, not just keyword- and threshold-based flagging (see the sketch after this list).
- Direct law enforcement channels matter. The single most operationally significant change OpenAI made was establishing a direct point of contact with the RCMP. Companies routing serious flagged content through generic tip lines should expect questions about why they do not have direct relationships with national police forces.
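At the pipeline level, pattern-based escalation means keeping per-account state across conversations instead of scoring each message in isolation. A minimal sketch, assuming a rolling 30-day window, a severity classifier, and a three-hit threshold (all invented numbers; OpenAI has published none of them):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 30 * 24 * 3600   # assumed 30-day rolling window
PATTERN_THRESHOLD = 3             # assumed: N high-severity hits in window
SEVERITY_CUTOFF = 0.8             # assumed classifier score for "detailed"

# Per-account history of (timestamp, severity) pairs from a violence classifier.
_hits: dict[str, deque] = defaultdict(deque)

def record_and_check(account_id: str, severity: float) -> bool:
    """Record one classifier hit for an account; return True when the
    account's recent history crosses the pattern threshold and should
    be routed to human review."""
    now = time.time()
    hits = _hits[account_id]
    hits.append((now, severity))
    # Evict hits that have aged out of the rolling window.
    while hits and hits[0][0] < now - WINDOW_SECONDS:
        hits.popleft()
    high_severity = sum(1 for _, s in hits if s >= SEVERITY_CUTOFF)
    return high_severity >= PATTERN_THRESHOLD
```

Note what even this sketch misses: the shooter's second account. State keyed on account_id is defeated by re-registration, which is why the pattern question and the identity-linkage question arrive together.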
For context: LDS has previously covered the Utah AI chatbot that prescribed psychiatric medications without a doctor and the attacks on Sam Altman's home. Tumbler Ridge is the first case in which the operational stakes of AI safety policy have produced a body count traceable to an internal threshold.
The Counterargument: What "Imminent" Has Always Meant
Not everyone outside OpenAI thinks the company's June 2025 decision was indefensible at the time it was made. Until recently, the law enforcement referral bar for US tech companies was deliberately narrow, partly for user privacy and partly because broader reporting generates false positives that overwhelm police agencies. As CBC's explainer on AI safety regulation noted, "imminent and credible" is the same threshold most large platforms have used for a decade, including Meta, Google, and Discord. A 17-year-old describing violent scenarios online is, statistically, far more often processing fiction or anxiety than planning an attack. OpenAI processes hundreds of millions of conversations daily; lowering the threshold means hiring trained mental-health professionals and creating police-liaison roles in every major jurisdiction.
There is also a jurisdictional point. OpenAI is a US company; the user was Canadian. No formal legal architecture existed for cross-border tips from a US LLM provider to a Canadian police force before this case. Even for the staffers who wanted to call the RCMP in June 2025, the channel would have been an ad hoc email, not a structured intake. None of that excuses the outcome. It does explain why the outcome did not surprise people who do trust-and-safety work for a living.
The Bottom Line
Eight people are dead. The shooter was on a list inside one of the most sophisticated AI companies in the world for eight months, and the company's own staff wanted to escalate. The thing that prevented escalation was a threshold drafted at a different moment in the AI industry's history, when the question was how to protect users from over-policing rather than how to protect potential victims from a user the system already considered dangerous.
OpenAI changed its threshold this month. Anthropic, Google, Meta, and every other LLM provider will now decide whether to wait for their own Tumbler Ridge before doing the same. The companies that wait will be writing letters like Altman's to grieving communities. The companies that do not will be answering questions about false positives, civil liberties, and the cost of mental-health professionals reading flagged conversations.
Premier Eby's verdict on Altman's apology is the line that will live longest from this case: "necessary, and yet grossly insufficient." It applies to more than the letter. It applies to the entire safety infrastructure of the AI industry circa 2026. The infrastructure is necessary. By the standard a Canadian premier just set, it is grossly insufficient.
The question now in front of every trust-and-safety lead at every LLM company is the one Sam Altman did not answer: how many other accounts has your company banned without a police call, and on what threshold did you decide?
Sources
- OpenAI's Sam Altman writes apology to community of Tumbler Ridge (CBC News, April 24, 2026)
- OpenAI CEO Sam Altman "deeply sorry" for failing to alert law enforcement to Canada school shooter's ChatGPT account (CBS News, April 24, 2026)
- OpenAI had banned account of Tumbler Ridge, B.C., shooter months before tragedy (CBC News, April 2026)
- Tumbler Ridge shooter had 2nd ChatGPT account despite being banned, OpenAI says (CBC News, April 2026)
- Tumbler Ridge shooter's ChatGPT activity flagged internally 7 months before tragedy (Global News, April 2026)
- OpenAI says Tumbler Ridge shooter would be flagged to police today (Global News, April 2026)
- OpenAI says recent policy changes would have flagged Tumbler Ridge shooter's messages to police (The Globe and Mail, April 2026)
- OpenAI did not mention Tumbler Ridge shooter's posts in meeting with B.C. officials day after mass shooting: province (The Globe and Mail, April 2026)
- OpenAI CEO apologizes to Tumbler Ridge community (TechCrunch, April 25, 2026)
- OpenAI's Sam Altman apologizes to Canadian community after failing to flag mass shooter's conversations with its AI chatbot (CNN, April 24, 2026)
- When should AI companies alert police? What the Tumbler Ridge tragedy reveals about regulating AI (CBC News, April 2026)
- 2026 Tumbler Ridge shooting (Wikipedia, accessed April 28, 2026)
If you or someone you know is struggling, support is available: in Canada and the United States, call or text 988.