OpenAI Backs Illinois Bill Limiting AI Liability

OpenAI publicly supports Illinois Senate Bill SB 3444, which would limit when AI developers can be sued for so-called "critical harms." The measure defines a frontier model as any system trained with more than $100 million in compute and restricts liability unless a company intentionally or recklessly caused the harm and failed to publish required safety, security, and transparency reports. The bill's "critical harm" thresholds include 100+ deaths or serious injuries, $1 billion in property damage, or aiding development of chemical/biological/radiological/nuclear weapons. OpenAI frames its support as a step toward consistent national standards and urges federal preemption; critics warn the law would raise plaintiffs' burden and could insulate large labs from accountability. The proposal intensifies an ongoing industry push to shape AI liability before federal action arrives.
What happened
OpenAI publicly testified in favor of Illinois Senate Bill SB 3444, supporting a legal framework that limits when AI developers can be held liable for extreme incidents it labels "critical harms." The bill ties coverage to a frontier model definition (any model trained with more than $100 million in compute) and shields companies from suits unless they acted intentionally or recklessly and failed to publish mandated safety, security, and transparency reports. The bill's thresholds for critical harms include 100 or more deaths/serious injuries, at least $1 billion in property damage, or assisting a bad actor to develop chemical, biological, radiological, or nuclear weapons.
Technical details
SB 3444 establishes a liability regime that conditions protection on two main requirements: publication of safety/transparency documentation and absence of intentional or reckless conduct. Key definitional and procedural elements practitioners must note:
- Frontier model threshold: models trained with more than $100 million in computational costs, a metric aimed at capturing systems built by the largest labs.
- Critical harms list: 100+ deaths/serious injuries, $1B+ in property damage, facilitation of CBRN weapons development, or conduct by an AI system that, if performed by a human, would constitute a crime causing such outcomes.
- Reporting requirement: public safety, security, and transparency reports are prerequisites for liability protection; the bill does not prescribe a fixed reporting format or audit standard.
"We support approaches like this because they focus on what matters most: reducing the risk of serious harm... while still allowing this technology to get into the hands of the people and businesses, small and big, of Illinois," said Jamie Radice, an OpenAI spokesperson.
Context and significance
OpenAI's endorsement is part of a coordinated industry push to set liability guardrails before a major incident triggers broader criminal or civil exposure. The compute-based definition of a frontier model deliberately targets large-scale systems developed by OpenAI, Google, Anthropic, xAI, and Meta, concentrating legal protection on the major cloud/compute consumers. Supporters frame the measure as preventing a disorganized "patchwork" of state rules and nudging Congress toward a uniform federal regime; the bill itself would cease to apply if overlapping federal legislation passes. Critics, including policy experts and victims' advocates, argue the bill raises plaintiffs' proof burdens and risks insulating labs from accountability, thereby weakening market and legal incentives to design safer systems.
This move follows heightened enforcement interest, including recent state-level probes, and substantial industry lobbying: major AI firms have invested heavily in shaping policy at both state and federal levels, and OpenAI plans a Washington presence in 2026. The legislation's reliance on a monetary compute threshold as the marker of a frontier model is notable because it ties legal status to proprietary training budgets, which may be opaque and contested in discovery.
What to watch
Whether Illinois passes SB 3444, how courts interpret the $100 million compute yardstick and the sufficiency of public reports, and whether Congress responds with preemptive federal legislation. Track emerging litigation strategies that could challenge compute-based definitions or argue that report publication is insufficient to establish due care.
Scoring Rationale
This legislation reshapes legal risk models for AI developers and may alter incentives for safety, transparency, and engineering trade-offs. OpenAI's backing increases the bill's legislative momentum and signals coordinated industry strategy; timing and potential federal preemption make this material for practitioners and legal teams.