Court Rules DOGE's ChatGPT DEI Cuts Unconstitutional

U.S. District Judge Colleen McMahon ruled that the Department of Government Efficiency's cancellation of more than 1,400 National Endowment for the Humanities grants, affecting over $100 million in appropriated funds, violated the First Amendment and equal-protection principles, according to The New York Times and The Washington Post. Court filings and depositions show DOGE staffers Justin Fox and Nate Cavanaugh used ChatGPT to screen grant proposals for ties to "DEI" without giving the model a documented definition of the term, The Washington Post and Fortune report. Plaintiffs include the American Council of Learned Societies, the American Historical Association, and the Modern Language Association, per court filings and press coverage.
What happened
U.S. District Judge Colleen McMahon ruled that the Trump administration's Department of Government Efficiency (DOGE) unlawfully cancelled more than 1,400 previously approved National Endowment for the Humanities (NEH) grants and that the actions violated the First Amendment and the equal-protection component of the Fifth Amendment, according to reporting in The New York Times and The Washington Post. The ruling ordered rescission of the cuts and described the agency's actions as creating a broad "chilling effect," language quoted in The New York Times.
Court filings and depositions made public during the litigation show that two DOGE staffers, Justin Fox and Nate Cavanaugh, used ChatGPT to screen grant descriptions for ties to diversity, equity, and inclusion, but did not supply the model with a formal definition of "DEI," The Washington Post reports. Fortune's reporting and court exhibits document a specific prompt used by the staffers and a recorded ChatGPT response that flagged a $349,000 museum HVAC grant as "#DEI," leading to the termination of that award.
Technical details
Editorial analysis - technical context: Public reporting indicates the process relied on short, instruction-style prompts to ChatGPT that asked for a binary DEI judgment in under 120 characters, according to Fortune's account of court exhibits. The record presented in court includes the exact prompt language and the model's terse responses, which were then used as the basis for mass terminations. The filings show the AI interaction was one element in a human-led process; depositions described how staffers incorporated the model outputs into selection and termination decisions, per The Washington Post.
Industry-pattern observations: Relying on generative models for classification without documented definitions, auditing, or human-in-the-loop protocols increases legal and operational risk in regulated contexts. Models like ChatGPT are sensitive to prompt wording and can produce concise labels that lack provenance or rationale; journalists and court filings in this case used those properties as central evidence of an arbitrary process.
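The risks described above can be made concrete in a short sketch: a screening call that records the operative definition, the exact prompt, the raw model output, and the resulting label together, so every decision carries its own provenance. This is a generic illustration, not the process from the filings; `call_model` is a hypothetical stand-in for any LLM API, and the definition text is a placeholder.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# A documented, versioned definition -- the element the court record
# says was missing from the screening process. Placeholder text.
DEI_DEFINITION_V1 = (
    "A grant 'relates to DEI' only if its stated aims explicitly "
    "concern diversity, equity, or inclusion programming."
)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.
    A real deployment would substitute an actual client here;
    this stub returns a canned answer so the sketch is runnable."""
    return "no - the description concerns building infrastructure"

@dataclass
class ScreeningRecord:
    grant_id: str
    definition_version: str
    prompt: str
    raw_output: str
    label: str
    timestamp: str

def screen_grant(grant_id: str, description: str) -> ScreeningRecord:
    # The definition travels inside the prompt, and the whole prompt
    # is preserved in the record rather than discarded after the call.
    prompt = (
        f"Definition (v1): {DEI_DEFINITION_V1}\n"
        f"Grant description: {description}\n"
        "Answer 'yes' or 'no' and give a one-sentence rationale."
    )
    raw = call_model(prompt)
    label = "yes" if raw.lower().startswith("yes") else "no"
    return ScreeningRecord(
        grant_id=grant_id,
        definition_version="v1",
        prompt=prompt,
        raw_output=raw,
        label=label,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = screen_grant("NEH-12345", "Replace the HVAC system in a museum wing.")
print(json.dumps(asdict(record), indent=2))
```

The point of the sketch is that the label alone is never the artifact of record: the definition version, prompt, and raw output are what make the decision auditable after the fact.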
Context and significance
Editorial analysis: The ruling signals that judicial scrutiny of automated or AI-assisted decision-making in the public sector is material and enforceable, especially where constitutional speech and equal-protection concerns arise. The plaintiffs include the American Council of Learned Societies, the American Historical Association, and the Modern Language Association, per court filings and news coverage, and the suit argued that the mass cancellations were viewpoint-based and arbitrary. The decision addresses both the substance of the cuts and the process by which they were executed, with the court characterizing the conduct as viewpoint-discriminatory and harmful to ongoing scholarship, according to The New York Times.
For practitioners: Organizations building or deploying LLM-driven classification or screening systems for public-facing or regulatory decisions should take note that courts may evaluate not only outcomes but also process transparency, definitional clarity, and recordkeeping. The courtroom record in this case centered on the prompt used, the lack of a documented definition of DEI, and the downstream consequences for funded projects, as reported by The Washington Post and Fortune.
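One generic pattern that addresses the recordkeeping concern is an append-only, hash-chained decision log: each entry incorporates the hash of the previous entry, so decisions can later be produced in order and any after-the-fact edit is detectable. This is a minimal sketch of the technique, not anything described in the filings.

```python
import hashlib
import json

class DecisionLog:
    """Append-only log in which each entry hashes the previous entry,
    making retroactive tampering with any decision detectable."""

    def __init__(self):
        self.entries = []

    def append(self, decision: dict) -> dict:
        # Genesis entry chains from a fixed all-zero hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(decision, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        entry = {"decision": decision, "prev_hash": prev_hash, "hash": entry_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; any edited decision breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["decision"], sort_keys=True)
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.append({"grant_id": "NEH-1", "label": "no"})
log.append({"grant_id": "NEH-2", "label": "yes"})
print(log.verify())  # prints True while the chain is intact
```

A design like this costs little at write time but gives an organization a defensible answer to exactly the process questions the court record in this case turned on: what was decided, in what order, and on what recorded basis.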
What to watch
Observers should track whether the court's order spawns further litigation over government use of foundation models, and whether agencies update internal guidance about AI use, audit trails, or human review standards. Watch for follow-on filings from the plaintiffs' counsel and administrative responses from NEH or DOGE in regulatory or congressional forums, as reported updates will clarify whether the rescission is implemented and how agencies will document AI-assisted decisions. Also monitor whether other cases cite this decision when contesting algorithmic or AI-influenced administrative actions.
Scoring Rationale
The ruling directly affects how generative models can be used in government decision-making and establishes a legal precedent practitioners must consider. It is important for teams building AI-driven policy, compliance, or funding workflows, but it is not a frontier-model release.