Judge Rules ChatGPT-Assisted NEH Grant Terminations Unconstitutional

A federal judge ruled that the abrupt terminations of National Endowment for the Humanities (NEH) grants were unlawful and unconstitutional, and criticized the government's use of AI in the review process. RedState reports that U.S. District Judge Colleen McMahon issued the opinion in consolidated suits challenging the April 2025 cancellations. Court documents and depositions released in the litigation show that Department of Government Efficiency (DOGE) staff used ChatGPT with prompts such as "Does the following relate at all to DEI?" to flag grants; Inside Higher Ed and AOL report that the review fed decisions leading to roughly $100 million in canceled grants and the termination of 65% of NEH staff. Plaintiffs include academic associations seeking reinstatement of funding. Editorial analysis: The ruling highlights growing judicial scrutiny of automated or AI-assisted government decision making.
What happened
RedState reports that U.S. District Judge Colleen McMahon issued an opinion in consolidated suits challenging the April 2025 cancellations of NEH grants. Court filings and reporting by Inside Higher Ed and AOL show plaintiffs seek relief after the administration and the Department of Government Efficiency, known as DOGE, oversaw mass terminations. Discovery materials and depositions reportedly reveal that DOGE staff used ChatGPT to screen grant descriptions, including a prompt that asked, "Does the following relate at all to DEI? Respond factually in less than 120 characters," per Inside Higher Ed and AOL. Those sources and RedState report that the actions led to roughly $100 million in canceled grants and the termination of 65% of NEH staff. RedState quotes the opinion as noting the court's concern about the "hallucinatory propensities of ChatGPT" in the agency review process.
Technical details
Editorial analysis: The court materials described in reporting show the AI use was limited to short-form classification prompts applied to terse spreadsheet descriptions, rather than human-led, context-rich review. Inside Higher Ed and AOL report depositions revealing that non-expert DOGE staff applied those outputs when deciding whether grants fit the administration's DEI and related executive-order criteria.
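To make the reported pattern concrete, the screening described above can be sketched as a loop that applies one fixed prompt to each terse description and acts on the raw reply. This is a hypothetical reconstruction, not code from the case record: only the quoted prompt text comes from the reporting, and `query_model` is a stand-in for the actual ChatGPT call.

```python
# Hypothetical sketch of the short-form classification loop described in
# reporting: one fixed prompt applied to terse spreadsheet descriptions,
# with no additional grant context and no human review step.
# query_model is a stand-in; reporting indicates ChatGPT was the model used.

PROMPT_TEMPLATE = (
    "Does the following relate at all to DEI? "
    "Respond factually in less than 120 characters.\n\n{description}"
)

def build_prompt(description: str) -> str:
    """Wrap a one-line spreadsheet description in the fixed screening prompt."""
    return PROMPT_TEMPLATE.format(description=description)

def screen_grants(descriptions, query_model):
    """Flag each grant based solely on the model's reply to a terse description."""
    flagged = []
    for desc in descriptions:
        reply = query_model(build_prompt(desc))
        # Naive check on the raw reply -- no context, no documented override.
        if reply.strip().lower().startswith("yes"):
            flagged.append(desc)
    return flagged

# Demonstration with a canned stand-in "model" (no API call):
stub = lambda prompt: "Yes" if "diversity" in prompt.lower() else "No"
print(screen_grants(
    ["Oral histories of Appalachian music",
     "Workshop on diversity in archives"],
    stub))
# → ['Workshop on diversity in archives']
```

The sketch illustrates the court's core objection: the only input to the decision is a one-line description, so the model has no basis for context-sensitive judgment and its output flows directly into the flagging step.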
Context and significance
Editorial analysis: Courts have begun treating algorithmic tools as relevant evidence when government decisions affect constitutional rights. This decision, as reported by RedState and public-interest coverage, frames the use of a generative large language model in a high-stakes entitlement decision as legally consequential because the model's outputs were used with limited oversight and minimal documentary context. For practitioners, that pattern implies increased legal and procedural risk where automated or AI-assisted classification directly influences individual or organizational rights.
Plaintiffs and claims
RedState and Inside Higher Ed report that plaintiffs include the American Council of Learned Societies, the American Historical Association, and the Modern Language Association, among others. Those plaintiffs argue the terminations violated the First Amendment by targeting viewpoint-linked grants and violated other constitutional protections, per Inside Higher Ed.
Court language and evidentiary points
RedState reproduces the court's concern that ChatGPT outputs were generated "without any additional context beyond the cursory spreadsheet descriptions themselves," and that repeated prompts could have induced the model to supply rationales aligned with the users' perceived aims. Reporting shows that depositions and a trove of emails, spreadsheets, and video testimony were central to the court record.
What to watch
Editorial analysis: Observers should track whether the court issues an injunction restoring specific grants or ordering remedial procedures, and whether this opinion is cited in other challenges to automated or semi-automated government decision processes. Industry observers will also monitor whether agencies tighten documentation and human-in-the-loop safeguards when using generative models for screening or classification.
For practitioners
Editorial analysis: Data scientists and policy teams working with public-sector or regulated clients should note the evidentiary importance of prompt logs, contextual metadata, and human review steps. Reporting in this case emphasizes how terse inputs and a lack of documented human oversight were central to the court's critique.
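One way to operationalize those safeguards is an append-only audit record for each AI-assisted screening step. The sketch below is a minimal, hypothetical illustration under this article's framing, not a description of any agency's actual system; the field names and the `grant_id` are invented for the example.

```python
# Minimal sketch (hypothetical) of the audit trail the reporting suggests was
# missing: log each model call with its prompt, fuller input context, the
# model's reply, and a named human reviewer's decision.
import json
import hashlib
from datetime import datetime, timezone

def log_review(prompt: str, context: dict, model_reply: str,
               reviewer: str, decision: str) -> dict:
    """Build one audit record for an AI-assisted screening step."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "context": context,          # fuller grant metadata, not a one-liner
        "model_reply": model_reply,
        "human_reviewer": reviewer,  # named human in the loop
        "human_decision": decision,  # may override the model's output
    }
    # Content hash makes after-the-fact tampering detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

entry = log_review(
    "Does this grant relate to DEI?",
    {"grant_id": "H-0001", "abstract": "Full project abstract ..."},
    "No", "j.smith", "retain")
print(entry["human_decision"])
# → retain
```

Keeping the prompt, the full context supplied to the model, and the human decision in one record addresses the two gaps the court emphasized: terse inputs and undocumented oversight.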
Bottom line
Editorial analysis: Multiple outlets including RedState, Inside Higher Ed, and AOL document a legal rebuke focused on the procedural use of ChatGPT in government grant reviews. The ruling increases scrutiny on how generative AI is operationalized in agency decision flows where constitutional or statutory rights are at stake.
Scoring Rationale
The story documents a federal judge criticizing the procedural use of generative AI in a government program and ties that use to large, constitutionally contested grant cancellations. That combination makes it a notable legal precedent for AI use in public-sector decision making, with direct relevance for practitioners implementing auditable AI systems.