Judge Finds DOGE Used ChatGPT to Cancel Grants

Reuters and other outlets report that US District Judge Colleen McMahon ruled the Department of Government Efficiency (DOGE) unlawfully terminated more than 1,400 National Endowment for the Humanities grants, representing over $100 million in appropriated funds (Reuters). Court filings and reporting in The New York Times, The Verge, and PCMag show DOGE staff submitted short grant descriptions to ChatGPT with prompts such as, "Does the following relate at all to D.E.I.? Respond factually in less than 120 characters," and used the model's brief answers to flag projects for cancellation (NYT; PCMag; The Verge). The filings include testimony from DOGE staffer Justin Fox that he used ChatGPT to "pull out anything related to DEI" (court filing cited by The Verge and NYT). The judge found the terminations amounted to viewpoint discrimination and therefore violated the First and Fifth Amendments (Reuters).
What happened
US District Judge Colleen McMahon ruled that the Department of Government Efficiency's cancellations of previously approved National Endowment for the Humanities grants were unlawful and discriminatory, overturning the termination of more than 1,400 grants worth over $100 million, according to Reuters. Court filings and reporting by The New York Times, The Verge, and PCMag document that DOGE employees submitted brief grant descriptions to ChatGPT as part of a screening process. The prompt reported in filings was, "Does the following relate at all to D.E.I.? Respond factually in less than 120 characters," and the chatbot responses were used to mark grants for termination (NYT; PCMag; The Verge).
Technical details
Editorial analysis: The public record reported in The New York Times and other outlets shows the ChatGPT queries were supplied with minimal context, often only the one- or two-line project summaries pulled into a spreadsheet. Reporting notes that DOGE staff did not give the model a definition of "DEI" and did not instruct ChatGPT to evaluate "the purpose, methodology, or scholarly substance of a project" (PCMag; The Verge; NYT). The court filing cited by Reuters and the publicly available memorandum opinion add that many model-generated determinations offered no reasoning beyond the sparse descriptions they were given.
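Based on the prompt quoted in the filings, the reported screening loop can be sketched roughly as follows. Only the quoted prompt text comes from the record; the function names, the spreadsheet-style input, and the keyword check on the response are illustrative assumptions, and the model call is stubbed because the actual pipeline is not public.

```python
# Illustrative reconstruction of the reported screening pattern.
# Everything except the quoted prompt text is an assumption.

PROMPT_TEMPLATE = (
    "Does the following relate at all to D.E.I.? "
    "Respond factually in less than 120 characters.\n\n{summary}"
)

def build_prompt(summary: str) -> str:
    """Wrap a one- or two-line grant summary in the reported prompt."""
    return PROMPT_TEMPLATE.format(summary=summary)

def screen_grants(summaries: dict, ask_model) -> list:
    """Flag grant IDs whose model response suggests a DEI relation.

    `ask_model` stands in for a chat-completion call; the decision
    logic DOGE staff applied to responses is not in the public record.
    """
    flagged = []
    for grant_id, summary in summaries.items():
        answer = ask_model(build_prompt(summary))
        if "yes" in answer.lower():  # naive keyword check (assumed)
            flagged.append(grant_id)
    return flagged
```

The failure mode the court record describes is visible in the sketch itself: the model sees only a sparse summary, receives no definition of "DEI," and has no room for reasoning within a 120-character limit.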
Court findings and evidence
Per the court filings summarized by Reuters and quoted in The Guardian, the judge found the terminations reflected "blatant" viewpoint discrimination and violated the First and Fifth Amendments. The filings include testimony from Justin Fox that he used ChatGPT "to highlight why [a] grant may relate to DEI" and to "pull out anything related to DEI," and they show examples where projects focused on industrial policy, archival work, or historical documentation were flagged as DEI-related (The Verge; NYT; PCMag). The filing also records the judge's observation that ChatGPT's outputs may simply have reflected the inputs and prompts DOGE staff supplied.
Industry context
Editorial analysis: The episode illustrates how lightweight prompt engineering and reliance on model outputs without domain definitions or human-in-the-loop review can lead to brittle classifications with downstream legal exposure. Observers familiar with public-sector deployments note that government use cases amplify risk when decisions affect constitutionally protected characteristics or statutory entitlements.
Implications for practitioners
For practitioners: Organizations using generative models for high-stakes screening or eligibility decisions should view this ruling as a cautionary example. The record illustrates problems that commonly arise when models are asked to classify short, context-poor text with single-label prompts: label noise, spurious correlations with protected characteristics, and no traceable reasoning behind each label. Data scientists and engineers should expect legal scrutiny when automated or semi-automated processes intersect with civil rights, funding allocations, or other statutory protections.
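By way of contrast, a more defensible screening pattern would pin down the label definition, require a machine-readable rationale from the model, and route every flag through human review with an audit record. The sketch below is a hypothetical illustration under assumed names; it is not drawn from any agency system, and the definition text, field names, and workflow are all assumptions.

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: the definition text, field names, and review
# workflow are illustrative assumptions, not any agency's real system.

LABEL_DEFINITION = (
    "A grant is 'in scope' only if its primary stated purpose matches "
    "the written policy definition supplied to human reviewers."
)

@dataclass
class ScreeningRecord:
    grant_id: str
    model_label: str          # e.g. "in_scope" / "out_of_scope" / "uncertain"
    model_rationale: str      # free-text reasoning returned by the model
    prompt_version: str      # ties the decision to an exact prompt text
    timestamp: str
    human_decision: str = "pending"  # no action until a human signs off

def screen(grant_id: str, summary: str, ask_model, prompt_version: str = "v1"):
    """Ask for a structured JSON verdict and log it for human review."""
    prompt = (
        f"Definition: {LABEL_DEFINITION}\n"
        f"Summary: {summary}\n"
        'Reply as JSON: {"label": "...", "rationale": "..."}'
    )
    reply = json.loads(ask_model(prompt))
    return ScreeningRecord(
        grant_id=grant_id,
        model_label=reply["label"],
        model_rationale=reply["rationale"],
        prompt_version=prompt_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Every record carries the model's rationale, the exact prompt version that produced it, and a human decision field that defaults to "pending," so no grant is acted on from a model label alone and the full chain of reasoning can be serialized into an audit trail.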
What to watch
Editorial analysis: Observers will likely track whether this ruling is cited in future challenges to automated decision making by government agencies. Practitioners should watch for follow-on guidance or policies from federal oversight bodies on the acceptable use of large language models in adjudicative or allocation decisions. Also monitor whether agencies publish implementation details or audit trails that show how models and human review interacted in decisions.
Bottom line
Per Reuters, The New York Times, The Verge, and PCMag, the court record ties ChatGPT outputs directly to a tranche of grant terminations, and the judge concluded the process produced unconstitutional viewpoint discrimination. The legal outcome underscores the operational and legal hazards of delegating high-stakes classification tasks to generative models without clear definitions, robust human review, and documented reasoning.
Scoring Rationale
This ruling is notable for its legal consequences around government use of generative AI in rights-adjacent decisions. It matters to practitioners because it exemplifies operational risks when LLM outputs feed high-stakes administrative actions. The story is recent and uses multiple primary sources, so it rates as a notable, policy-relevant item.