
The federal court’s ruling striking down the mass termination of NEH grants raises two big problems: sloppy reliance on AI in agency decision-making and what looks like judicial activism reshaping executive priorities. Judge Colleen McMahon found that more than 1,400 grants were cut after agency reviews that leaned on ChatGPT, and she concluded those terminations violated constitutional protections. The opinion faulted the Department of Government Efficiency (DOGE) for exceeding statutory authority and for a process that was arbitrary and opaque. The case now heads toward appeal, and it will test both how courts treat AI-assisted government actions and how far judges will go to second-guess politically driven agency decisions.

The challenge began when organizations and individuals sued over the April 2025 terminations of NEH grants, alleging the cuts targeted projects tied to certain viewpoints and identities. The administration defended the actions as lawful efforts to implement presidential directives, eliminate grants associated with “diversity, equity, inclusion, and accessibility” (“DEIA”), “diversity, equity, and inclusion” (“DEI”), “environmental justice,” and “gender ideology,” and reduce discretionary spending in accordance with the priorities of the new administration. Judge McMahon, however, found the terminations unlawful, concluding that they amounted to viewpoint- and status-based targeting that triggers First Amendment and equal protection scrutiny. The ruling did not treat the cuts as mere budgetary choices; it treated the pattern as constitutionally suspect because of the reasons and methods used to pick targets.

A striking part of the opinion focuses on how government staff used ChatGPT to sift spreadsheets and flag grants for termination. The judge described AI-generated determinations made from sparse descriptions and criticized the lack of human judgment applied to those outputs. In blunt language she warned that generative AI can hallucinate and that, without proper context and oversight, it may generate rationales that simply reflect what users seem to want to find. From a conservative perspective, the real problem isn’t AI itself but the bureaucratic shortcut: using an automated tool to rubber-stamp politically motivated cuts.

The record reflects that these ChatGPT determinations were generated without any additional context beyond the cursory spreadsheet descriptions themselves. Given what courts now know about the hallucinatory propensities of ChatGPT and similar generative-AI tools, it would hardly be surprising if ChatGPT inferred, from DOGE’s repeated requests, that [DOGE employees Justin Fox and Nate Cavanaugh] were looking for reasons why grants could be characterized as DEI – and therefore terminable – and supplied “rationales” simply in order to satisfy the user’s perceived demand. The utter lack of reasoning behind so many of its “rationales” certainly suggests as much.

The judge concluded that DOGE officials lacked lawful statutory authority to direct or influence the NEH terminations and that the decision-making process was arbitrary and infected by improper AI-assisted classifications. That combination of supposed overreach and dicey methodology led Judge McMahon to order the grants restored at the district-court level. Conservatives who support the administration’s policy goals should be uneasy about this result, because it empowers courts to overturn politically driven executive choices based on procedural findings and constitutional labels. At the same time, the opinion highlights a genuine governance failure: agencies must use robust human review when AI tools touch decisions that affect speech and funding.
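
To make that oversight point concrete, here is a minimal, purely illustrative sketch in Python of what “AI as an aid, human as the decider” can look like. Every name in it (GrantRow, ai_flag, human_review) is invented for illustration and describes nothing in the court record: the model may flag a record, but no termination is recorded unless a named reviewer supplies a written, grant-specific rationale.

```python
# Illustrative only: a hypothetical human-in-the-loop step for AI-flagged grant
# records. All names here are invented; nothing below reflects the agency's
# actual systems or the court record.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GrantRow:
    grant_id: str
    description: str  # the short spreadsheet description the model would see


@dataclass
class Decision:
    grant_id: str
    terminate: bool
    rationale: str    # written justification supplied by a human reviewer
    reviewer: str


def ai_flag(row: GrantRow) -> bool:
    """Stand-in for a model call that merely *suggests* a grant for review."""
    return "diversity" in row.description.lower()


def human_review(row: GrantRow, flagged: bool,
                 reviewer: str, rationale: Optional[str]) -> Optional[Decision]:
    """A flag alone never terminates a grant.

    Action is recorded only when a named reviewer supplies a grant-specific,
    written rationale; otherwise the AI suggestion is discarded.
    """
    if not flagged or not rationale:
        return None
    return Decision(row.grant_id, True, rationale.strip(), reviewer)


if __name__ == "__main__":
    rows = [
        GrantRow("NEH-001", "Oral histories of rural communities"),
        GrantRow("NEH-002", "Diversity in 19th-century publishing"),
    ]
    # The second row gets flagged, but only a documented human decision counts.
    decisions = [
        d for row in rows
        if (d := human_review(row, ai_flag(row), reviewer="J. Smith",
                              rationale=None))  # no rationale -> no action
    ]
    print(f"{len(decisions)} termination(s) recorded; AI flags alone changed nothing.")
```

The design point is the audit trail: the AI flag is advisory metadata, and every recorded action carries who decided and why, which is exactly the kind of documented human judgment the opinion found missing.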

The case will almost certainly move to the U.S. Court of Appeals for the Second Circuit, and the administration is likely to seek a stay while it appeals. Appeals courts traditionally give deference to executive policy choices, especially on budget and program priorities, but this decision was issued after discovery and detailed factual findings, which raises the bar for reversing it. For conservatives, the appeal is not just about these specific grants but about whether a district judge can effectively rewrite administration priorities by finding constitutional violations tied to process. How appellate judges weigh deference to elected officials against district-court factual findings will matter a great deal here.

Beyond this dispute, the ruling sends a broader warning to any agency experimenting with AI: courts are watching how these tools are used, especially when they touch on constitutionally protected interests. The real lesson for government is straightforward—use AI as an aid, not as the final arbiter, and ensure meaningful human oversight at every step. How much human oversight is legally required when AI influences government decisions is now a live question for the judiciary. How much do we want there to be?
