The Senate has formally approved the use of specific AI chatbots for staff work while leaving a major player off the roster, and that omission is colliding with national security moves, legal action, and differing policies across the Capitol.
The Senate guidance now names three approved systems that staff may use for research, drafting, and analysis: OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot. This brings widely used AI tools into routine congressional workflows and clarifies what aides can rely on for nonclassified tasks. The move codifies a shift many staffers had already made in practice over the past two years.
Notably absent from the list is Anthropic’s Claude, a gap that stands out given Claude’s presence elsewhere in Washington. The House has permitted the use of Claude alongside ChatGPT, Gemini, and Copilot under internal guidance adopted earlier, so the Senate’s omission creates an immediate inconsistency between the two chambers. That split matters because staff who move between offices now face different rules for the same basic tasks.
“Copilot can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis,” the Senate memo states.
Anthropic’s absence coincides with a broader federal action that labeled the company a “supply chain risk,” a designation with real consequences for government use of private technology. The designation can block a company’s products from federal systems and networks, which effectively halted Anthropic’s tools across many agencies. That official move prompted an immediate and forceful corporate response.
Anthropic sued the Pentagon and several federal agencies, saying the designation was unlawful and beyond the government’s authority. The company’s legal filing paints the administration’s action as an unprecedented overreach and frames the dispute as more than a simple product ban. The lawsuit is now the venue for a larger debate over who sets the rules for AI in government.
The company said the administration’s move was “unprecedented and unlawful,” challenging the order that blocked Anthropic’s AI systems from federal use.
At the heart of the disagreement are competing views about guardrails and military use. Reporting around the dispute indicates Anthropic sought to restrict how the military could deploy Claude, asking for assurances that the technology would not power mass domestic surveillance or lethal autonomous weapons. Those limits reflect a corporate effort to shape downstream uses of its AI.
“The dispute stems from guardrails that Anthropic sought to impose on the military’s use of its Claude AI system,” the report explains. “The company sought assurances the technology would not be used for mass surveillance of Americans or to power lethal autonomous weapons.”
Defense planners, for their part, have been pushing to expand AI across intelligence analysis, cybersecurity, and operational planning, viewing these systems as critical infrastructure rather than experimental toys. That institutional push pulls in the opposite direction from companies that want contractual or ethical constraints on certain deployments. The tension between adoption and caution is now playing out in court and policy memos.
The Senate memo itself stays narrowly practical, listing approved tools and warning staff against entering classified or sensitive information into those systems. It does not mention the lawsuit, the supply chain designation, or Anthropic by name, and it focuses on day-to-day use cases rather than the national security fight. For staff, the guidance is a working document that clarifies permitted software and basic safeguards.
The differing approaches of the House and Senate create operational friction and underscore how fragmented AI policy can be within a single government. Offices in the two chambers now have different lists of acceptable AI tools, and federal agencies face yet another set of restrictions tied to the administration’s designation. Those mismatched rules complicate oversight, procurement, and cross-branch collaboration.
Practically speaking, ChatGPT, Gemini, and Copilot now have official clearance inside the Senate for routine tasks, while Claude remains excluded and is the subject of active litigation. The dispute touches on fundamental questions about private companies setting limits on their tech, the government’s authority to restrict suppliers, and how far military adoption of AI should extend. The outcome of the lawsuit and related policy moves will shape how Congress and the federal government use AI going forward.