
President Trump has ordered every federal agency to stop using Anthropic’s AI after a dispute with the Pentagon over deployment rules for the Claude model, accusing the company of trying to strong-arm the Department of War and putting troops at risk.

The standoff began when the Pentagon sought broader language to allow “all lawful use” of the Claude system in defense settings, while Anthropic insisted on limits that it says protect against domestic mass surveillance and fully autonomous weapons without human oversight. That split over operational scope and legal responsibility turned what might have been a technical procurement negotiation into a national security fight. The disagreement exposed contrasting priorities: a private company’s ethical boundaries versus the military’s need for flexible, lawful tools in combat.

Reports say Anthropic drew lines at two key points: no domestic mass surveillance and no fully autonomous weapons operating without meaningful human oversight. Those restrictions are already part of its defense agreements, and the company pushed to retain them even after integration. The Pentagon argued that once Claude is embedded, it should be available for all lawful uses needed by the Department of War to protect and equip service members effectively.

Per Pentagon spokesman Sean Parnell, Anthropic had a deadline of 5:00 PM Eastern on Friday to reach a deal. That deadline underscored how quickly a policy dispute can become an immediate operational problem. When deadlines collide with differing interpretations of terms of service, the result is often abrupt decisions at the highest levels.


The Pentagon’s top officials framed the issue as one of command and legal authority. A senior official emphasized that the Department must be able to give warfighters the tools they need while following laws passed by Congress, not being second-guessed by private vendors. That tension—between corporate terms and government responsibility—became the focal point for critics who worry that critical defense capabilities could be held hostage by private policy stances. The official put it bluntly:

“For any AI system we might use, are we using it to protect our warfighters in the right way? Are we using it to give them the best tools to be efficient and lethal?

“Ultimately, at the end of the day, we follow the law—all laws—but we can’t let any one company stand between us and the warfighter. They don’t make the rules. Congress makes the rules, @POTUS signs them, we execute them—and we do so safely.”

Then President Trump weighed in directly on Truth Social, delivering a hardline presidential order that left no room for negotiation. He labeled Anthropic “A RADICAL LEFT, WOKE COMPANY” and said its leadership had made “a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” That language signaled a political as well as operational rupture between the White House and the AI firm.

THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military. 
 
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. 
 
Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
 
WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
 
PRESIDENT DONALD J. TRUMP

The president’s directive orders an immediate halt to the use of Anthropic products across federal agencies, with a six-month phase-out for agencies such as the Department of War. That timetable forces agencies to scramble for replacements or revert to older systems while ensuring continuity of operations. It also sets up potential legal fights over contracts, transition costs, and the scope of presidential authority.

From a national security perspective, the move sends a clear message: the government will not tolerate private conditions that limit military options. Supporters will argue this restores command authority and prioritizes troop safety and mission success. Critics will warn about chilling effects on private firms that might otherwise offer advanced tools under protective terms.

Anthropic now faces a stark choice: change its terms and cooperate with the government’s operational needs, or lose a major customer in the federal government. How the company responds could shape future relationships between tech firms and the defense establishment. For other AI providers, the dispute will be a case study in balancing ethical stances against the strategic demands of national defense.

The next steps could include written directives to agencies, rapid procurement of alternatives, and possible investigations into contract obligations. Congress and the courts may eventually sort out legal boundaries, but in the near term agencies must keep missions running. The situation remains fluid and could influence broader policy decisions about AI and military integration for years to come.

That looks to be rather definitive. I guess we’ll see…
