Reports

Trump Orders Federal Agencies to Drop Anthropic Amid Pentagon AI Standoff

Written by TSP Reporter

US government conflict over military use of artificial intelligence

President Donald Trump issued a directive ordering all US federal agencies to immediately stop using technology from the artificial intelligence company Anthropic. The order comes amid an escalating standoff between the Pentagon and the company over how its AI models may be used.

Trump wrote on the Truth Social platform that federal agencies “don’t need it, don’t want it, and will not do business with them again.” Agencies have been instructed to cease all use without delay, although a six-month phase‑out period has been included for systems still in operation. Trump has said Anthropic must cooperate during that transition or face “major civil and criminal consequences.”

The Pentagon Stand-Off

The conflict stems from disagreements between the Pentagon and Anthropic over how the startup’s AI, particularly its Claude model, may be deployed:

Pentagon’s Position:

The military wants unrestricted access to use AI models “for all lawful purposes,” including defence and national security applications.

Officials argue that companies with defence contracts cannot impose mission restrictions that limit military use of technology.

The Pentagon set a deadline of 5:01 p.m. ET on a Friday for Anthropic to drop its restrictions or face consequences such as contract termination or being declared a supply chain risk.

Anthropic’s Position:

CEO Dario Amodei said the company “cannot in good conscience accede” to demands to allow unrestricted use, especially for autonomous weapons without human oversight or mass domestic surveillance.

Anthropic said the latest contract language did not sufficiently protect against these uses and reaffirmed its intent to continue negotiating.

Pentagon Pressure Tactics

Before Trump’s directive, the Pentagon, under Defence Secretary Pete Hegseth, had already escalated pressure by issuing Anthropic an ultimatum to remove safety guardrails from its AI models. The pressure took two forms:

Officials threatened to invoke the Defence Production Act, a Cold War‑era authority that could compel companies to change production or technology use in the name of national defence, though it has never before been used for AI.

They also warned that Anthropic could be labelled a “supply chain risk,” a designation traditionally applied to companies based in foreign adversary nations (for example, restrictions on certain telecom firms).

Industry experts say this standoff could establish precedent for how US military and technology firms negotiate ethical and legal limits on AI in defence.

Industry Reactions

The dispute has drawn unusual solidarity within the AI sector:

OpenAI publicly expressed agreement with some of Anthropic’s “red lines,” especially on restricting AI use in autonomous lethal systems and mass surveillance.

Rival companies and employees have criticized the Pentagon’s stance, saying it risks undermining responsible AI principles.

This confrontation marks one of the most public clashes between the US government and a major AI company over ethical control of powerful technology. The outcome could influence the future of defence contracting, relations between the government and the AI industry, and how military AI systems are governed and regulated.

It also raises the question of how much autonomy companies will retain over how their innovations are used in warfare and intelligence.

The dispute further brings to the forefront the ongoing tension between national security priorities and private-sector ethical frameworks, especially as advanced AI becomes a core part of defence capabilities.
