President Donald Trump has announced a directive to all federal agencies to immediately discontinue their use of Anthropic’s artificial intelligence technology. This decision, conveyed in a Truth Social post, stems from a disagreement over the company’s refusal to permit the U.S. military unrestricted use of its AI models, particularly for applications like mass domestic surveillance or autonomous weapons systems. Despite the broad mandate, the Pentagon has been granted a six-month grace period to phase out Anthropic’s technology, acknowledging its current integration into classified military operations.
The core of the dispute centers on Anthropic’s ethical guidelines, which prohibit the deployment of its Claude models for purposes the company deems problematic. This stance has put the San Francisco-based AI firm at odds with the Defense Department, which has reportedly insisted on the right to use the technology in all lawful contexts. Trump, labeling Anthropic “woke” and “left-wing,” asserted that the company’s position jeopardizes national security and endangers troops. His public statement declared an end to federal dealings with Anthropic and added a veiled threat to use presidential power to compel compliance should the company not adjust its policy.
Defense Secretary Pete Hegseth had previously upped the ante, threatening to revoke Anthropic’s $200 million contract with the U.S. military. He also raised the prospect of categorizing the company as a “supply-chain risk,” a designation typically reserved for entities associated with foreign adversaries, effectively preventing other Pentagon contractors from engaging with Anthropic. Hegseth further suggested the potential invocation of the Defense Production Act, which could force Anthropic to provide an unrestricted version of its Claude AI on grounds of national security.
In response to these escalating pressures, Dario Amodei, Anthropic’s CEO, articulated the company’s unwavering position in a letter, stating, “We cannot in good conscience accede to their request.” This firm refusal was met with sharp criticism from Emil Michael, the Pentagon’s undersecretary for research and engineering, who publicly accused Amodei of having a “God complex” and attempting to control the U.S. military through his company’s policies. While the Defense Department maintains it has no intention of engaging in mass surveillance or removing human oversight from weapons targeting, the divergence over the practical definitions of “autonomous” and “surveillance” remains a significant point of contention.
Anthropic had occupied a unique position as the sole AI company cleared for use in classified government settings. That landscape has recently shifted, however, with Elon Musk’s xAI reportedly agreeing to allow the Pentagon to use its AI under lawful conditions. Other major players in the AI sector, including Google and OpenAI, are currently used in unclassified environments and are in discussions with the Defense Department about potential classified work. The broader tech industry appears to be increasingly grappling with similar ethical considerations. OpenAI CEO Sam Altman, for instance, informed his employees that his company would advocate for limitations on autonomous weapons and mass surveillance akin to Anthropic’s. Concurrently, more than 100 Google employees have reportedly sent a letter to chief scientist Jeff Dean urging similar restrictions on the military’s use of their Gemini AI models. This developing situation underscores a growing tension between national security interests and the ethical frameworks guiding leading artificial intelligence developers.