March 19 - Pentagon staffers, former officials and IT contractors who work closely with the U.S. military say they are reluctant to give up Anthropic’s AI tools, which they view as superior to alternatives, despite orders to remove them.
After a dispute between Anthropic and the Pentagon over guardrails for how the military could use its artificial intelligence tools, Defense Secretary Pete Hegseth designated the company a supply-chain risk on March 3, barring its use by the Pentagon and its contractors after a six-month phase-out period.
But the move is running into resistance, with some military users dragging their feet and others preparing to revert to Anthropic's platform in anticipation of the dispute being resolved.
“Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI,” said one IT contractor. “They think it’s stupid.” The contractor said Anthropic’s Claude AI model “is the best,” while xAI’s Grok often produced inconsistent answers to the same query.
RECERTIFYING SYSTEMS COULD TAKE MONTHS
The complaints suggest uprooting Anthropic from the Pentagon's networks will be neither quick nor painless. One contractor said recertifying systems that run on Anthropic's products for military use could take months.
Some Pentagon officials, staff and contractors spoke anonymously because they were not authorized to speak publicly.
The Defense Department, Anthropic and xAI did not respond to requests for comment.
AI tools have become essential for the U.S. military, which uses them for tasks ranging from targeting weapons and helping plan operations to handling classified material and analyzing information.
Anthropic announced a $200 million defense contract in July 2025 and quickly became embedded in the military's workflow. Claude became the first AI model approved to operate on classified military networks, and officials familiar with its use said adoption was strong. Within the federal government, Anthropic’s models were widely viewed as more capable than rival offerings.
Reuters has previously reported that the Pentagon used Claude tools to support U.S. military operations during the conflict with Iran, and sources said the technology remains in use despite the blacklisting. One expert described that as “the clearest signal” of how highly the Pentagon values the tool.
"It's a substantial cost to replace those models with alternatives," said Joe Saunders, the CEO of government contractor RunSafe Security. Saunders added that alternative systems would face a lengthy recertification process before they could be used on classified or military networks.
In the case of an existing system being replaced with a new one, certification could take 12 to 18 months, he said.
"It's not just costly, it's a loss of productivity," added Saunders, who helped the military incorporate AI chatbots.
Orders to stop using Claude are filtering through the Pentagon. One official said staff are complying because “no one wants to end their career over this,” but described the shift as wasteful.
Tasks previously handled by Claude, such as querying large datasets for information, are in some cases now being done manually with tools such as Microsoft Excel, the official said. Anthropic's Claude Code tool was widely used within the Pentagon to write software code, several of the people said.
Losing that tool has left developers frustrated, another senior official said, though the official added that developers should not rely on a single tool.
TOUGH TRANSITION
Removing Claude will be a major undertaking.
For example, Palantir's Maven Smart System – a software platform that supplies militaries with intelligence analysis and weapons targeting – uses multiple prompts and workflows that were built using Anthropic's Claude Code, according to two people familiar with the matter. Palantir, which holds Maven-related contracts with the Defense Department and other U.S. national security agencies with a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said.
Some staff are "slow-rolling" their replacement of Claude because they are actively using it to create workflows, or sequences of automated tasks, a Pentagon technologist said.
Developers are frustrated because shifting to new AI agents would mean losing the agents they created to sift through vast amounts of data.
The Defense Department has ordered contractors, including major defense firms, to assess and report their reliance on Anthropic products and to begin winding them down. Officials and contractors say they now face a strategic question: whether to pivot quickly to OpenAI, Google or xAI, or to unwind Anthropic in a way that allows for a rapid return if the Pentagon reinstates it.
One chief information officer at a federal agency said the agency plans to slow-roll the phase-out, betting that the government and Anthropic will reach an agreement before the six-month deadline.
"What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level," said Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute.