The Pentagon is weighing whether to cut business ties with Anthropic and label it a supply chain risk, a move that could force defense contractors to stop using its AI tools, according to Axios.
Defense Secretary Pete Hegseth is close to taking action after months of tense talks over how the military can use Anthropic’s Claude AI model. Claude is currently the only AI system approved for use on classified U.S. military networks and was reportedly used during the January Maduro raid.
🚨🚨 Exclusive: Pentagon threatens Anthropic punishment usually reserved for foreign adversaries https://t.co/X3BTm3m09Z
— Jim VandeHei (@JimVandeHei) February 16, 2026
Anthropic has resisted allowing unrestricted military use, citing concerns about mass surveillance and fully autonomous weapons. Pentagon officials argue those limits are impractical and want AI systems cleared for all lawful purposes.
If imposed, the designation would disrupt dozens of firms that rely on Claude. While the contract at stake is valued at up to $200 million, officials say the dispute signals tougher standards for all AI providers working with the U.S. military.