The US government moved to block Anthropic's tools after the company refused to loosen safety limits on its Claude AI for military uses.
In short: The Pentagon ordered federal agencies to stop using Anthropic’s AI after a dispute over safety limits on its Claude system.
On February 27, 2026, the Trump administration directed federal agencies to stop using Anthropic's technology. It labeled Anthropic a "supply chain risk," a designation the government uses for products it considers unsafe to depend on.
The dispute centers on Anthropic's AI assistant, Claude (a chatbot, meaning software that talks and writes like a person). According to the Financial Times report, the government wanted Anthropic to remove some of Claude's built-in limits so it could be used more freely for military purposes.
Anthropic CEO Dario Amodei refused. He said the limits are meant to prevent uses like mass domestic surveillance and fully autonomous weapons, meaning weapons that can select and attack targets without a person making the final call. He also argued that current AI can be unreliable without human oversight, like a confident intern who still needs a manager to check the work.
Defense Secretary Pete Hegseth criticized Anthropic’s stance, saying a private company should not be able to block military decisions. Meanwhile, competitors such as OpenAI and xAI have reportedly been willing to accept “all lawful use” terms, which could let them fill the gap left by Anthropic.
This is a rare case of the US government effectively blacklisting a major US AI company. It highlights a growing split between companies that want strict safety limits and government agencies that want fewer constraints for national security work.
Source: Financial Times