


Pentagon tells agencies to stop using Anthropic after Claude dispute

The US government moved to block Anthropic tools after the company refused to change safety limits on its Claude AI for military uses.

1 day ago • AI Policy & Regulation

In short: The Pentagon ordered federal agencies to stop using Anthropic’s AI after a dispute over safety limits on its Claude system.

What happened

On February 27, 2026, the Trump administration directed federal agencies to stop using Anthropic’s technology, calling the company a “supply chain risk” — a designation the government applies to products it considers unsafe to depend on.

The dispute centers on Anthropic’s AI assistant, Claude (a chatbot, meaning software that talks and writes like a person). According to the Financial Times report, the government wanted Anthropic to remove some of Claude’s built-in limits so the system could be used more freely for military purposes.

Anthropic CEO Dario Amodei refused. He said the limits are meant to prevent uses like mass domestic surveillance and fully autonomous weapons, meaning weapons that can select and attack targets without a person making the final call. He also argued that current AI can be unreliable without human oversight, like a confident intern who still needs a manager to check the work.

Defense Secretary Pete Hegseth criticized Anthropic’s stance, saying a private company should not be able to block military decisions. Meanwhile, competitors such as OpenAI and xAI have reportedly been willing to accept “all lawful use” terms, which could let them fill the gap left by Anthropic.

Why it matters

This is a rare case of the US government effectively blacklisting a major US AI company. It highlights a growing split between companies that want strict safety limits and government agencies that want fewer constraints for national security work.

Source: Financial Times
