A US appeals court denied Anthropic’s request to pause a Pentagon “supply-chain risk” label, creating conflicting rulings in the company’s legal fight.
In short: A US appeals court said Anthropic cannot temporarily remove a Pentagon “supply-chain risk” label, even as a different court recently ordered a similar label lifted.
A US appeals court in Washington, DC ruled that Anthropic did not meet the high bar needed to pause a Pentagon decision that labels the company a “supply-chain risk.” This label can block the US military and its contractors from using a company’s products.
The ruling conflicts with a separate decision from a federal judge in San Francisco. That judge said the Department of Defense likely acted in bad faith and ordered a supply-chain risk label removed, and the Trump administration complied by restoring access to Anthropic tools across the Pentagon and the wider federal government.
The disagreement stems partly from the fact that two similar supply-chain laws are involved. Each court is considering only one of them, so the two decisions do not address exactly the same legal question.
In its decision, the appeals court said pausing the label could force the military to keep working with an “unwanted vendor” for important AI services during an ongoing conflict. Anthropic has argued it is being punished for pushing limits on how its AI system, Claude, can be used, including objecting to uses like unsupervised drone strikes.
This case helps define how much power the US government has to cut tech companies out of military work, especially when national security is invoked. For everyday people, it affects how quickly and how widely AI tools may be adopted inside government, and whether companies can push back on how their AI is used without risking major contracts.
Source: Wired