Anthropic says its unreleased Claude Mythos Preview model could raise cybersecurity risks, so it is sharing it only with a consortium of more than 40 companies.
In short: Anthropic says its new, unreleased AI model, Claude Mythos Preview, is too risky to release publicly, so it is sharing it only with a large industry group that will study cyber defenses.
Anthropic has announced an unreleased AI model called Claude Mythos Preview. The company says it is not safe to make it available to the public right now because of cybersecurity concerns.
The model was discussed on the April 9, 2026 episode of Hard Fork, a podcast from The New York Times hosted by Kevin Roose and Casey Newton. The hosts said rumors about the model circulated before the announcement, including a leaked draft blog post from about two weeks earlier.
On the show, the hosts described the buzz around Mythos as unusually tense, with people asking whether a model like this could make hacking easier or expose weaknesses across widely used software. They raised the idea of a "rewrite all software" moment: if the risks are as serious as feared, companies might feel pressured to rethink how programs are built and secured.
Anthropic says it is giving access to Claude Mythos Preview only to Project Glasswing, a consortium of more than 40 tech companies. The goal is for these companies to work together on defenses against possible cyberattacks that the model could help enable. Anthropic has not shared full technical details or any public release timeline.
Cybersecurity is the protection of computers and online services from break-ins, like locks and alarms for the internet. If a powerful AI system can help attackers find and exploit weaknesses faster, everyday people could feel it through more data theft, service outages, and fraud. This limited release is meant to give defenders a head start, but it also raises questions about how much the public should be told about high-risk AI tools before they become widely available.
Source: NYTimes