Anthropic launched Claude Opus 4.7, a new, widely available AI model. It is less capable than Mythos Preview, which remains limited to select partners.
In short: Anthropic has released Claude Opus 4.7, its most powerful broadly available AI model, and it includes extra protections related to cybersecurity.
Anthropic announced Claude Opus 4.7, a new version of its Claude Opus model. The company says it improves on Opus 4.6 for advanced software work, especially complex coding that used to need more step-by-step guidance.
Anthropic also says Opus 4.7 is better at analyzing images, following instructions, and creating slides and documents with more “creativity.” (A model is the core AI system, like the engine inside a car.)
The release comes soon after Anthropic revealed Claude Mythos Preview, a separate model aimed at cybersecurity. Anthropic says Mythos Preview is actually stronger than Opus 4.7 on every relevant evaluation, but it is not available to the general public. For security reasons, Mythos Preview is currently limited to select partners, including Nvidia, JPMorgan Chase, Google, Apple, and Microsoft.
Anthropic said it is using Opus 4.7 to test new cybersecurity safeguards first. It also said it deliberately tried to reduce Opus 4.7’s cyber capabilities during training. Think of it like testing stronger locks on a less powerful tool before handing out a more powerful one.
Anthropic also introduced a Cyber Verification Program for security professionals who want to use Opus 4.7 for cybersecurity tasks, like finding software weaknesses. The program may relax some of the safeguards for approved users.
More capable AI can help people write and review code faster, but it can also make it easier to find and exploit security holes. Anthropic’s approach shows how AI companies are trying to balance usefulness with safety by limiting the most capable models until they are confident in the protections.
Source: The Verge AI