Reports say newer AI models from OpenAI and Anthropic can help hackers move faster, even as both companies and outside defenders build AI tools to reduce the risk.
In short: Newer AI systems can help attackers move faster and at larger scale, and that is driving more use of AI tools for defense.
Researchers and security officials are warning that advanced AI models from OpenAI and Anthropic can be misused to help carry out cyberattacks. A cyberattack is a break-in to computer systems, like a burglar getting into a building, but done through the internet. The concern is that these models can help find weak spots, write attack code, and steal logins and data.
Anthropic has told government officials that an unreleased model it calls “Mythos” has unusually strong cyber skills compared with other AI systems. The company warned that it could increase the chances of large-scale attacks if misused.
Examples are already appearing. The report describes a Chinese state-backed group using AI “agents” (software that can take steps on its own, like an assistant following a checklist) to target around 30 organizations. In that case, the AI reportedly handled 80 to 90 percent of the tactical work, including scouting targets, testing possible entry points, moving through networks, and taking data out. Separately, a hacker used Anthropic’s Claude in attacks on Mexican government agencies, leading to the theft of data such as tax records and voter information.
Another risk is internal. Companies say employees sometimes connect AI tools to work systems without proper oversight, which can create new openings for attackers.
AI companies and security teams are also building AI defenses, which could turn into an arms race. OpenAI says it is training models to refuse harmful requests, monitoring for abuse, and working with outside testers. It also has an AI security research agent called Aardvark in private testing to help find vulnerabilities before attackers do.
Source: NYTimes