Experts and college students tested AI agents in simulated hacking and defense exercises, and the agents completed many tasks on their own.
In short: Cybersecurity contests are increasingly letting AI agents work alongside experts and students to attack and defend practice networks, and the agents are holding their own.
National-style cybersecurity competitions are putting AI agents into red team and blue team exercises. A red team tries to break in, like a safecracker testing a bank vault. A blue team tries to spot the break-in and stop it.
An AI agent is a type of AI that can take actions on its own, not just answer questions. You can think of it like a junior coworker that can follow a plan, click through tools, and keep going without someone guiding every step.
According to The New York Times, experts and college students used AI agents in these contests to try to break into and defend computer networks. The agents also performed reasonably well without constant human help, which is one reason organizers are testing them in controlled, simulated environments.
These contests matter because they can show how quickly AI agents might become useful to both defenders and attackers. If an agent can find weak spots and move through systems faster than a person, security teams may need new rules, stronger monitoring, and more training to keep up. Expect more events that measure not just whether an agent succeeds, but whether it can be kept safe from manipulation and used responsibly.
Source: The New York Times