OpenAI has expanded its Trusted Access for Cyber program, offering a cyber-focused model to vetted security professionals in a deliberately controlled rollout.
In short: OpenAI is widening a gated program that lets approved cybersecurity professionals use a specialized ChatGPT model, rather than opening access to everyone.
OpenAI, the company behind ChatGPT, has expanded its Trusted Access for Cyber program. The program offers a cyber-focused version of its model, described as GPT-5.4-Cyber, to vetted cybersecurity professionals.
This is not the same as making cybersecurity tools “open” to the public. It is closer to giving a spare key only to verified locksmiths, not leaving the door unlocked for anyone to try.
The available information also suggests why OpenAI is taking a careful approach. ChatGPT can help defenders by spotting signs of attacks, analyzing suspicious activity, training staff to recognize phishing, and simulating attack scenarios (like a fire drill, but for hacking).
AI tools can improve security, but they can also be misused. The same kind of system that helps write safe code can sometimes be pushed to write harmful code, or to produce more convincing scam emails. That risk is one reason some companies have restricted or banned employee use of ChatGPT in certain settings, especially where sensitive data could be exposed.
Some reports have suggested OpenAI is taking a “more open” cybersecurity approach than rival Anthropic. However, the information provided here does not include details about Anthropic’s cybersecurity strategy, so a factual comparison is not possible based on these sources.
Source: NYTimes