A Wired test using Charlemagne Labs’ tool showed an AI model could write and carry out a realistic phishing-style conversation to lure a target into clicking a link.
In short: Security researchers showed that today's AI models can carry on realistic, back-and-forth conversations that add up to a convincing phishing scam.
A reporter at Wired watched a simulated phishing attempt play out that was written and managed by an AI model called DeepSeek-V3. Phishing is when someone tries to trick you into clicking a link or sharing access, often by pretending to be a real person or offering a tempting opportunity.
The messages were tailored to the target: they referenced specific interests and sounded friendly and informed, like a well-researched stranger at a conference. Over multiple emails, the "attacker" kept the conversation going and tried to nudge the target toward a link to a Telegram bot.
This was not a real attack. It was a controlled test run using a tool from a startup called Charlemagne Labs. The tool can assign different AI models to play “attacker” and “target,” then run many trials to see which models are most convincing and which models spot the scam.
The reporter also tested other AI models, including Anthropic's Claude, OpenAI's GPT-4o, Nvidia's Nemotron, and Alibaba's Qwen. Some models produced clumsy attempts or refused to participate, but the overall takeaway was that AI can make it easier to produce large volumes of believable scam messages.
Experts quoted by Wired said many workplace hacks begin by manipulating people, not by breaking software. As AI writing improves, it may become harder to tell when a message is a scam, especially if it weaves in personal details scraped from public sources. Tools that flag suspicious messages, like spam filters but for chat and email, may become more common, but users will still need to be careful with unexpected links and requests.
Source: Wired