Scammers and hackers are annoyed by AI-generated content flooding cybercrime communities, researchers and forum posts show, complaining that it erodes trust and conversation.
In short: Some cybercrime forums are seeing more complaints about low-quality AI-written posts, even from the criminals who use those sites.
Researchers studying underground cybercrime forums found growing pushback against generative AI, meaning tools like ChatGPT that can write text for you. The team, led by University of Edinburgh researcher Ben Collier, reviewed 97,895 AI-related conversations posted between ChatGPT’s launch in 2022 and the end of 2025.
The complaints sound like what you might hear on mainstream social media. People said forums were filling up with shallow, copy-and-paste-style “explainers” that repeat basic security ideas. Others said they came for human conversation, not posts that read like they were written by a bot.
WIRED also reviewed posts on Hack Forums, a site for people who discuss hacking, and found similar anger. Some users said it “pisses” them off when members use AI to write threads. Others simply told people to stop posting “AI” content.
One reason this matters inside these communities is reputation. Many cybercrime forums work a bit like a marketplace plus a social club. Users build status by being helpful and trustworthy, and some members worry AI-written posts let people fake skill, like showing up to a study group with answers printed off a website.
Researchers say AI does not yet seem to have dramatically lowered the skill needed for lower-level cybercrime. Still, security analysts at Flashpoint report that hackers are discussing new AI-focused markets and newer models, while also warning that AI-generated tools and posts can be unreliable and sometimes expose the criminal’s own setup.
Source: Wired