Editors and reviewers report a surge of AI-written research papers that look real at first glance, making it harder to check what is reliable.
In short: Scientists and journal editors say AI makes it easy to produce large volumes of research papers that are hard to distinguish from genuine work and hard to verify.
Researchers say academic journals are being swamped with papers that appear well written but add little value or contain subtle errors. One telltale sign is a sudden wave of papers citing older studies at an unusual rate. Peter Degen, a researcher in Switzerland, traced a burst of citations to a 2017 paper back to many near-identical papers that reused the same public health dataset to churn out endless "prediction" studies.
Editors say the problem is getting worse because the writing now looks more natural. Marit Moe-Pryce, managing editor of the journal Security Dialogue, said submissions are up 100 percent compared with a year ago. She said it is harder to tell if a paper is genuine, partly AI-written, or fully AI-generated.
A key issue is that checking a paper still takes human time. AI can draft a paper quickly, but peer review relies on experts volunteering to read closely and look for mistakes. Several editors reported having to contact far more reviewers than before, in some cases asking 20 people just to get two responses.
Some journals have started restricting papers that reuse public datasets because many submissions follow the same template. Editors also worry about newer “agentic” AI tools, meaning AI systems that can plan steps, analyze data, and write a full paper with little help (like an intern that can work fast, but still needs supervision).
Publishers are debating a shift from trying to catch fake papers to requiring proof that the work is real, such as sharing more underlying data or verifying images. A bigger question is whether universities and funders will change incentives that reward quantity of papers, since an “infinite paper-writing machine” fits too well with a system that counts publications.
Source: The Verge AI