Companies that pay hackers to find software flaws say AI is driving a surge in low-quality bug reports, forcing some programs to pause or add stricter checks.
In short: More companies are being flooded with AI-generated, low-quality “bug bounty” reports, making it harder to find real security problems.
Bug bounty programs pay independent security researchers to find weaknesses in software, then report them responsibly (like paying someone to test your locks and tell you which window is easy to open). In recent months, companies running these programs say they are getting swamped by reports written with the help of generative AI, which can produce text quickly but is often wrong.
Bugcrowd, a bug bounty company whose customers include OpenAI, T-Mobile, and Motorola, said reports more than quadrupled over a three-week period in March. It said most of those reports turned out to be false. Curl, a widely used tool for moving data online, suspended its paid bug bounty program in January after what its creator called an “explosion” of low-quality AI reports.
Security leaders say the problem comes from several groups. Some are beginners using AI to try bug hunting for the first time. Others are experienced researchers who can be misled by AI “agents” (software that tries to carry out tasks on its own, like an assistant that sometimes makes confident mistakes). A third group is building automated systems that scan for issues and submit reports end-to-end.
HackerOne, which runs a bug reporting platform for groups including Goldman Sachs, Google, and the US Department of Defense, said submissions rose 76% in the year to March. It said the share of reports that were real vulnerabilities stayed steady at about 25%.
Companies are starting to respond with stricter background checks and more filtering, including using AI to sort incoming reports. Some groups, like Nextcloud, have paused programs until they can screen submissions more effectively. At the same time, platforms say higher-quality, AI-assisted reports are also increasing, so the goal may be better screening rather than banning AI use.
Source: Financial Times