Online deepfakes have jumped from roughly 500,000 in 2023 to about 8 million in 2025, and that accelerating growth is fueling scams and spreading false videos and calls.
In short: Deepfakes, AI-generated fake videos or audio, are spreading far faster online and are increasingly used for scams and misinformation.
Deepfakes used to show up as occasional fake clips. Now they are appearing at mass scale. Estimates cited by the New York Times put online deepfakes at about 500,000 in 2023 and roughly 8 million in 2025, with yearly growth approaching 900%.
A big driver is how easy the tools have become. Newer AI systems can generate realistic faces, voices, and even full-body performances that can look real on social media or in low-quality video calls (like a blurry Zoom). Voice cloning is a major concern. With just a few seconds of someone’s speech, scammers can copy the person’s voice, including emotion and breathing, and use it to make convincing calls.
The result is more fraud and more confusion. Some reports say major retailers are receiving over 1,000 AI-powered scam calls a day. Businesses also report deepfake-related incidents, and social media services are taking down millions of suspected deepfake videos each month.
Detection is improving, but it is in a race. Companies are building “deepfake detectors,” which are tools that look for signs of fakery by checking multiple clues at once, such as the video, the audio, and how they match (like cross-checking a signature with a photo ID). Experts also warn that deepfakes may soon be made in real time during video calls, which could make it harder for people and companies to verify who they are talking to.
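The multi-clue approach described above can be sketched as a simple scoring scheme: each signal (video artifacts, audio artifacts, audio-video sync) gets a fakeness score, and a weighted combination decides whether to flag the clip. The weights, scores, and threshold below are purely illustrative, not how any real detector is calibrated.

```python
# Toy sketch of the multi-signal idea behind deepfake detectors:
# combine independent fakeness scores (video, audio, AV sync) into
# one verdict. All numbers here are hypothetical.

def combine_scores(scores: dict, threshold: float = 0.5) -> bool:
    """Flag a clip as a suspected deepfake if the weighted average of
    per-signal fakeness scores (0 = authentic, 1 = fake) crosses the
    threshold. Weights are illustrative assumptions."""
    weights = {"video": 0.4, "audio": 0.3, "av_sync": 0.3}
    total = sum(weights[k] * scores[k] for k in weights)
    return total >= threshold

# A clip whose audio and lip movements disagree scores high on av_sync,
# which can tip the combined verdict even if each single signal is weak.
clip = {"video": 0.2, "audio": 0.6, "av_sync": 0.9}
print(combine_scores(clip))  # weighted average 0.53 -> flagged
```

The point of combining signals is robustness: a fake that fools the video check alone (say, in a blurry call) can still be caught by the mismatch between its audio and lip movements.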
Source: NYTimes