TechCrunch reports that outside contractors working on Meta AI could access raw user data in training tasks, raising new privacy concerns.
In short: Reports say the exposure came from Meta's use of outside contractors on AI training tasks involving unfiltered personal user data, not from a rogue AI system.
TechCrunch reported that in August 2025, people hired as contractors to help improve Meta AI were able to view raw personal information from users in the US and India. The report says this data included names, phone numbers, email addresses, Instagram usernames, gender, hobbies, and selfies.
According to one contractor quoted in the report, around 60 to 70 percent of roughly 5,000 tasks per week contained this kind of personal data. The contractors worked through third-party firms, including Outlier and Alignerr, which Meta used to support AI training work.
The key point in the reporting is that it describes no evidence of a “rogue AI agent” accidentally exposing data. Instead, the exposure happened because humans were asked to review information that had not been properly filtered first, like handing someone a full folder of documents when they only needed a few pages.
The report also points to other recent privacy problems around Meta AI. One example is the April 2025 launch of a “Discover” feed where some user prompts, including sensitive personal situations, became public because of default settings.
Many people assume their chats, photos, and account details are seen only when absolutely necessary. If contractors can view unfiltered data during AI improvement work, personal details may be seen by more people than users expect, and that kind of exposure tends to draw closer scrutiny from regulators.
Source: TechCrunch AI