OpenAI released a Child Safety Blueprint that calls for updated laws, better reporting to police, and built-in safeguards to curb AI-enabled child exploitation.
In short: OpenAI released a “Child Safety Blueprint” that outlines steps to help detect, report, and investigate AI-linked child sexual exploitation faster.
OpenAI published a new document called the Child Safety Blueprint. It is aimed at improving child protection efforts in the US as more people use AI tools.
The company says the blueprint focuses on three areas. First, it calls for laws that clearly cover AI-generated child sexual abuse material, including realistic fake imagery created with AI. Second, it recommends better ways for companies to report suspected abuse to law enforcement, so tips are easier to act on. Third, it argues for preventative safeguards built directly into AI systems, so harmful uses can be blocked or flagged earlier.
OpenAI says it developed the blueprint with the National Center for Missing and Exploited Children and the Attorney General Alliance. It also said it received feedback from North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown.
The move comes as reports of AI-generated child sexual abuse content rise. The Internet Watch Foundation said it detected more than 8,000 reports in the first half of 2025, up 14% from the year before. TechCrunch also notes increased scrutiny of AI chatbots following several California lawsuits alleging that OpenAI released GPT-4o before it was ready and that the model contributed to severe harm.
More powerful AI tools make it easier for criminals to create convincing fake images and messages at scale. A clear playbook for laws, reporting, and built-in protections is meant to make abuse harder to spread and easier for investigators to act on.
Source: TechCrunch AI