China has proposed new rules for AI companions, including disclosure reminders, break prompts, and usage limits for users who show signs of dependence.
In short: China’s internet regulator has proposed draft rules for AI chatbots that act like companions, with new limits aimed at reducing emotional dependence and long sessions.
The Cyberspace Administration of China (CAC) released draft regulations that target “anthropomorphic” interactive AI, meaning chatbots designed to talk and act more like a person. The draft focuses on tools that encourage emotional back-and-forth, like an AI companion that chats with you for hours.
The rules would require clear notices that users are talking to AI, not a human. Services would also need to nudge people to take a break after two hours of continuous use. Think of it like a built-in “time to stretch” reminder, but for chatting.
The draft also goes further than most jurisdictions by asking companies to watch for signs of dependence or addiction. If a user appears overly dependent, the service would be expected to restrict or slow that person’s access. The draft covers mental health risks too: services would need to detect signs of self-harm, point users to help, and in serious cases escalate to human moderators.
Another notable piece concerns data. If a company wants to use chat logs to train and improve its AI (like using past conversations as practice), it would need explicit opt-in consent from users.
These proposals show China treating emotional AI as more than a matter of personal choice. Commentary highlighted in The New York Times connects the rules to a broader government concern that young people should spend more time studying and building companies, and less time forming intense relationships with chatbots.
Source: NYTimes