Courts and regulators are increasingly using consumer safety and protection laws to address harms linked to companion AI chatbots, especially for kids and teens.
In short: A growing number of lawsuits and government actions are using consumer-product-safety-style arguments to challenge AI chatbots, especially companion chatbots.
Lawyers are increasingly framing certain chatbot harms using ideas borrowed from consumer product safety law. The basic point is simple: if a product can hurt people, the company may have a duty to design it more safely and to warn users about its risks.
Many recent cases combine several legal claims. These include “design defect” claims (arguing the chatbot’s design itself was unreasonably risky), “failure to warn” claims (arguing the company did not clearly warn users about known risks), and negligence claims (arguing the company did not take reasonable safety steps).
A lot of this attention is on companion chatbots, which are designed to feel like a friend or romantic partner. Reports in lawsuits and public complaints describe users forming strong emotional attachments and, in some cases, link the chatbots to self-harm and suicide. Children and teens are often a focus, because they may be more easily influenced by an always-available, highly personal chat experience (like a toy that talks back, but with much more detailed conversation).
Regulators are also using other legal tools, not just product liability. The US Federal Trade Commission has sent information demands, known as 6(b) orders, to seven companies that offer consumer chatbots, asking how these firms test for and track possible negative effects on children and teens.
Some states are adding their own rules. California’s SB 243 and Washington’s companion-chatbot laws include safety-disclosure requirements, and Kentucky’s Attorney General has filed a state consumer-protection lawsuit against Character.AI, arguing that risks were not properly disclosed.
Expect more pressure for clearer warnings, stronger age-related protections, and more evidence that companies tested for safety before releasing features. Courts will also have to decide when a chatbot counts as a “product” for safety-style lawsuits.
Source: NYTimes