OpenAI supports an Illinois bill that would protect some AI developers from lawsuits after extreme harm, provided they meet reporting rules and have not acted recklessly.
In short: OpenAI is supporting an Illinois bill that would make it harder to sue large AI developers when their AI is linked to extreme, large-scale harm.
OpenAI backed an Illinois state bill called SB 3444. The bill would shield certain AI developers from legal responsibility for “critical harms,” like incidents that cause death or serious injury to 100 or more people, or at least $1 billion in property damage.
SB 3444 focuses on “frontier” AI, meaning very large AI systems that cost more than $100 million in computing to train. That definition could cover major labs such as OpenAI, Google, xAI, Anthropic, and Meta. OpenAI said it supports this approach because it targets the biggest risks and could reduce a “patchwork” of different state rules.
Under the bill, an AI lab would generally be protected if it did not intentionally or recklessly cause the incident and if it posts safety, security, and transparency reports on its website. The bill lists examples like a bad actor using AI to help create chemical, biological, radiological, or nuclear weapons. It also covers cases where an AI model acts on its own in a way that would be a crime if a person did it, and that conduct leads to those extreme outcomes.
Some policy experts told WIRED the bill goes further than proposals OpenAI has supported before. A critic from the Secure AI Project said polling in Illinois found strong opposition to exempting AI companies from liability, and he thinks the bill has a slim chance of passing.
This bill is about who pays and who is held responsible if AI is connected to a disaster. Think of it like a carmaker arguing it should not be sued for a crash if it followed certain rules, even if its product was involved. There is still no clear nationwide US law that answers these questions, so states are testing different ideas.
Source: Wired