New York assembly member Alex Bores, who helped pass the RAISE AI law, says a tech-funded super PAC is spending heavily to block his run for Congress.
In short: A super PAC funded by prominent tech figures is campaigning against New York assembly member Alex Bores after he helped pass a tough AI safety law.
New York State Assembly member Alex Bores, a Democrat and former Palantir employee with a graduate degree in computer science, is running for Congress in New York’s 12th District.
Bores helped pass New York’s RAISE Act, which became law in 2025. The law applies only to the largest AI companies, based on how advanced their systems are and how much revenue they earn. It requires those companies to publish safety plans for their AI models (the “model” being the AI system itself, like the engine inside an app), report serious safety incidents to the state, and follow their own stated rules.
Bores’ support for stricter AI rules has drawn opposition from a super PAC called Leading the Future. A super PAC is a political fundraising group that can spend large amounts on ads, texts, and mailers. WIRED reports the group is backed by people including OpenAI cofounder Greg Brockman, Palantir cofounder Joe Lonsdale, and venture firm Andreessen Horowitz.
Bores told WIRED the attacks include repeated text messages and printed mailers sent to voters, including to him personally. The super PAC has argued that laws like the RAISE Act could limit AI-related jobs and innovation.
With few AI rules at the national level, states are trying their own approaches. This race is an early test of whether tech industry donors can use heavy political spending to discourage lawmakers from pursuing stricter AI safety rules.
Source: WIRED