In short: AI companies are investing heavily in growth, but regulators and the public are pushing harder for fixes to bias, privacy risks, and child safety.
Public concerns about AI systems are piling up. People worry about unfair results, like biased decisions in hiring, loans, housing, and healthcare. They also worry about AI tools producing harmful or misleading answers for children, and about deepfakes (fake photos, videos, or audio that look real) being used for scams.
Governments are responding with tougher rules and more enforcement. In the EU, the AI Act's second phase starts in August 2026 and adds requirements for "high-risk" AI systems, meaning systems used in important areas like jobs or housing. In the US, states are moving too, including California rules that require notice before using automated tools for major decisions and Colorado's AI Act, which requires checks for discrimination.
Enforcement is rising as well. In December 2025, 42 US state attorneys general told AI companies to add child safeguards, warning that current products could break existing laws. Some actions have come only after complaints or lawsuits, like Pennsylvania’s 2025 settlement tied to housing repairs delayed by AI.
AI companies are still planning very large spending on data centers and computing in 2026, while trust is not rising at the same pace. Watch whether companies build protections before problems happen, or mostly after regulators step in. Also watch a federal push to limit state-by-state AI rules, which could reduce pressure on companies in some areas.
Source: Financial Times