A Financial Times columnist argues governments are moving too slowly to regulate AI for safety, leaving big labs to shape outcomes.
In short: A Financial Times columnist argues that, at least in the near term, most countries are unlikely to regulate AI primarily for safety reasons.
A new Financial Times column says AI is moving faster than governments can respond. The writer argues that, in practice, a small number of AI labs will keep making the biggest decisions about how the technology is built and used.
The column points to past examples where regulation came late. It compares AI to problems like carbon dioxide emissions and smoking, where harms were known for a long time before strong rules arrived. It also notes that the US, where much AI development happens, has not passed a major federal AI law.
The piece says public pressure is also limited. Many people use AI as a consumer product but spend little time thinking about its wider effects on jobs, politics, or safety. It also claims AI is hard for outsiders to judge because many systems are "closed-source" (meaning the code is not shared publicly, like a locked appliance you cannot open), and even some builders say they do not fully understand how their models reach answers.
The column suggests the biggest near-term action will come from a few places, such as the EU’s AI Act, and from company policies rather than broad global rules. For regular people, the practical question is which AI tools get released, who gets access, and what safeguards are built in, since those choices may matter more than new laws for now.
Source: Financial Times