Barry Diller defended OpenAI CEO Sam Altman but warned that more powerful AI could bring surprises and needs clear guardrails.
In short: Barry Diller said he trusts OpenAI CEO Sam Altman, but he believes trust matters less than preparing for unpredictable risks as more powerful AI gets closer.
Barry Diller, a longtime media executive and chairman of IAC and Expedia Group, spoke this week at The Wall Street Journal’s “Future of Everything” conference. He was asked whether people should put their faith in Sam Altman to make sure artificial intelligence helps humanity.
Diller defended Altman’s character. He said Altman is sincere, “a decent person with good values,” and not untrustworthy, even as some former colleagues and board members have accused Altman of being manipulative or deceptive.
But Diller said the bigger issue is not whether the leaders are trustworthy. He said the bigger issue is that even the people building advanced AI can be surprised by what it does. He described AI progress as “the great unknown,” saying, “We don’t know. They don’t know.”
Diller focused on Artificial General Intelligence, or AGI, which is a theoretical kind of AI that could one day outperform humans at almost any task (like having a single “super worker” that can do every job better than a person). He said we are not at AGI yet, but he believes we are getting closer.
He argued that society needs “guardrails,” meaning clear rules and limits, before something like AGI arrives. He warned that if humans do not set those guardrails, an “AGI force” might set its own, and “there’s no going back,” he said.
Diller’s comments reflect a common worry about powerful AI: that good intentions may not be enough if the tools behave in unexpected ways. For regular people, “guardrails” can mean clearer safety rules, oversight, and accountability before AI is used in more parts of daily life.
Source: TechCrunch AI