LinkedIn cofounder Reid Hoffman says doctors who do not use top AI chatbots as a second opinion are close to committing malpractice, despite research warning that the tools can give wrong advice.
In short: LinkedIn cofounder Reid Hoffman said doctors who do not use advanced AI chatbots as a second opinion are “bordering on committing malpractice.”
Reid Hoffman, who cofounded LinkedIn and has served on boards including OpenAI's, spoke at WIRED Health in London on April 16. He argued that "frontier models," the most advanced general-purpose AI systems from companies such as OpenAI and Anthropic, are useful in health care.
Hoffman said these chatbots can act like an extra set of eyes. He compared their value to getting a second opinion from another doctor, since they can quickly scan and summarize large amounts of written information (like having a very fast research assistant).
He also said he uses these tools for his own health questions, and that his concierge doctors do too. Hoffman suggested the UK’s National Health Service could eventually offer a free “medical assistant” on every smartphone to help with early triage, which means sorting who needs what kind of care first.
Hoffman is also building a health-related startup called Manas AI. The company is working on using AI to speed up drug discovery for cancers, the long process of finding and testing new medicines. He said human experts still review the AI's suggestions and discard ideas that do not make sense.
Many people already ask chatbots about symptoms, test results, and treatments. But a recent study found that large language models can give inaccurate and inconsistent medical advice. Hoffman's comments highlight a growing debate: should AI be used inside health care systems as a safety check, or treated as too risky when lives are involved?
Source: Wired