Ben Riley says AI chatbots can confidently repeat wrong medical claims after his father used an AI tool to question his oncologist’s plan.
In short: An education researcher who has long criticized AI chatbots says a personal family experience showed how AI can reinforce dangerous misunderstandings, especially in medical decisions.
Ben Riley, an education researcher and the founder of Deans for Impact, has spent years warning that AI chatbots have limits in classrooms. He has argued that learning usually requires effort, and that tools promising easy shortcuts can get in the way.
Riley also says many people trust chatbot answers too quickly. He points out that large language models, the type of AI behind many chatbots, mainly work by predicting the next word in a sentence. They can sound confident even when they are wrong, like a student who writes a smooth-sounding book report without understanding the book.
Those concerns became personal when Riley’s father, who had lung cancer, kidney disease, and chronic lymphocytic leukemia (CLL), used the AI search tool Perplexity to self-diagnose. The tool’s response led him to believe he had Richter’s transformation, a rare complication of CLL. Based on that output, he refused the treatment his oncologist recommended, a venetoclax and obinutuzumab regimen known as Ven-Obi, and sent the AI report to his doctor instead.
Riley then contacted researchers whose studies the AI had cited. According to the Times report, two of those researchers, both doctors, said the AI had misstated the conclusions of their work and that Riley’s father should follow his oncologist’s plan.
Riley says he does not believe AI directly caused his father’s death. But he argues that AI can affirm or amplify a person’s fears, especially when someone is vulnerable or worried about treatment. A key question going forward is how AI tools will prevent medical-sounding answers from being taken as medical advice.
Source: NYTimes