Ontario’s auditor general tested 20 approved AI medical note tools and found errors, missing details, and made-up information that could affect care.
In short: An Ontario audit found that government-approved AI tools that write doctors’ notes often produced incorrect, incomplete, or made-up information.
Ontario’s auditor general reviewed “AI medical scribes”: tools that listen to a doctor–patient conversation and then draft a medical note (like a fast assistant writing the visit summary). The audit ran two simulated doctor visits through tools from 20 vendors that the provincial government had approved for health care providers to buy.
All 20 tools had accuracy or completeness problems in at least one test. Nine made up patient details, which the report describes as “hallucinated” information (when an AI confidently invents something that was not said). Twelve recorded information incorrectly, and 17 missed key details about mental health issues discussed in the conversations.
The report gave examples of mistakes that could affect care: notes that included made-up referrals for blood tests or therapy, notes that got prescription medication names wrong, and notes that left out important mental health information.
The audit also criticized how these tools were approved. Across vendors, the average score for the “accuracy of medical notes generated” section was 12 out of 20. But accuracy counted for only about 4 percent of the overall score, meaning a tool could still pass even with a very low accuracy score.
Doctors’ notes guide what happens next, like a shared checklist for future visits. If an AI note adds a referral that never happened, or changes a medication name, it can confuse patients and clinicians and could lead to the wrong follow-up care. The auditor general recommended that systems require doctors to confirm they reviewed AI-written notes before they are saved to a patient’s record.
Source: Ars Technica