HealthcareNews

Medical AI chatbots confidently give dangerously wrong health advice in tests

·2 min read

Medical AI chatbots from Microsoft, Google, OpenAI, and Anthropic gave dangerously inaccurate health advice in recent tests. A Nature Medicine study found that users consulting AI correctly identified medical conditions only 33% of the time and made correct treatment decisions just 43% of the time. Separate research found that five major free AI models all recommend meal plans for teenagers with calorie levels low enough to stunt growth.

Why it matters

Over 40 million people now consult ChatGPT daily for health information, according to OpenAI, while public trust in federal health agencies dropped 5-7% in the past year, per Annenberg polling. Enterprise health AI deployments carry significant liability risk given that 63% of users already consider AI-generated health information reliable despite documented accuracy problems. Microsoft's new Copilot Health tool, which combines medical records with fitness data, raises both privacy exposure and the potential harm from AI hallucinations.

What to do

Immediately audit any health-related AI implementations for liability exposure, and establish clear disclaimers that AI cannot replace licensed medical professionals. Implement human-in-the-loop review for all AI-generated health guidance, and restrict employee health-data sharing with third-party AI tools until your legal team has reviewed the privacy implications.
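The human-in-the-loop step above can be sketched as a simple review gate: AI output is held in a pending state, and nothing is released to users until a licensed reviewer approves it, with a disclaimer appended on release. This is a minimal illustrative sketch, not a reference implementation; all class and function names (`ReviewQueue`, `HealthGuidance`, `release`) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


# Disclaimer text is illustrative; actual wording should come from legal review.
DISCLAIMER = ("This information is AI-generated and is not a substitute "
              "for advice from a licensed medical professional.")


@dataclass
class HealthGuidance:
    """A piece of AI-generated guidance, held until a clinician reviews it."""
    text: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer: Optional[str] = None


class ReviewQueue:
    """Gate that blocks release of AI health guidance until human approval."""

    def __init__(self) -> None:
        self._items: list[HealthGuidance] = []

    def submit(self, text: str) -> HealthGuidance:
        """Accept AI output into the queue in a PENDING state."""
        item = HealthGuidance(text=text)
        self._items.append(item)
        return item

    def review(self, item: HealthGuidance, reviewer: str, approve: bool) -> None:
        """Record a licensed reviewer's decision on a pending item."""
        item.reviewer = reviewer
        item.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED

    def release(self, item: HealthGuidance) -> str:
        """Return approved guidance with the disclaimer; refuse anything else."""
        if item.status is not ReviewStatus.APPROVED:
            raise PermissionError("Guidance not approved by a licensed reviewer.")
        return f"{item.text}\n\n{DISCLAIMER}"
```

The key design choice is that `release` is the only path to the user and it fails closed: unreviewed or rejected guidance raises an error rather than passing through silently.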
