
09/24/2025
Most healthcare providers have had the experience of re-educating patients who Googled their symptoms and arrived at their own (false) diagnosis. Now providers have AI chatbots to contend with as well, and these tools have been shown to be highly vulnerable to repeating and elaborating on medical misinformation.
Mount Sinai researchers found that popular AI chatbots like ChatGPT and DeepSeek R1 can generate convincing but false medical information when given even a single fabricated term in a prompt. While the study underscored the need for stronger safeguards, its lead author noted that generative AI, in the right hands, still holds major promise for reducing clinician workload and improving patient care.