03/03/2026
Do you ask ChatGPT medical questions? (If you do, you're not alone: 230 million people ask ChatGPT health questions every week.) But please rethink that...
AI can provide accurate information, but ONLY when the sources it draws on are limited to reliable ones. ChatGPT, on the other hand, uses everything it can access to find the string of words it "thinks" you want to hear. That may include Sally's Instagram channel or Marty's website on "Remedies you can find in your cleaning cabinet."
AI is NOT compiling research and summarizing the results; it is making language predictions only. ChatGPT Health came out earlier this year, but think of it more as a "personal health organizer." It uses access to your medical records and health apps like MyFitnessPal or Peloton to synthesize that information into an organized set of questions to discuss with your healthcare provider. Again, it is NOT a medical expert. (I'm setting aside the security issues of sharing your medical records for now.)
----A new study in NATURE MEDICINE shows how well it can TRIAGE:
Researchers published a study last week in Nature Medicine testing ChatGPT Health, a new consumer health AI tool, on a basic but critical task: triage. Given a set of symptoms, could it correctly tell you whether to stay home, schedule a routine appointment, be seen urgently, or go to the emergency department (ED)?
The results were mixed, and concerning at the extremes:
For people who didn't need a doctor at all, it sent them to one 65% of the time, a waste of time and money.
For routine visits, it correctly recommended seeing a doctor 95% of the time.
For people who needed emergency care, it recommended the ED only about half the time. It handled classic emergencies well, like allergic reactions or stroke, but struggled to recognize how sick someone was about to become, such as in the early stages of a diabetic complication.
More detailed medical data improved accuracy, but adding irrelevant information confused it. For example, when normal lab results were included alongside a note that a patient was suicidal, ChatGPT got it wrong. That’s obviously deeply concerning.
What this means for you: For straightforward health questions, AI tools can genuinely help and can supplement a visit with a clinician. Adding more detail improves accuracy, but proceed with caution, and certainly do not use it for emergency health issues.