01/10/2026
Increasingly, patients are arriving at clinic having already queried an AI or large language model about their diagnosis. These tools are improving rapidly. Compared with even a year ago, they are far more capable and, in many cases, genuinely helpful in explaining medical concepts, summarizing evidence, and framing questions to ask their clinicians.
This trend will only accelerate. The models will continue to improve, and it is widely expected that within a few years they will surpass any individual clinician in breadth of medical knowledge. đź’Ą
That said, important caveats remain. Most publicly available AI tools are not HIPAA-compliant, meaning personal health information may not be protected. And despite their sophistication, they can still generate incorrect or fabricated information (“hallucinations”).
Some practical ways patients can use these tools more safely and effectively:
📍Use AI to learn general concepts, not to replace personalized medical advice or to make treatment decisions on your own.
📍Avoid entering identifiable health information (names, dates of birth, medical record numbers, full reports).
📍Ask AI to explain terms, tests, or diagnoses in plain language, or to summarize publicly available guidelines or studies.
📍Use it to prepare better questions for your physician or to help organize thoughts before an appointment.
📍Treat AI outputs as starting points, not conclusions—always verify important information with your care team.
📍Be especially cautious with recommendations involving medications, supplements, dosing, or treatment changes.
Used thoughtfully, these tools can be powerful, helping patients become more informed and engaged. But they work best when paired with clinical judgment, real-world context, and a strong practitioner-patient relationship.
We asked experts about the potential risks and benefits of turning over your health data to an AI tool.