11/06/2024
There has been a lot of discussion lately about AI, particularly LLMs, in healthcare. We're often led to believe there's nothing they can't do. However, many of our friends remain skeptical, raising valid points such as "They're not always good with calculations" or "They're probability machines and can't make medical decisions!" We acknowledge these concerns and agree that we're still some way off from allowing AI to make medical decisions or provide medical advice autonomously, without human oversight. Nonetheless, this doesn't mean that they aren't already incredibly useful.
LLMs excel at summarizing and presenting information in a way that is easy for humans to digest, which is a significant need in healthcare. They also support tool use, which lets us delegate the tasks they handle less reliably, such as solving complex math problems or tailoring medical data to specific needs and requirements.
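To make the tool-use idea concrete, here is a minimal sketch of the kind of function one might hand to an LLM instead of letting it do the arithmetic itself. The metric (time in range, the share of glucose readings within a target window) and the function name are illustrative, not taken from any particular prototype; the default thresholds of 70–180 mg/dL are the commonly used target range.

```python
def time_in_range(glucose_mgdl, low=70, high=180):
    """Share of glucose readings inside the target range [low, high] mg/dL.

    A deterministic helper: the LLM calls this tool with the readings
    (or a reference to them) rather than estimating the ratio itself.
    """
    if not glucose_mgdl:
        raise ValueError("no readings provided")
    in_range = sum(low <= g <= high for g in glucose_mgdl)
    return in_range / len(glucose_mgdl)

# Example: four of six readings fall inside 70-180 mg/dL.
readings = [95, 110, 250, 160, 65, 140]
tir = time_in_range(readings)  # 4/6 ≈ 0.667
```

Because the calculation is done in ordinary code, the metric definition is explicit and auditable, and the model only has to report the result.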
Over the past few days, we've experimented with our own diabetes dataset (it's Lukas Schuster's data) and developed a prototype that we believe extends the capabilities of LLMs in diabetes care. By augmenting LLMs with tools that access data in a privacy-safe manner, and that guarantee accurate calculations and precise metric definitions, we can achieve impressive results.
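One way to read "privacy-safe tool access" is that the model never sees raw patient records, only the aggregates a tool returns. The sketch below illustrates that boundary with a tiny hypothetical tool registry; the names, the single `mean_glucose` tool, and the JSON call format are assumptions for illustration, not the actual prototype's interface.

```python
import json

def mean_glucose(readings):
    """Mean glucose in mg/dL over the supplied readings."""
    return sum(readings) / len(readings)

# Hypothetical registry: tools the model is allowed to invoke.
TOOLS = {"mean_glucose": mean_glucose}

def dispatch(tool_call_json, readings):
    """Run a model-issued tool call against data the model never sees.

    Raw readings stay on this side of the boundary; only the rounded
    aggregate is returned to the model for use in its answer.
    """
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return round(fn(readings), 1)

# The model emits a tool call; the host executes it locally.
result = dispatch('{"name": "mean_glucose"}', [100, 120, 140])  # 120.0
```

The design choice here is that precision and privacy live in the tool layer, while the LLM supplies the conversational layer on top.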
We are excited to extend their use and make data more accessible. By leveraging their strengths, augmenting them with context, and incorporating useful tools, we can place greater trust in their outputs.