Training AI for Health Care with a Focus on ‘Do No Harm’

In November 2022, OpenAI launched ChatGPT, showcasing the capabilities of large language models (LLMs) powered by neural networks. The rapid improvements in this artificial intelligence technology have entrepreneurs dreaming up new applications, including Munjal Shah and his health care startup Hippocratic AI. Shah aims to leverage LLMs to help address staffing shortages among health care workers while focusing strictly on non-diagnostic applications to avoid potential patient harm.

LLMs’ ability to absorb research and communicate effectively makes them well-suited for specific healthcare roles. For example, they could provide chronic care support, patient navigation services, dietitian guidance, and other crucial but non-clinical services. With infinite time and patience, an AI assistant could develop solid relationships by letting patients fully explain their situations without interruption, following up with helpful reminders, and answering questions – thus demonstrating “bedside manner.”

Of course, accuracy and proper training are still essential to ensure AI causes no harm. Hippocratic AI trains its LLM on peer-reviewed medical literature and clinical guidelines rather than the broader internet content used by general chatbots like ChatGPT. The system is also refined by medical professionals through reinforcement learning from human feedback. So far, testing shows Hippocratic AI outperforming GPT-4 on most medical certification exams, suggesting promising capability.
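The core idea behind reinforcement learning from human feedback is that clinician preferences become a reward signal that steers the model toward safer answers. The toy sketch below illustrates only that selection step; the candidate responses, rating values, and function names are all hypothetical, and a real pipeline would train a neural reward model on large sets of preference labels rather than use a lookup table.

```python
# Illustrative sketch of the RLHF idea: clinician ratings act as a
# reward signal that favors safer, more specific responses.
# All data here is invented for illustration.

def reward_model(response: str, clinician_ratings: dict) -> float:
    """Average clinician rating for a candidate response.

    A production system would use a learned model that generalizes
    from pairwise preference labels, not a lookup table.
    """
    ratings = clinician_ratings.get(response, [0.0])
    return sum(ratings) / len(ratings)

def pick_best(candidates: list, clinician_ratings: dict) -> str:
    """Select the candidate the reward model scores highest.

    Policy optimization (e.g., PPO) would instead update the LLM so
    that high-reward responses become more likely over time.
    """
    return max(candidates, key=lambda r: reward_model(r, clinician_ratings))

# Hypothetical candidates for a chronic-care medication prompt.
candidates = [
    "Take your medication whenever you remember.",
    "Your care plan lists your medication at 8 a.m. daily; shall I set a reminder?",
]
ratings = {
    candidates[0]: [2.0, 1.5],   # clinicians flag vague guidance
    candidates[1]: [4.5, 5.0],   # specific, plan-grounded response
}
print(pick_best(candidates, ratings))
```

In this sketch the specific, plan-grounded response wins because clinicians rated it higher, which is the mechanism the article describes: medical professionals, not internet text, shape what the model says.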

Maintaining Focus on the Human Element

While optimistic about AI’s potential, Munjal Shah maintains a cautious and ethical approach. Hippocratic AI aims only to fill non-diagnostic roles where human healthcare workers are overburdened, and patients are underserved. It is not meant to replace human clinical judgment. Shah believes AI is ready for some tasks but not others. For example, chronic care support seems well suited for AI assistance. In contrast, diagnosing illness or creating specialized treatment plans should remain solely the province of human experts.

Partnership Over Replacement

Rather than framing AI as competing with doctors and nurses, Shah advocates viewing it as a resource to make their jobs more manageable. Offloading time-consuming administrative tasks and fundamental patient interactions to AI assistants could free human healthcare workers to focus on the interpersonal and diagnostic aspects of care best suited to their training and abilities. AI might handle routine check-ins, reminders, and fundamental questions, enabling clinicians to prioritize critical thinking and compassion.

Training AI Ethically

While Hippocratic AI’s early testing results seem promising, Shah stresses that pushing AI too fast or without enough medical oversight could put patients at risk. He argues for an ethical, transparent development process grounded in evidence-based practice guidelines. AI should enhance health care without being positioned to make unsupported diagnostic or treatment recommendations. Shah also advocates developing industry standards and best practices as more startups begin exploring healthcare applications for AI.

The founder and CEO brings his dual expertise to the challenge, with a background in medical tech and in running AI companies. This allows Hippocratic AI to tap the strengths of LLMs while designing guardrails to prevent patient harm. Whether AI will live up to its lofty health care potential remains uncertain. But under Munjal Shah’s leadership, Hippocratic AI aims to expand access and quality of care while heeding the ancient maxim of “first, do no harm.”