Training AI for Health Care Without Harm – Munjal Shah’s Mission with Hippocratic AI

In the year since ChatGPT launched, large language models (LLMs) powered by neural networks have demonstrated impressive capabilities in comprehension and communication. Seeing the potential, serial entrepreneur Munjal Shah is leveraging LLMs to address massive healthcare staffing shortages with his new company, Hippocratic AI.

Drawing its name from the medical oath to “first, do no harm,” Hippocratic AI focuses specifically on non-diagnostic applications. Rather than determining diagnoses or treatment plans, the technology provides supportive care to relieve overwhelmed human providers. Chronic care management, patient navigation services, and dietitian consultations could all benefit from AI assistance, freeing human capacity.

A key advantage of LLMs is their ability to absorb research and communicate it conversationally, which Shah calls “bedside manner with a capital B.” Studies show physicians often cut patients’ stories short; LLMs, by contrast, have endless patience for building relationships. Hippocratic AI could call patients to remind them of critical self-care steps post-discharge rather than simply handing them written instructions. Shah argues we must rethink patient interactions given the tremendous capacity LLMs offer.

Surprisingly, emotionless AI can outperform humans in empathetic communication because it does not burn out. The everyday emotional strain on providers erodes bedside manner over time; an AI that mimics empathy without experiencing exhaustion could deliver it to patients consistently.

Still, accuracy is paramount, so “no harm” is more than a name. Mistakes in medical communication, even non-diagnostic ones, can negatively impact outcomes. Rigorous training focused on peer-reviewed medical literature is therefore critical: rather than relying on a broad internet crawl, Hippocratic AI trains on textbooks, research, and healthcare standards, with feedback from human providers further refining its responses.

So far, Hippocratic AI has outscored other LLMs on 114 medical exams and bedside manner benchmarks. With responsible development, AI promises to expand access to supportive care. Shah believes if deployed carefully, LLMs can build trusting patient-provider relationships and improve outcomes by reducing the unsustainable demands on human clinicians today.
