Silicon Valley wants to deploy AI nursebots to handle your care

Medical startup Hippocratic AI and Nvidia say it's all about the chatbots' 'empathy inference.'

By Andrew Paul | Published Mar 19, 2024 2:30 PM EDT


Hippocratic AI is using Nvidia GPUs to power its nurse chatbot avatars. Nvidia / Hippocratic AI / YouTube

The medical startup Hippocratic AI and Nvidia have announced plans to deploy voice-based "AI healthcare agents." In demonstration videos provided Monday, at-home patients are depicted conversing with animated human avatar chatbots on tablet and smartphone screens. Examples include a post-op appendectomy screening, as well as a chatbot instructing someone on how to inject penicillin. Hippocratic's web page suggests providers could soon hire its nursebots for less than $9 an hour to handle such tasks, instead of paying an actual registered nurse the $90 an hour the company claims they cost. (The average pay for a registered nurse in the US is $38.74 an hour, according to the U.S. Bureau of Labor Statistics' 2022 occupational employment statistics survey.)
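For a rough sense of the gap those figures imply, here's a back-of-envelope comparison that simply takes the quoted hourly rates at face value; the 2,080-hour full-time work year is an illustrative assumption, not anything Hippocratic has published:

```python
# Back-of-envelope annual cost comparison using the hourly rates quoted above.
# The 2,080-hour full-time work year is an illustrative assumption, not a
# staffing model.
HOURS_PER_YEAR = 2080

hourly_rates = {
    "Hippocratic AI nursebot (advertised)": 9.00,
    "Registered nurse (Hippocratic's figure)": 90.00,
    "Registered nurse (2022 BLS average)": 38.74,
}

for label, rate in hourly_rates.items():
    annual_cost = rate * HOURS_PER_YEAR
    print(f"{label}: ${rate:.2f}/hr -> ${annual_cost:,.0f}/yr")
```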

A patient's trust in AI apparently comes down to a program's "seamless, personalized, and conversational" tone, said Munjal Shah, Hippocratic AI co-founder and CEO, in the company's March 18 statement. According to the company's internal research, people's ability to "emotionally connect" with an AI healthcare agent reportedly increases "by 5-10% or more" for every half-second shaved off its conversational response time, a capability Hippocratic calls its "empathy inference" engine. But quickly simulating all that worthwhile humanity requires a lot of computing power, hence Hippocratic's investment in countless Nvidia H100 Tensor Core GPUs.
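To put the company's claimed relationship in concrete terms, here's a small illustrative calculation. The latency numbers are hypothetical, the 5% figure is the low end of Hippocratic's quoted range, and reading the claim as additive per half-second is an assumption, not something the company has specified:

```python
# Illustrative only: applies Hippocratic's claimed "5-10% or more" gain per
# half-second of latency improvement to a hypothetical latency cut. The
# starting and ending latencies are made-up numbers, and the additive reading
# of the claim is one possible interpretation.
claimed_gain_per_half_second = 0.05   # low end of the company's quoted range
baseline_latency_s = 2.0              # hypothetical starting response time
improved_latency_s = 0.5              # hypothetical improved response time

half_second_steps = (baseline_latency_s - improved_latency_s) / 0.5
total_claimed_gain = half_second_steps * claimed_gain_per_half_second

print(f"{half_second_steps:.0f} half-second improvements")
print(f"Claimed gain in 'emotional connection': ~{total_claimed_gain:.0%}")
```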

"Voice-based digital agents powered by generative AI can usher in an age of abundance in healthcare, but only if the technology responds to patients as a human would," Kimberly Powell, Nvidia's VP of healthcare, said on Monday.

[Related: Will we ever be able to trust health advice from an AI?]

But an H100 GPU-fueled nurse-droid's capacity to spew medical advice nearly as fast as an overworked healthcare worker is only as good as its accuracy and bedside manner. Hippocratic says it has that covered too, of course, citing as proof internal surveys and beta testing in which more than 5,500 nurses and doctors voiced overwhelming satisfaction with the AI. When it comes to avoiding AI's (well documented) racial, gendered, and age-based biases, however, testing is apparently still underway. And as for where Hippocratic's LLM derived its diagnostic and conversational information, the company is even vaguer than it is about its mostly anonymous polled humans.

In the company's white paper detailing Polaris, its "Safety-focused LLM Constellation Architecture for Healthcare," Hippocratic AI researchers say their model is trained "on a massive collection of proprietary data including clinical care plans, healthcare regulatory documents, medical manuals, drug databases, and other high-quality medical reasoning documents." And that's about it for any info on that front. PopSci has reached out to Hippocratic for more specifics, as well as to ask whether patient medical info will be used in future training.

In the meantime, it’s currently unclear when healthcare companies (or, say, Amazon, for that matter) can “augment their human staff” with “empathy inference” AI nurses, as Hippocratic advertises. The company did note it’s already working with over 40 “beta partners” to test AI healthcare agents on a wide gamut of responsibilities, including chronic care management, wellness coaching, health risk assessments, pre-op outreach, and post-discharge follow-ups.

It's hard to envision a majority of people ever preferring to talk with uncanny chat avatars instead of trained, emotionally invested, properly compensated healthcare workers. But that's not necessarily the point here. The global nursing shortage remains dire, with recent estimates pointing to a shortage of 15 million health workers by 2030. Instead of addressing the working conditions and wage concerns that led unions representing roughly 32,000 nurses to strike in 2023, Hippocratic claims its supposedly cost-effective AI solution is the "only scalable way" to close that gap, a scalability that relies on Nvidia's H100 GPU.

The H100 is what helped make Nvidia one of the world's most valuable companies, with a multitrillion-dollar market cap, and the chips still power many large language model (LLM) AI supercomputer systems. That said, it's now technically Nvidia's third most-powerful offering, behind last year's GH200 Grace Hopper Superchip and the forthcoming Blackwell B200 GPU, revealed just a day earlier. Still, at roughly $30,000 to $40,000 per chip, the H100's price tag is reserved for the sorts of projects valued at half a billion dollars, projects like Hippocratic AI.

But before jumping at the potential savings an AI labor workaround could offer the healthcare industry, it's worth considering these bots' energy costs. For reference, a single H100 GPU consumes roughly as much power per day as the average American household.
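That comparison depends heavily on duty cycle and on whether cooling and host-system overhead are counted, but a rough sketch shows the two figures landing in the same ballpark. The 700 W figure is the H100 SXM module's published maximum board power, while the household figure and the overhead multiplier below are approximations:

```python
# Rough sketch of the power comparison above. 700 W is the published maximum
# board power of the H100 SXM module; 10,500 kWh/yr is an approximate figure
# for average US household electricity use; the 1.5x multiplier for cooling
# and host-system overhead is an illustrative assumption.
h100_board_power_kw = 0.7
overhead_multiplier = 1.5              # assumed datacenter/host overhead
household_kwh_per_year = 10_500        # approximate US average

gpu_kwh_per_day = h100_board_power_kw * overhead_multiplier * 24
household_kwh_per_day = household_kwh_per_year / 365

print(f"One H100, full load with assumed overhead: ~{gpu_kwh_per_day:.0f} kWh/day")
print(f"Average US household:                      ~{household_kwh_per_day:.0f} kWh/day")
```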

Source: Popular Science – https://www.popsci.com/technology/ai-nurse-chatbots-nvidia/
