Researchers from the University of Westminster, the Kinsey Institute at Indiana University and Positive East looked at resources from the UK’s National Health Service and the World Health Organization to develop their community-driven approach for increasing inclusivity, acceptability and engagement with artificial intelligence chatbots.
WHY IT MATTERS
Aiming to identify activities that reduce bias in conversational AI and make its design and implementation more equitable, the researchers examined several frameworks for evaluating and implementing new healthcare technologies, including the Consolidated Framework for Implementation Research, updated in 2022.
Finding that those frameworks lacked guidance for the challenges unique to conversational AI, such as data security and governance, ethical concerns and the need for diverse training datasets, they performed a content analysis, drafted a conceptual framework and then consulted stakeholders.
The researchers interviewed 33 key stakeholders from diverse backgrounds, including 10 community members as well as doctors, developers and mental health nurses with expertise in reproductive health, sexual health, AI and robotics, and clinical safety.
Using the framework method to analyze the qualitative interview data, they developed their 10-step roadmap, “Achieving health equity through conversational AI: A roadmap for design and implementation of inclusive chatbots in healthcare,” published Thursday in PLOS Digital Health.
The report guides 10 stages of AI chatbot development, beginning with concept and planning, moving through safety measures, preliminary testing, governance for healthcare integration, and auditing and maintenance, and ending with termination.
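The paper defines all 10 stages in detail; as a minimal sketch of how a development team might encode such a lifecycle as a working checklist, the snippet below includes only the stages named in this article, with illustrative Python names that are not the paper’s own.

```python
from enum import Enum, auto

class ChatbotLifecycleStage(Enum):
    """Illustrative subset of the roadmap's stages named in this article;
    the published paper defines ten stages in full."""
    CONCEPT_AND_PLANNING = auto()
    SAFETY_MEASURES = auto()
    PRELIMINARY_TESTING = auto()
    HEALTHCARE_INTEGRATION_GOVERNANCE = auto()
    AUDITING_AND_MAINTENANCE = auto()
    TERMINATION = auto()

def next_stage(current: ChatbotLifecycleStage) -> ChatbotLifecycleStage | None:
    """Advance the project to its next stage, or return None once retired."""
    stages = list(ChatbotLifecycleStage)
    idx = stages.index(current)
    return stages[idx + 1] if idx + 1 < len(stages) else None
```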
The inclusive approach, according to Dr. Tomasz Nadarzynski, who led the study at the University of Westminster, is crucial for mitigating biases, fostering trust and maximizing outcomes for marginalized populations.
“The development of AI tools must go beyond just ensuring effectiveness and safety standards,” he said in a statement.
“Conversational AI should be designed to address specific illnesses or conditions that disproportionately affect minoritized populations due to factors such as age, ethnicity, religion, sex, gender identity, sexual orientation, socioeconomic status or disability,” the researchers said.
Stakeholders stressed the importance of identifying, from the outset, the public health disparities that conversational AI can help mitigate, as part of initial needs assessments performed before the tools are created.
“Designers should define and set behavioral and health outcomes that conversational AI is aiming to influence or change,” according to researchers.
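The study does not prescribe a format for those outcomes, but one way a design team might make them concrete is to record measurable targets up front. The schema and figures below are purely illustrative assumptions, not the researchers’ recommendations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OutcomeTarget:
    """A measurable behavioral or health outcome the chatbot is meant to
    influence (hypothetical schema, not from the study)."""
    name: str
    baseline: float
    target: float
    unit: str

# Hypothetical example: raising screening uptake in an underserved group.
screening_uptake = OutcomeTarget(
    name="HIV testing uptake among eligible users",
    baseline=0.22,
    target=0.35,
    unit="proportion tested per year",
)
```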
Stakeholders also said that conversational AI chatbots should be integrated into healthcare settings, designed with input from the diverse communities they intend to serve and made highly visible. The tools should deliver accurate answers with appropriate confidence, protect data safety and be tested by patient groups and diverse communities.
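The article doesn’t say how a deployed chatbot should act on confidence, but a common pattern consistent with that advice is to gate answers on a model confidence score and defer uncertain queries to a human. The threshold and function below are illustrative assumptions, not the study’s specification.

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff, not a value from the study

def respond(answer: str, confidence: float) -> str:
    """Return the chatbot's answer only when confidence is high enough;
    otherwise defer to a human, keeping a person 'at the end somewhere'."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return ("I'm not confident enough to answer that safely. "
            "Let me connect you with a member of the care team.")
```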
Health AI chatbots should also be regularly updated with the latest clinical, medical and technical advancements, monitored with user feedback incorporated, and evaluated for their impact on healthcare services and staff workloads, according to the study.
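As a rough sketch of what that monitoring loop could look like in practice, the snippet below logs user ratings alongside each exchange so poorly rated interactions can be flagged for human review during audits; the schema and names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    """One user rating of a chatbot exchange (hypothetical schema)."""
    session_id: str
    question: str
    answer: str
    rating: int  # e.g. 1 (unhelpful) to 5 (helpful)
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def flag_for_review(records: list[FeedbackRecord],
                    cutoff: int = 2) -> list[FeedbackRecord]:
    """Surface low-rated exchanges for clinical audit."""
    return [r for r in records if r.rating <= cutoff]
```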
Stakeholders also said that chatbots used to expand healthcare access must be integrated into existing care pathways, “not be designed to function as a standalone service,” and may require tailoring to align with local needs.
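A minimal sketch of what integration into existing care pathways could mean in code: rather than resolving every request itself, the chatbot maps a triaged intent to a referral into an existing local service. The intents and pathway names here are invented for illustration.

```python
# Hypothetical mapping from a triaged user intent to an existing care
# pathway; a real deployment would draw on the local service directory.
CARE_PATHWAYS = {
    "sexual_health_screening": "local sexual health clinic booking",
    "mental_health_support": "talking-therapies referral",
    "urgent_symptoms": "NHS 111 or emergency services",
}

def hand_off(intent: str) -> str:
    """Route the user into an existing service rather than acting as a
    standalone endpoint, as the stakeholders recommend."""
    pathway = CARE_PATHWAYS.get(intent)
    if pathway is None:
        return "Connecting you with a member of staff for further help."
    return f"Referring you to: {pathway}"
```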
THE LARGER TREND
Cost-saving AI chatbots in healthcare have long been predicted to be a crawl-walk-run endeavor, with simpler tasks moving to chatbots first as the technology matures enough to handle more complex ones.
Since ChatGPT made conversational AI available to every sector at the end of 2022, healthcare IT developers have ramped up testing it to surface information, improve communications and make short work of administrative tasks.
Last year, UNC Health piloted an internal generative AI chatbot tool with a small group of clinicians and administrators to enable staff to spend more time with patients and less time in front of a computer. Many other provider organizations now use generative AI in their operations.
AI is being used in patient scheduling and with patients post-discharge to help reduce hospital readmissions and drive down social health inequalities.
But trust is critical for AI chatbots in healthcare, according to healthcare leaders, and the tools must be scrupulously developed.
“You have to have a human at the end somewhere,” said Kathleen Mazza, clinical informatics consultant at Northwell Health, during a panel session at the HIMSS24 Virtual Care Forum.
“You’re not selling shoes to people online. This is healthcare.”
ON THE RECORD
“We have a responsibility to harness the power of ‘AI for good’ and direct it towards addressing pressing societal challenges like health inequities,” Nadarzynski said in a statement.
“To do this, we need a paradigm shift in how AI is created – one that emphasizes co-production with diverse communities throughout the entire lifecycle, from design to deployment.”
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.