More people are turning to mental health AI chatbots. What could go wrong?

In 2022, Estelle Smith, a computer science researcher, was frequently combating intrusive thoughts. Her professional therapist wasn’t the right fit, she felt, and couldn’t help her. So she turned to a mental health chatbot called Woebot. 

Woebot refused to address Smith’s direct suicidal prompts and asked her to seek professional help. However, when she texted it a real thought she often battles as an avid rock climber, that of climbing a cliff and jumping off, the bot encouraged her to do it and said it was “wonderful” that she was taking care of her mental and physical health. 

“I wonder what might have happened,” Smith told National Geographic, “if I had been standing on a cliff at that exact moment when I got the response.” 

Mental health chatbots are far from a new phenomenon. More than half a century ago, MIT computer scientist Joseph Weizenbaum built a crude program called ELIZA that could respond like a Rogerian psychotherapist. Since then, efforts to develop digital therapy alternatives have only accelerated, and for good reason. The WHO estimates there is a global median of just 13 mental health workers per 100,000 people, and the Covid-19 pandemic set off a fresh crisis, triggering tens of millions of additional cases of depression and anxiety. 

In the US alone, over half of adults suffering from a mental illness do not receive treatment, and most cite cost and stigma as their top obstacles. Could virtual solutions, which are affordable and available 24/7, help overcome those barriers? 

Chatbots replace talk therapy

The accessibility and scalability of digital platforms can significantly lower barriers to mental health care and make it available to a broader population, said Nicholas Jacobson, a researcher at Dartmouth College who studies how technology can enhance the assessment and treatment of anxiety and depression. 

Swept up by the wave of generative AI, tech companies have been quick to capitalize. Scores of new apps, such as the WHO’s “digital health worker” Sarah, offer automated counseling, letting people engage in cognitive behavioral therapy sessions—a psychotherapeutic treatment proven to help users identify and change negative thought patterns—with an AI chatbot.

The arrival of AI, Jacobson adds, will enable adaptive interventions and allow healthcare providers to continuously monitor patients, anticipate when someone may need support, and deliver treatments to alleviate symptoms. 

The evidence isn’t just anecdotal, either: a systematic review of mental health chatbots found that AI chatbots can dramatically reduce symptoms of depression and distress, at least in the short term. Another study used AI to analyze more than 20 million text conversations from real counseling sessions and successfully predicted both patient satisfaction and clinical outcomes. Other studies have detected early signs of major depressive disorder from unguarded facial expressions captured during routine phone unlocks and from people’s typing patterns. 

Most recently, Northwestern University researchers devised a way to identify suicidal thoughts and behaviors without psychiatric records or neural measures. Trained on data from 4,019 participants, their AI model accurately estimated the likelihood of self-harm in 92 out of 100 cases, relying only on simple questionnaire responses and behavioral signals, such as ranking a random sequence of pictures on a seven-point like-to-dislike scale. 

Two of the study’s authors, Aggelos Katsaggelos and Shamal Lalvani, expect specialists to use the model for support once it clears clinical trials, for instance to schedule patients according to perceived urgency, and eventually to roll it out to the public for at-home use. 

But as Smith’s experience shows, experts urge caution against treating tech solutions as a panacea, because they lack the skill, training, and experience of human therapists. That is especially true of generative AI, which can behave unpredictably, make up information, and regurgitate biases. 

Where artificial intelligence falls short 

When Richard Lewis, a Bristol-based counselor and psychotherapist, tried Woebot (a popular script-based mental health chatbot that can only be accessed via a partner healthcare provider) to work through a topic he was also exploring with his own therapist, the bot failed to pick up on the issue’s nuances. It suggested he “stick to the facts,” stripped the emotional content from his replies, and advised him to reframe his negative thoughts as positive ones. 

“As a therapist,” Lewis said, correcting or erasing emotions is the “last thing I would want a client to feel and the last thing I would ever suggest.”

“Our job is to form a relationship that can hold difficult emotions,” Lewis added, “and feelings for our clients to make it easier for them to explore, integrate, or find meaning in them and ultimately know themselves better.”

I had a similar experience with Earkick, a freemium generative AI chatbot that claims to “improve your mental health in real-time” and has “tens of thousands” of active users. When I told it I was feeling overwhelmed by mounting deadlines, it was quick to suggest solutions such as taking up a hobby. 

Earkick’s co-founder and COO, Karin Stephan, said the app is not trying to compete with humans but instead wants to serve people in a way that makes them more likely to accept help. 

How bots and people can work together

Most therapists agree AI apps can be an ideal first step in a person’s mental health journey. The problem arises when they are treated as the only solution. While people like Smith and Lewis had existing human support systems, the consequences can be dire when someone relies exclusively on an AI chatbot. Last year, a Belgian man died by suicide after a chatbot encouraged him to do so. Similarly, the National Eating Disorders Association (NEDA) suspended its eating disorder chatbot, Tessa, after it dispensed harmful dieting advice. 

Ellen Fitzsimmons-Craft, a psychologist and professor who helped develop Tessa, agrees that AI tools could make the idea of mental health care less scary, but adds that they must be made safe, held to high standards, and regulated. They shouldn’t be trained, as ChatGPT was, on the whole internet, where plenty of bad advice exists, she said. Studies have found AI chatbots that not only regurgitated racist medical tropes but also failed to work altogether for some groups, such as Black Americans. 

Until tech companies overcome these concerns, said Rob Morris, the co-founder of Koko Cares, which offers free mental health resources and peer support, AI’s best near-term use cases will be for administrative purposes like insurance and billing, ultimately allowing therapists to spend more time with clients. 

Koko faced public outrage when it added the ability to co-write messages with ChatGPT and had to backtrack. When offered the option to have AI in the loop, most of its users preferred a strictly human experience and opted out. In the past six months, more than two million people have used Koko. 

“People in distress are not problems to fix,” Lewis said, “they are complex people to be seen, heard, and cared for. It is as simple as that.”

Source: National Geographic – https://www.nationalgeographic.com/science/article/ai-chatbots-treatment-mental-health
