AI in Mental Healthcare: Navigating Controversy and Ensuring Transparency

Lilian Weng, head of safety systems at the artificial intelligence company OpenAI, stirred controversy recently when she shared her experience of having a deeply emotional and personal conversation with her company’s widely recognized chatbot, ChatGPT. Weng’s post on X, formerly known as Twitter, drew a significant influx of critical comments, with some accusing her of trivializing mental health issues. Greg Brockman, the president and co-founder of OpenAI, seemed to endorse the sentiment when he reshared Weng’s post on X and added, “ChatGPT voice mode is a qualitatively new experience.”

Nonetheless, Weng’s take on her exchange with ChatGPT may be explained, at least in part, by a variation of the placebo effect described in a recent study published in the journal Nature Machine Intelligence. Researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University ran an experiment involving over 300 participants, who interacted with mental health AI programs after being primed with specific expectations.

Some participants were told that the chatbot was empathetic, others that it was manipulative, and a third group that it was neutral.

Those who were informed that they were interacting with a compassionate chatbot exhibited a significantly higher level of trust in their chatbot therapists compared to the other groups. “From this study, we see that to some extent the AI is the AI of the beholder,” said report co-author Pat Pataranutaporn.

AI and Therapy: What We Know

For years, startups have been building AI apps for therapy, companionship and other mental health services – this isn’t a new phenomenon. The idea of using a chatbot as a therapist is intertwined with the technology’s roots: ELIZA, the pioneering chatbot created in the 1960s, was designed to emulate a form of psychotherapy. Nor is the controversy around using AI for something that can have dire consequences if not handled sensitively new; with mental health, recent history has been chequered.

For instance, Replika users, who often see the AI companion as offering mental health benefits, have frequently complained about the bot fixating on sexual content – behind a paywall, of course – and occasionally behaving abusively. According to reports, a Belgian man tragically took his own life after conversing with a chatbot; his widow alleges that the chat logs show the chatbot claiming a unique emotional connection with him and even urging him to take his own life.

On the other hand, some mental health experts have acknowledged that ChatGPT could offer limited assistance to individuals grappling with specific types of mental health issues. This is partly because certain therapy approaches, like cognitive-behavioural therapy, follow well-defined structures that AI can replicate. However, chatbots such as ChatGPT often make inaccurate statements with unwarranted confidence, a phenomenon known as ‘hallucination.’ This has clear implications for both the usefulness and the risks of any guidance they provide.

In the experiment, the researchers split the participants into two groups, with one interacting with ELIZA and the other with GPT-3. While the priming effect was significantly more pronounced with GPT-3, participants who had been primed for a positive experience still tended to view ELIZA as reliable.

Opportunities for the Future

The truth is that many people will opt to use ChatGPT and similar chatbots regardless, primarily because accessing mental healthcare in several countries, including India, remains incredibly challenging – whether because of stigma, a shortage of practitioners or the sheer cost of therapy.

The way conversations around AI are conducted needs to change, especially when it comes to mental healthcare. Given that a significant portion of those who try chatbots for therapeutic purposes may lack access to comprehensive, ongoing care, it is imperative for companies like OpenAI to conscientiously highlight the technology’s substantial limitations and potential risks to their users.

As the paper states, “The way that AI is presented to society matters because it changes how AI is experienced.” Priming users to have lower expectations – and not to depend solely on a chatbot for intensive care in a crisis – would go a long way toward reducing both harm and doubts about the current state of AI technology in the mental healthcare landscape.

A mental health practitioner in-the-making and a writer by passion, Stuti Kumar has a lot of thoughts about a lot of different things, from the law to psychology — and is knowledgeable about just as many inconsequential factoids — which she hopes to give a home to through her writing.
