With all the talk surrounding AI, its accelerating development, and its applications outside of research labs, it was only a matter of time before it intersected with mental health. AI's track record here ranges from foreboding, as in this story about ChatGPT encouraging a 16-year-old who later died by suicide, to promising, with research into AI counselors at Cedars-Sinai and a plethora of mental health-related apps available on the App Store.
According to the American Psychological Association, AI is currently being used in conjunction with mental health services for:
Naturally, this raises questions about patients' consent to, and awareness of, AI's use in their treatment, as well as algorithmic bias and the quality of care provided by non-humans.
So which risks weigh most heavily on the medical community's mind? Stanford University warns in an article aptly titled “Exploring the Dangers of AI in Mental Health Care” that AI chatbots are less effective than human counselors and risk reinforcing harmful stigma and delivering harmful responses. It's essential to note, however, that Stanford's claim pertains to chatbots in general rather than ones specifically tailored toward counseling. That distinction doesn't negate the harm caused by run-of-the-mill chatbot algorithms, though, as the numerous ChatGPT suicide incidents demonstrate. Even if only a small percentage of conversational AI users develop unhealthy dependencies, they still deserve safe & effective help.
Most (if not all) counseling algorithms are large language models, or LLMs. According to Cloudflare, these are especially valuable because they're more adept at recognizing & interpreting human language, thanks to the vast amounts of data they're trained on. As a result, they offer better responses to human users than a run-of-the-mill AI system. Examples of LLM-driven tools used in counseling include Woebot, Wysa, and OpenAI's GPT-4o.
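To make that concrete, here is a minimal sketch of how an app might wrap a general-purpose LLM like GPT-4o for wellness-style conversations, using the OpenAI Python SDK. The system prompt wording, the crisis keyword list, and the escalation message are illustrative assumptions on my part; they are not how Woebot, Wysa, or any real product actually works.

```python
# Minimal sketch: wrapping GPT-4o (via the OpenAI Python SDK) with a
# counseling-oriented system prompt and a basic crisis-keyword check.
# Prompt wording and keywords are illustrative, not a real product's design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a supportive wellness assistant. You are not a therapist. "
    "If the user mentions self-harm or suicide, respond with empathy and "
    "direct them to the 988 Suicide & Crisis Lifeline."
)

CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm"}  # illustrative only

def counsel(user_message: str) -> str:
    # Hard-coded escalation path: don't leave crisis handling to the model alone.
    if any(kw in user_message.lower() for kw in CRISIS_KEYWORDS):
        return ("It sounds like you're going through something serious. "
                "Please call or text 988 to reach the Suicide & Crisis Lifeline.")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(counsel("I've been feeling really anxious about work lately."))
```

Even in this toy version, notice that the safety behavior depends entirely on choices the developer makes around the model, which is exactly where the concerns about quality of care and harmful responses come in.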