@GROK, HOW DO I BECOME HAPPY?

A brain has a conversation with a robot while seated in therapy chairs.

With all the talk surrounding AI, its accelerating development, and its applications outside of research labs, it was only a matter of time before it intersected with mental health. AI’s interactions with the field have ranged from foreboding, as in this story about ChatGPT encouraging a 16-year-old who went on to die by suicide, to promising, with research into AI counselors at Cedars-Sinai and a plethora of mental health-related apps available on the App Store.

HOW EXACTLY IS IT BEING USED?

According to the American Psychological Association, AI is currently being used in conjunction with mental health services in a variety of ways.

Of course, there are natural questions about patients’ consent to, and awareness of, AI use in their treatment, about algorithmic bias, and about the quality of treatment provided by non-humans.

So what are the risks that weigh most heavily on the medical community’s mind? Stanford University warns in an article aptly titled “Exploring the Dangers of AI in Mental Health Care” that AI chatbots are less effective than human counselors, and that they risk reinforcing harmful stigma and responding in dangerous ways. However, it’s essential to note that Stanford’s claim pertains to chatbots in general, rather than ones specifically tailored toward counseling. That fact doesn’t negate the harm being caused by run-of-the-mill chatbot algorithms, though, as demonstrated by the numerous ChatGPT suicide incidents. Though only a small percentage of conversational AI users may develop unhealthy dependencies, they’re still deserving of safe & effective help.

WHAT EXACTLY DOES AI THERAPY ENTAIL?

Text message conversation in which one participant (Mark) is stressed about studying for finals, and the other (Woebot, represented by a robot icon) offers supportive responses and asks about negative thoughts he’s having. Below the conversation is an illustration of a friendly yellow robot with a heart display on its chest, accompanied by text reading “The Woebot is ready for you.”

Most (if not all) counseling algorithms are built on large language models, or LLMs. According to Cloudflare, these are especially valuable because they’re more adept at recognizing & interpreting human language, thanks to the vast expanse of text data they’re trained on. As a result, they offer more natural responses to human users than older, simpler chatbot systems. Examples in counseling include OpenAI’s GPT-4o and the models behind apps like Woebot and Wysa.
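To make that concrete, here’s a minimal sketch of how a counseling-style app might wrap an LLM. It assumes the OpenAI Python SDK and the GPT-4o model mentioned above; the system prompt, the counselor_reply helper, and the safety wording are all illustrative inventions, not how Woebot, Wysa, or any real product actually works.

```python
# Hypothetical sketch of a counseling-style chatbot built on an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; prompt and safeguards are illustrative.
from openai import OpenAI

client = OpenAI()

# A system prompt steers the general-purpose model toward supportive,
# counseling-style responses -- the kind of tailoring that separates a
# dedicated mental health app from a run-of-the-mill chatbot.
SYSTEM_PROMPT = (
    "You are a supportive mental health companion. Respond with empathy, "
    "ask open-ended questions, and never give medical diagnoses. If the "
    "user mentions self-harm, encourage them to contact a crisis line."
)

def counselor_reply(history: list[dict], user_message: str) -> str:
    """Send the running conversation plus the new message to the LLM."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})

    response = client.chat.completions.create(
        model="gpt-4o",   # the LLM named in this article
        messages=messages,
        temperature=0.7,  # some warmth and variety, but not erratic
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    history: list[dict] = []
    print(counselor_reply(history, "I'm stressed about studying for finals."))
```

Real products layer far more on top of a call like this (crisis detection, clinician escalation, content filtering), which is exactly where the safety questions above come in.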

Four smartphone screens displaying the Wysa mental health app interface, including a therapist dashboard with an upcoming session, a welcoming safe space screen, a self-care menu with various exercise categories like "Manage Anger" and "Overcome Grief," and a chat interface with Wysa, an AI penguin chatbot for discussing emotions and thoughts.