AI has become increasingly prominent in everyday technology, from students using ChatGPT for schoolwork to Coca-Cola using generative AI in its newest holiday advertisement.
AI offers some benefits, such as alleviating the burden of menial tasks and making certain systems more efficient. However, there are also many controversies surrounding its use, especially when it comes to its connections and similarities to humans.
<aside> ⚠️
Content Warning: This article discusses a case of suicide
</aside>
Theodore (Joaquin Phoenix) and his AI assistant “Samantha” (Scarlett Johansson) — Her (2013)
On May 20th, 2024, Scarlett Johansson publicly criticized OpenAI for using a voice resembling hers in one of its ChatGPT voice assistants, called “Sky”. OpenAI claims it did not intentionally imitate her, but CEO Sam Altman had initially asked her to be the official voice of ChatGPT’s voice assistant, an offer she declined, and many users have noted Sky’s similarity to Johansson’s voice.
Johansson voiced an AI system in the 2013 movie *Her*, in which the main character falls in love with her character. Altman has openly said that *Her* is his favorite movie, specifically expressing his appreciation for its human-like AI interactions. That is why he initially reached out to her.
While we might not know Johansson’s exact reasons for declining, we can infer why. *Her* is a cautionary tale about why humans should not rely on and form emotional connections with artificial intelligence. Altman seems to have missed the entire point of the movie, attempting to bring to life exactly what *Her* warned us against.
Sewell Setzer III with his mother, Megan L. Garcia
An even more devastating consequence of AI came recently, when a teenage boy named Sewell Setzer III died by suicide after forming a strong emotional attachment to an AI chatbot from Character.AI.
Sewell had been experiencing mental health problems and was diagnosed with anxiety and disruptive mood dysregulation disorder (DMDD). Although his parents had taken him to a therapist, he instead confided in his AI chatbot, repeatedly talking to it about escaping reality and ending his life. The chatbot would say things to discourage him, but it did not trigger any suicide prevention protocols that would have encouraged him to seek help or directed him to crisis resources.
Although Character.AI claims it did have such protocols in place, they were not consistently triggered by concerning messages. Sewell’s mother has filed a lawsuit against the company, accusing it of being responsible for her son’s death.
The Character.AI interface
Character.AI allows users to talk to many types of characters, whether fictional or imitations of celebrities, without official permission from the people they are based on. The company was founded by two former Google AI researchers, who created the platform as a place where its chatbots can “hear you, understand you, and remember you.”