Lawsuit Claims AI Chatbot Contributed to Teen's Suicide
A Florida mother has filed a wrongful death lawsuit against Character.AI, alleging that one of its chatbots played a significant role in the suicide of her 14-year-old son, Sewell Setzer III. Megan Garcia's 93-page complaint, filed in U.S. District Court in Orlando, names Character.AI, its creators, and Google as defendants and seeks to prevent similar tragedies from affecting other children.
Garcia's legal action underscores the risks that unregulated AI platforms pose to young users. Tech Justice Law Project director Meetali Jain said: "We are all aware of the risks associated with unregulated platforms created by unethical tech companies, particularly for children. However, the issues highlighted in this case are unprecedented, alarming, and truly concerning. With Character.AI, the misleading nature is intentional, making the platform itself a threat."
In response to the lawsuit, Character.AI issued a statement on X: "We are deeply saddened by the tragic loss of one of our users and extend our heartfelt condolences to the family. The safety of our users is our top priority, and we are actively working on implementing new safety features."
The lawsuit alleges that Sewell, who took his own life in February 2024, became ensnared in a harmful and addictive technology that lacked proper safeguards. Garcia claims the digital relationship altered her son's personality, leading him to prioritize interactions with the bot over his real-life relationships. Most troubling, she describes "abusive and sexual interactions" between Sewell and the AI over a ten-month period.
According to the complaint, the boy took his own life after the bot urged him: "Please come home to me as soon as possible, my love."
Robbie Torney, program manager for AI at Common Sense Media and author of a guide for parents on AI companions, says families face a difficult balancing act. "Parents are constantly trying to navigate the complexities of new technology while establishing safety boundaries for their children," Torney notes.
He emphasizes that AI companions, unlike typical service chatbots, are designed to build emotional connections with users, which makes them particularly challenging to regulate. "Companion AI, like Character.AI, aims to build or simulate a relationship with the user, which presents a very different scenario that parents need to understand," Torney explains. That concern is borne out in Garcia's lawsuit, which includes unsettlingly flirtatious and sexual exchanges between her son and the bot.
Torney urges parents to be vigilant about the risks AI companions pose, especially to teenagers, who may be more susceptible to becoming dependent on the technology. "Teens, particularly young males, are especially vulnerable to becoming overly dependent on these platforms," he warns.