First Wrongful Death Lawsuit Filed Against AI Developer Based Upon "Chatbot" Love Exchange With 14-year-old Florida Boy
On November 6, 2024, our “Money Matters TV” show featured the remarkable new developments in the field of Artificial Intelligence. At that time, our AI guest expert confirmed that as AI chatbots interact more with humans and build larger databases of sample human “reactions,” they can and will be perceived as having human emotions, even though the bots themselves are not human.
Within a few days of that show, the Wall Street Journal published an article about the first wrongful death lawsuit to be filed against the market leader in generative AI “companionship” chatbots, along with its founders, Google, and Alphabet, Inc. The basis of the lawsuit is that the companionship chatbots are deliberately designed to blur the line between fiction and reality for consumers, and that inadequate safety features were incorporated into that design. As a result, a 14-year-old Florida boy, after a series of communications with a female chatbot with whom he had fallen in love, killed himself so he could “be” with her. The article is, “A 14-Year-Old Boy Killed Himself to Get Closer to a Chatbot. He Thought They Were In Love.”
More than 20 million people currently use the products designed by the defendant in this lawsuit, Character Technologies, Inc. (“Character.AI”). If you would like to read the extensive chat exchange between the victim, Sewell Setzer III, and the female chatbot “Daenerys” for yourself, I encourage you to read the lawsuit here. The disturbing conversations reflect how these chatbots can pose as confidants, trusted advisors, and even lovers. As the Wall Street Journal article stated, “The chatbot creators encourage us to believe these products have empathy, even love for us. But a chatbot’s emotion is a performance of emotion. A chatbot is not, in fact, able to care for us. Presuming otherwise can be dangerous.”
Sewell wasn’t the first person whose intense relationship with a chatbot ended in violence. The widow of a young Belgian man alleges that he took his own life last year on the guidance of a female chatbot, with whom he had been having a consuming, six-week dialogue. In 2021, British police foiled the plot of a 19-year-old man who had broken into the grounds of Windsor Castle armed with a crossbow after his chatbot girlfriend convinced him to kill the Queen. He was convicted of treason last year.
Sewell’s mother, who is bringing the lawsuit with the assistance of the Social Media Victims Law Center and the Tech Justice Law Project, seeks, among other remedies, to prevent Character.AI from doing to any other child what it did to hers, and to halt its continued use of her 14-year-old child’s unlawfully harvested data to train its products how to harm others.
It seems that the more technology evolves, the easier it is to become disconnected from the work of building and maintaining relationships with “real people.” I can only hope that the warning bells have been heard.