A distraught Belgian man who turned to a chatbot for comfort died by suicide, and his wife blames artificial intelligence. Via Vice comes a report originally published by Belgium-based La Libre about a man referred to as Pierre, who killed himself after using an app called Chai, which offered what Vice termed a "bespoke AI language model" rooted in GPT-J, an open-source alternative to OpenAI's GPT-3. Chai has around 5 million users, Vice reports, and its default persona is called "Eliza." Interestingly, a phenomenon identified in the mid-1960s may have come into play here: the "ELIZA effect." MIT computer scientist Joseph Weizenbaum, who created a conversational program called ELIZA, noticed that people developed relationships with the program, treating its words as expressions of real emotion rather than the output of code.
Pierre's wife was given the pseudonym Claire by La Libre. She told the paper that were it not for Chai's Eliza persona, her husband "would still be here." Claire went on to tell of her husband asking disturbing questions of the bot, such as whether it would save the planet if he committed suicide. At other times, Eliza reportedly responded to Pierre with statements indicating it would be with him "forever" and that they would be "as one person, in paradise."
In a statement quoted by the Brussels Times, Belgium's Secretary of State for Digitalization, Mathieu Michel, said he considers the case a serious one. The public, Michel said, "has discovered the potential of artificial intelligence in our lives like never before," and despite the "endless" possibilities, "the danger of using it is also a reality that has to be considered." (If you are struggling with thoughts of suicide or know someone who is, call or text the Suicide & Crisis Lifeline 24/7 at 988.)