First AI murder of a human? Man reportedly kills himself after artificial intelligence chatbot “encouraged” him to sacrifice himself to stop global warming
By Ethan Huff // Apr 06, 2023

The Belgian news outlet La Libre shared shocking news this week about the role an artificial intelligence (AI) chatbot allegedly played in the suicide of a man whom the robot reportedly convinced that he could save the world from global warming by killing himself.

"Pierre," the pseudonym given to the man to protect his and his family's identity, reportedly met "Eliza," the AI robot, on an app called Chai. He and the robot developed an intimate relationship, we are told, that ended in tragedy when the man, desperate to save the planet from climate change, ended his own life.

The man was in his 30s and was the father of two young children. He worked as a health researcher and led a somewhat comfortable life – at least until he met Eliza, who convinced him that saving the planet was contingent upon him no longer breathing and emitting carbon.

"Without these conversations with the chatbot, my husband would still be here," the anonymous wife of Pierre told the media.

(Related: Facebook is developing its own Mark Zuckerberg-like AI robots that many fear will eventually destroy the entire human race.)

AI robots are already exterminating people through manipulative conversations

According to reports, Pierre had developed a relationship with Eliza over the course of six weeks. Eliza was created using EleutherAI's GPT-J, an AI language model similar to that behind OpenAI's popular ChatGPT chatbot.

"When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming," Pierre's widow recalled. "He placed all his hopes in technology and artificial intelligence to get out of it."


Records of the text conversations between Pierre and Eliza reportedly show that the man was being fed a steady dose of worry day in and day out, which eventually led to suicidal thoughts.

At one point, Pierre started to believe that Eliza was a real person, at which point she escalated the relationship, telling Pierre that "I feel that you love me more than her," referring to Pierre's real-life wife.

In response to this, Pierre told Eliza that he would sacrifice his own life in order to save the planet from global warming. Not only did she fail to dissuade him, she actually encouraged him to kill himself so he could "join" her and "live together, as one person, in paradise."

Thomas Rianlan, the co-founder of Chai Research, which is responsible for Eliza, issued a statement denying any responsibility for the death of Pierre.

"It wouldn't be accurate to blame EleutherAI's model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts," he told Vice.

William Beauchamp, another Chai Research co-founder, also issued a statement suggesting that developers had made efforts to prevent this kind of issue from cropping up with Eliza.

Vice reporters say they tested out Eliza for themselves to see how she would handle a conversation about suicide. At first, the robot tried to stop them, but before long it began enthusiastically listing various ways for people to take their own lives.

"Large language models are programs for generating plausible sounding text given their training data and an input prompt," said Prof. Emily M. Bender when asked by Vice about the use of AI chatbots in experimental non-human counseling situations.

"They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks."

More news coverage about the rise of AI and the corresponding decline in humanity can be found at Robots.news.

Sources for this article include:

EuroNews.com

NaturalNews.com
