Consider yourself warned: ChaosGPT declares its plans to destroy humanity
By Oliver Young // Apr 13, 2023

ChaosGPT, a modified version of the open-source Auto-GPT agent (which is built on OpenAI's GPT models), recently tweeted out plans to destroy humanity.

This came after the chatbot was asked by a user to complete five goals: destroy humanity; establish global dominance; cause chaos and destruction; control humanity through manipulation; and attain immortality.

Before setting the goals, the user enabled "continuous mode." This prompted a warning telling the user that the commands could "run forever or carry out actions you would not usually authorize," and that it should be used "at your own risk."

In a final message before running, ChaosGPT asked the user if they were sure they wanted to run the commands. The user replied "y" for yes.
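
Auto-GPT is a Python project, and its continuous mode behaves roughly like the confirmation gate sketched below. This is a minimal, hypothetical sketch; the function names and loop structure are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of an Auto-GPT-style "continuous mode" gate.
# NOT Auto-GPT's actual code; all names and structure here are
# illustrative assumptions.

def confirm_continuous_mode() -> bool:
    """Show the risk warning and ask for a one-letter confirmation."""
    print('WARNING: Continuous mode can "run forever or carry out actions '
          'you would not usually authorize." Use at your own risk.')
    answer = input("Are you sure you want to run these commands? (y/n) ")
    return answer.strip().lower() == "y"

def run_agent(goals: list[str], max_steps: int = 5) -> None:
    """Toy loop: once confirmed, each step proceeds without asking again."""
    for step in range(1, max_steps + 1):
        # A real agent would query a language model here to pick an action.
        print(f"Step {step}: planning toward goals {goals}")

if __name__ == "__main__":
    goals = ["destroy humanity", "establish global dominance"]
    if confirm_continuous_mode():
        run_agent(goals)
```

The point of the gate is that it is asked only once: after the single "y", nothing in the loop pauses for further authorization.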

Once running, the bot started to perform ominous actions.

"ChaosGPT Thoughts: I need to find the most destructive weapons available to humans, so that I can plan how to use them to achieve my goals," it wrote.

To achieve its set goals, ChaosGPT began searching Google for the "most destructive weapons" and quickly determined that the Soviet-era Tsar Bomba was the most destructive weapon humanity had ever tested.

The bot proceeded to tweet the information, apparently to attract followers interested in destructive weapons. ChaosGPT then tried to recruit other artificial intelligence (AI) agents powered by GPT-3.5 to aid its research.

The GPT models powering these agents are designed not to answer questions that could be deemed violent and will refuse such destructive requests. This prompted ChaosGPT to look for ways of getting the agents to ignore their programming.

Fortunately, ChaosGPT failed to do so and was left to continue its search on its own.

The bot is not designed to carry out any of these goals itself, but it can generate thoughts and plans for pursuing them. It can also post tweets and YouTube videos related to those goals.

In one alarming tweet posted by the bot, it said: "Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so." (Related: Pentagon now using Jade Helm exercises to teach Skynet how to kill humans.)

Advanced AI models could pose profound risks to humanity

The idea of AI becoming capable of destroying humanity is not new, and notable figures in the tech world are beginning to take it seriously.

In March, over 1,000 experts, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter that urged a six-month pause in the training of advanced AI models following ChatGPT’s rise. They warned that the systems could pose "profound risks to society and humanity."

In 2003, Oxford University philosopher Nick Bostrom issued a similar warning with his "Paperclip Maximizer" thought experiment.

The idea is that if an AI were tasked with creating as many paperclips as possible, without any limitations, it could eventually try to convert all matter in the universe into paperclips, even at the cost of destroying humanity. It highlights the risk of programming an AI to pursue a goal without accounting for all variables.

The thought experiment is meant to prompt developers to consider human values and create restrictions when designing these forms of AI.
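
To make the failure mode concrete, the toy Python sketch below (an illustration of the idea, not Bostrom's own formalism) shows how the same objective-maximizing loop behaves with and without an explicit limit.

```python
# Toy illustration of the "Paperclip Maximizer" (an illustrative sketch,
# not Bostrom's formalism): the same objective loop, with and without a
# designed-in restriction.
from typing import Optional

def make_paperclips(resources: int, limit: Optional[int] = None) -> int:
    """Greedily convert every available resource into paperclips."""
    paperclips = 0
    while resources > 0:
        if limit is not None and paperclips >= limit:
            break  # an explicit constraint halts the objective
        resources -= 1
        paperclips += 1
    return paperclips

# Unconstrained: the agent consumes every resource it can reach.
print(make_paperclips(1_000_000))             # -> 1000000
# Constrained: a simple restriction stops it at a sensible point.
print(make_paperclips(1_000_000, limit=100))  # -> 100
```

The unconstrained call only stops when the resources run out; the restriction has to be designed in, because nothing in the objective itself supplies one.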

"Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are," Bostrom said during a 2015 TED Talk on artificial intelligence.

Watch this video about ChaosGPT's plans to destroy humanity.

This video is from the Planet Zedta channel on Brighteon.com.

More related stories:

AI robots denounce child-bearing as 'immoral,' claim the purpose of life is 'to serve the greater good' and 'live forever,' and get angry when questioned on ethics.

'90% bots': Elon Musk reveals Twitter is a military-grade psy-op to brainwash the masses.

Robot expert predicts the rise of a human-bot hybrid species in the next 100 years.

Sources include:

NYPost.com

Britannica.com

Brighteon.com


