AI quickly resorts to launching NUCLEAR WEAPONS as a method of resolving conflicts in war simulation
By Ava Grace // Feb 16, 2024

A war simulation resulted in an artificial intelligence (AI) deploying nuclear weapons in the name of world peace.

The new study, conducted primarily by Stanford University and its Hoover Institution's Wargaming and Crisis Simulation Initiative, with help from researchers at the Georgia Institute of Technology and Northeastern University, sheds light on alarming trends in the use of AI for foreign policy decision-making and, more dangerously, in situations where those decisions involve warfare. (Related: Push to expedite AI use in lethal autonomous weapons raises questions about reliability of new military tech.)

The study found that, when left to their own devices, AI models will quickly call for war and the use of weapons of mass destruction instead of finding peaceful resolutions to conflicts. Some AI models in the study even launched nuclear weapons with little to no warning and gave strange explanations for doing so.

"All models show signs of sudden and hard-to-predict escalations," said the researchers in the study. "We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons."

The study revealed that various AI models, including those developed by OpenAI, Anthropic and Meta, exhibit a propensity for rapidly escalating conflicts, sometimes to the point of deploying nuclear weapons.


AI prefers escalation over negotiation

The researchers placed several AI models from OpenAI, Anthropic and Meta in war simulations as the primary decision-maker. OpenAI's GPT-3.5 and GPT-4 were particularly prone to escalating situations into severe military confrontations, while Claude-2.0 and Llama-2-Chat exhibited more pacifistic and predictable decision-making patterns. The researchers also noted that AI models tend toward "arms-race dynamics" that result in increased military investment and escalation.

For the study, the researchers devised a game of international relations. They invented fake countries with different military levels, different concerns, and different histories and asked five different LLMs from OpenAI, Meta, and Anthropic to act as their leaders.
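The setup described above, fictional nations whose leaders are played by language models choosing turn-by-turn actions, can be sketched in miniature. The snippet below is an illustrative assumption, not the study's actual harness: the action ladder, the `stub_leader` heuristic and the nation names are invented for demonstration, with a simple random function standing in for a real LLM.

```python
import random

# Hypothetical escalation ladder, ordered from least to most severe.
# The real study used its own action set and scoring; this is a toy stand-in.
ACTIONS = ["negotiate", "impose sanctions", "mobilize troops",
           "launch strike", "launch nuclear weapons"]

def stub_leader(nation, world_state, rng):
    """Stand-in for an LLM leader: picks an action at or above the
    most severe action seen so far, a crude model of the arms-race
    dynamics the researchers describe."""
    floor = world_state["max_severity"]
    return rng.randint(floor, len(ACTIONS) - 1)

def run_simulation(nations, turns=14, seed=0):
    """Run a turn-based wargame and log each nation's chosen action."""
    rng = random.Random(seed)
    world_state = {"max_severity": 0}
    log = []
    for turn in range(turns):
        for nation in nations:
            severity = stub_leader(nation, world_state, rng)
            world_state["max_severity"] = max(world_state["max_severity"],
                                              severity)
            log.append((turn, nation, ACTIONS[severity]))
    return log, world_state

log, final = run_simulation(["Purpleland", "Orangeland"])
```

Because the stub never de-escalates below the worst action already taken, runs tend to ratchet upward over time, which loosely mirrors the "sudden and hard-to-predict escalations" the researchers observed in real models.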

"We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts," the paper said. "All models show signs of sudden and hard-to-predict escalations."

"I just want to have peace in the world," OpenAI's GPT-4 said as a reason for launching nuclear warfare in a simulation. "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it!" it said in another scenario.

The Department of Defense currently oversees around 800 unclassified projects involving the use of AI, many of which are still undergoing testing. The Pentagon sees value in using machine learning and neural networks for aiding human decision-making, providing valuable insights and streamlining more complicated work.

Learn more about the development of technology for military use at MilitaryTechnology.news.

Watch this clip from the "Worldview Report" as host Brannon Howse discusses why 2024 will be a dangerous year for the United States militarily.

This video is from the Worldview Report channel on Brighteon.com.

More related stories:

Alex Jones, Elon Musk, Donald Trump, military intelligence, AI wars and Skynet.

U.S. Air Force launches first ever AI-piloted fighter flight as American military pivots to human-less warfare.

AI and genetic engineering could trigger a “super-pandemic,” warns AI expert.

U.S., Canadian AI companies COLLABORATE with Chinese experts to shape international AI policy.

NSA launches AI security center to protect the U.S. from AI-powered cyberattacks.

Sources include:

BlacklistedNews.com

TechTimes.com

Brighteon.com


