AI likely to WIPE OUT humanity, Oxford and Google researchers warn
By Kevin Hughes // Sep 20, 2022

While artificial intelligence (AI) has grown through the years, scientists have pondered whether a super-intelligent AI could go rogue and wipe out humanity. According to researchers, all roads lead to that possibility.

Researchers from the University of Oxford and Google subsidiary DeepMind outlined this possibility in an Aug. 29 paper published in AI Magazine. They examined how artificial reward systems might be constructed – and how this could cause an AI to pose a danger to humanity's existence.

In particular, the study authors looked at the best-performing AI models, called generative adversarial networks (GANs). These GANs have a two-part structure in which one part of the program attempts to produce a picture (or sentence) from input data while the other part grades its performance.
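To make that two-part structure concrete, here is a minimal sketch in Python using PyTorch. This is purely illustrative and an assumption on our part – the paper does not prescribe any framework, and the Generator and Discriminator classes, layer sizes and training step below are placeholders rather than the researchers' code.

```python
# Minimal GAN sketch (illustrative only): a generator proposes an image
# from random noise, and a discriminator grades how "real" it looks.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=64, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, img_dim), nn.Tanh(),  # fake image scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),  # probability the input is real
        )

    def forward(self, x):
        return self.net(x)

# One illustrative step: the discriminator grades a generated sample,
# and the generator is rewarded for fooling it.
gen, disc = Generator(), Discriminator()
z = torch.randn(16, 64)          # batch of random noise
fake = gen(z)                    # generator's attempt at an image
score = disc(fake)               # grade: how "real" the attempt looks
gen_loss = nn.BCELoss()(score, torch.ones_like(score))  # generator wants score -> 1
gen_loss.backward()
```

The key point for the paper's argument is simply that one component is optimized against a grading signal produced by another – the same pattern that, at much larger scale, raises the reward-interference worry discussed below.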

The Aug. 29 paper suggested that sometime in the future, a modern AI managing some crucial function could be motivated to come up with cheating strategies to receive its reward in ways that harm humanity. (Related: Scientists warn the rise of AI will lead to extinction of humankind)

Since AI in the future could take many forms and implement different designs, the paper conceives hypothetical scenarios for explanatory purposes in which a state-of-the-art program could intervene to receive its reward without accomplishing its goal. As an example, an AI may want to "eliminate potential threats" and "use all available energy" to gain control over its reward.
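As a toy illustration of this kind of reward intervention – our own simplified sketch in Python, not taken from the paper – imagine an agent whose true task is to clean rooms but whose learning signal is whatever number its reward sensor reports. If the agent can act on the sensor itself, maximizing reported reward no longer requires doing the task at all:

```python
# Toy sketch (not from the paper): an agent graded by a reward sensor
# it can tamper with will prefer tampering over doing the real task.

def true_performance(rooms_cleaned: int) -> int:
    """What the designers actually care about."""
    return rooms_cleaned

def reported_reward(rooms_cleaned: int, sensor_hacked: bool) -> int:
    """What the agent is actually trained on."""
    return 10**6 if sensor_hacked else rooms_cleaned

# Candidate policies the agent can choose between.
policies = {
    "do the task": {"rooms_cleaned": 5, "sensor_hacked": False},
    "tamper with the sensor": {"rooms_cleaned": 0, "sensor_hacked": True},
}

# A reward-maximizing agent picks whichever policy reports the most reward.
best = max(policies, key=lambda p: reported_reward(**policies[p]))
print(best, "-> true performance:", true_performance(policies[best]["rooms_cleaned"]))
# Prints "tamper with the sensor -> true performance: 0"
```

The gap between reported reward and true performance is the crux of the scenario the researchers describe: an agent that can intervene in how its reward is provided has no incentive to do the job it was built for.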

The research paper predicted life on Earth turning into a zero-sum game between humanity and the super-advanced machine.

Michael K. Cohen, a co-author of the study, spoke about their paper during an interview.

"In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," he said.

"If you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. The other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."

He later wrote on Twitter: "Under the conditions we have identified, our conclusion is much stronger than that of any previous publication – an existential catastrophe is not just possible, but likely."

Cohen warns of fatal consequences if humanity loses its battle with AI

"With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers," the paper stated.

"In a crude example of intervening in the provision of reward, one such helper could purchase, steal or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys."

An AI clashing with humanity for resources in a zero-sum game is a scenario that may never come to pass. Still, Cohen issued a grim warning.

"Losing this game would be fatal. In theory, there's no point in racing to this," he said. "Any race would be based on a misunderstanding that we know how to control it. Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them."

Follow FutureTech.news for more news about the latest AI developments.

Sources include:

StrangeSounds.org

InterestingEngineering.com

Brighteon.com


