

AI could exceed human intelligence in 3 years, top scientist warns
By Cassie B. // Mar 15, 2024

If the many shortcomings of today's artificial intelligence tools such as ChatGPT have left you confident that AI surpassing human intelligence is still a long way off, think again. A top scientist has warned that this nightmare scenario could become reality decades sooner than earlier predictions suggested, and it may even be just a few years away.

The warning comes from PhD mathematician and futurist Ben Goertzel, who is known for popularizing the term “artificial general intelligence” (AGI). At a summit this month, he said: “It seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years.”

He added that once we have achieved human-level artificial general intelligence, it will only be a few years before a radically superhuman version emerges. Although he conceded that his prediction may not be correct, he said the only thing stopping the rise of an AI vastly superior in intelligence to its human creators would be the AI's “own conservatism” compelling it to proceed with caution. He believes an exponential escalation of artificial intelligence technology is inevitable.

Goertzel has been investigating artificial superintelligence, a term for an AI that can match all of the computing and brain power of human civilization. He pointed out that a predictive model developed by Google computer scientist and futurist Ray Kurzweil suggests this type of intelligence will be possible by 2029.

The notion is further supported by the huge advancements made in large language models over the last couple of years. The technology has evolved so quickly that much of the world is now all too aware of its potential.


The futurist's latest warning is just one of several he has made about this technology in recent years. Last May, he cautioned that artificial intelligence could replace 80 percent of human jobs within the next few years, saying that any job involving paperwork could be automatable.

AI industry leaders warn risk is on par with pandemics and nuclear war

Last year, a group of industry leaders warned that AI technology could one day pose an existential threat to humanity and should be regarded as just as dangerous as nuclear war and deadly pandemics.

The nonprofit Center for AI Safety released a one-sentence statement reading: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

It was signed by more than 350 experts in the field, such as engineers, researchers and executives involved in artificial intelligence. Among those who signed it were the chief executive of Google's DeepMind, Demis Hassabis, Anthropic's Chief Executive Dario Amodei, and OpenAI Chief Executive Sam Altman.

It was also signed by two of the researchers who are considered the “godfathers” of modern AI, Yoshua Bengio and Geoffrey Hinton.

The world should be very worried that, given the technology's potential harms, the people pushing governments to regulate it are the same ones who are deeply involved in the industry and stand to profit most from it.

Center for AI Safety Executive Director Dan Hendrycks said that many insiders are scared of where this is headed, telling the New York Times: “There’s a very common misconception, even in the AI community, that there are only a handful of doomers. But, in fact, many people privately would express concerns about these things.”

It may not be long before the types of biased answers and hallucinations that tools like Google Gemini have been making headlines for lately are the least of our worries.

AI pioneer Eliezer Yudkowsky also believes that an apocalypse driven by machines is a few years away. He said: “If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years.”

Sources for this article include:

DailyMail.co.uk

NYTimes.com

TheGuardian.com


