More than 350 executives, researchers and engineers from leading AI laboratories have signed an open letter warning the world of the possible catastrophic effects of this technology. Signatories included Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Anthropic's Dario Amodei. Renowned AI researchers Geoffrey Hinton and Yoshua Bengio, who both received the Turing Award for their groundbreaking work on neural networks, have also affixed their names to the letter.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," said the open letter released by the nonprofit Center for AI Safety (CAIS). (Related: Scientists warn the rise of AI will lead to extinction of humankind.)
According to CAIS Executive Director Dan Hendrycks, the open letter served as a "coming-out" for some business leaders who had previously voiced concerns about the dangers of AI – albeit in secret.
"There's a very common misconception, even in the AI community, that there are only a handful of doomers," he said. "But in fact, many people privately would express concerns about these things."
Moreover, the letter followed a proposal by OpenAI executives calling for the "prudent administration" of powerful AI systems. They called for cooperation among the top AI developers and increased technical research into large language models. Aside from this, they also urged the establishment of an international agency for AI safety in the same vein as the International Atomic Energy Agency – which oversees the safe, peaceful use of nuclear technology.
According to Breitbart, the open letter "comes at a time when worries about the possible negative effects of artificial intelligence are on the rise. Recent developments in 'large language models' – the kind of AI system used by ChatGPT and other chatbots – have stoked concerns that AI may soon be used at scale to disseminate false information and propaganda or that it may eliminate millions of white-collar jobs."
Breitbart reported that Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris to discuss AI regulation. Altman later testified before the Senate, emphasizing the risks associated with advanced AI.
"If this technology goes wrong, it can go quite wrong," the OpenAI CEO told senators. Altman then urged the government to step in as the risks of advanced AI systems were serious enough to warrant intervention.
Hinton, one of the letter's signatories, has likewise expressed concerns about the technology. He left Google after working there for more than a decade, and has since voiced regrets over his life's work.
The 75-year-old researcher warned that in the near future, AI will flood the internet with fake photos, videos and text to the point that an average person would "not be able to know what is true anymore." He also emphasized that chatbots like ChatGPT, which can handle rote tasks, could eliminate the need for human workers.
According to Hinton, AI systems become increasingly dangerous as companies improve them. He pointed out: "Look at how it was five years ago and how it is now. Take the difference and propagate it forward. That's scary."
"It is hard to see how you can prevent the bad actors from using it for bad things," he added. "I don't think they should scale this up more until they have understood whether they can control it."
Follow Robots.news for more news about the dangers of AI.
Watch Israeli academic Yuval Noah Harari discussing AI and the future of humanity below.
This video is from the alltheworldsastage channel on Brighteon.com.
Artificial Intelligence 'more dangerous than nukes', warns technology pioneer Elon Musk.
AI researchers back Elon Musk's fears of technology causing human extinction.
"Godfather of AI" quits Google, warns of risks associated with the technology he helped develop.
AI is currently the greatest threat to humanity, warns investigative reporter Millie Weaver.
Tesla CEO: Google robots threaten to annihilate human race.
Sources include: