This is according to entrepreneur Ian Hogarth, who has spent the past few years investing heavily in the burgeoning AI sector. In a recent opinion piece, he warned that AI systems are creeping ever closer to a threshold known as artificial general intelligence (AGI).
AGI is the point at which a machine can understand or learn anything that a human can. While current AI systems are not there yet, reaching AGI is considered the primary goal of the rapidly growing industry. But achieving that goal carries dangerously high stakes.
"Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press," wrote Hogarth in his opinion piece. "The important question has always been how far away in the future this development might be."
Hogarth further noted that estimates for reaching AGI are wide-ranging, from a decade to half a century or even more. But what is certain is that the leading AI companies of the world have made achieving AGI their goal without taking into account the risks associated with bringing such an untested technology to an unprepared world.
Hogarth noted in his op-ed that AI researchers are not focusing enough on the potential dangers of AGI, or on properly warning the general populace about them. He also wrote about how he confronted one such researcher who did not seem to understand what could go wrong with the rapidly increasing intelligence of AI.
"'If you think we could be close to something potentially dangerous,' I said to the researcher, 'shouldn't you warn people about what's happening?'" Hogarth recounted. "He was clearly grappling with the responsibility he faced, but like many in the field, seemed pulled along by the rapidity of progress."
Hogarth noted that he is not blameless in the development of AI. He admitted that he is part of this community himself, having invested in over 50 startups that deal with AI and machine learning. He has gone so far as to start his own venture capital firm and launch an annual "State of AI" report.
"A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI," said Hogarth. "A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it."
Hogarth noted that the nature of the technology makes it very difficult to accurately predict when AGI will be achieved. But he is certain that when it is, the consequences for humanity will be profound.
"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," he wrote.
He added that the race to achieve AGI will likely continue, and it will take a major misuse event or catastrophe to get people to properly focus on the consequences of reckless AI development.
"The contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight," he added. "They are running toward a finish line without an understanding of what lies on the other side."