Indian scientist Shekhar Mande warns of AI’s dangers – including viral outbreaks, nuclear war and HUMAN EXTINCTION
By Kevin Hughes // Aug 23, 2023

A scientist has warned that getting too comfortable with artificial intelligence (AI) could pose a danger to humanity.

Indian scientist Shekhar Mande issued this warning during a lecture, saying that humanity should be ready for AI to take over and create viral outbreaks, nuclear war and even human extinction. According to Mande – the former director general of India's Council of Scientific and Industrial Research – AI will be the principal cause of human extinction.

Experts in the field have predicted that AI will be the leading cause of humanity's extinction, followed by nuclear war and viral outbreaks. Mande's discussion of these three threats invited reflection on the fine balance between progress, security and the preservation of humanity.

The Indian scientist is not the first person to think about the problems mankind faces with AI. While humans have made progress in science and technology by creating computers that think like people, some troubling thoughts are popping up as well. (Related: AI likely to WIPE OUT humanity, Oxford and Google researchers warn.)

This pivot toward AI is not in the best interest of humanity. Yuval Noah Harari, a close adviser to Klaus Schwab of the globalist World Economic Forum, stated that AI is going to perform the hard task of controlling the slave class and making them obsolete.

Harari's argument centers on the ruling class employing this technology against the slave class. Once a critical mass of that population fully realizes its situation, the machines will do the dirty work for the sociopaths at the top.


Top U.S. official recognizes the risks of AI

Meanwhile, a top American official for cybersecurity earlier warned that humanity could be at risk of an "extinction event" if tech companies fail to self-regulate and work with the government to rein in the power of AI. The warning came from Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA) under the U.S. Department of Homeland Security (DHS).

Easterly's remarks followed the release of a May 2023 statement involving hundreds of tech leaders and public figures who compared the existential threat of AI to a pandemic or nuclear war. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the one-sentence statement issued by the San Francisco-based nonprofit Center for AI Safety (CAIS).

More than 300 individuals affixed their signatures to the statement, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. Other public figures outside the tech industry also signed the statement, including neuroscience author Sam Harris and musician Grimes.

In response to questions about the CAIS statement, Easterly asked the signatories to self-regulate and work with the government.

"I would ask these 350 people and the makers of AI – while we're trying to put a regulatory framework in place – think about self-regulation, think about what you can do to slow this down, so we don't cause an extinction event for humanity," Easterly said.

"If you actually think that these capabilities can lead to [the] extinction of humanity, well, let's come together and do something about it."

For his part, Altman told senators during a hearing that he backs government regulation as a means of preventing the harmful effects of AI. Such regulatory steps include licensing or safety requirements for the operation of AI models.

"If this technology goes wrong, it can go quite wrong," he said. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models."

Follow for more news about AI.

Watch Yuval Noah Harari explain how AI can destroy humanity below.


This video is from the Thrivetime Show channel on

More related stories:

AI takeover is INEVITABLE: Experts warn artificial intelligence will become powerful enough to control human minds, behaviors.

EXTREME SCENARIOS: Artificial intelligence could revolutionize tech sector forever – or wipe out the human race.

Researchers: AI decisions could cause "nuclear-level" CATASTROPHE.

Big Tech, globalist elites join forces in secret meeting to talk about artificial intelligence.

Elon Musk announces creation of new AI company after spending YEARS criticizing rapid AI development.
