Cyberattacks with “truly autonomous weaponized” A.I. will be almost impossible to stop
08/04/2018 / By Rhonda Johansson

Experts warn that an AI-enhanced cyberattack is not only probable, it is imminent. Security professionals say that truly autonomous weaponized artificial intelligence is being developed and slowly deployed against major government and medical sectors. This is not a radical suggestion. Only a few months ago, hackers were able to disrupt many institutions, including three U.K. hospitals and Ukraine's power grid. The situation caused so much alarm that our own government had to audit its systems to identify potential weaknesses.

The dangerous part about all of this is that these technologies are stealthy. Modern attack code is sophisticated enough to infiltrate an IT infrastructure without notice and stay there for months, perhaps even years.

Their development is only expected to accelerate as government branches race to create the "ultimate" AI machine. Cyber attackers are designing programs that learn. It is not much of a stretch of the imagination to expect an algorithmic presence that could, like a virus, blend into its environment all the while gathering data. Its initial purpose could be a targeted one; say, stealing the blueprint of a new, advanced machine. But it could also be made for a more "benign" purpose such as intelligence gathering: a bug could simply be placed inside a system for the mere purpose of gaining inside knowledge of the network and its users.

Experts warn that these AI programs could build analytic models that detect incoming attacks or security screenings. The code would then adapt itself to evade those antivirus protocols.


More harrowing still is the prospect of impersonation. So far we've talked about AI bugs acting like secret agents, creeping insidiously into our lives. However, we also need to talk about the possibility of AI programs pretending to be one of us. We already have our virtual assistants, and Natural News has repeatedly reported on AI robots slowly taking over "basic" jobs in the food and retail industries. A maliciously designed AI could very well be refined to convincingly impersonate another person.

Think about all the e-mails you write. A long-term AI bug would learn the specific nuances of your writing style with each person you talk to, including the key themes in each conversation. For example, you write your partner twice a day and end each message with "X." Conversely, your e-mails to your work friends always come on Saturdays with the closing phrase, "Catch y'all later." Each of these "tics," as harmless as they seem, feeds the AI's intelligence.
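To see how little data such "style fingerprinting" actually requires, here is a minimal sketch in Python. It is purely illustrative (the function names and data layout are assumptions, not taken from any known malware): it builds a per-recipient profile of sign-off phrases and sending days from scraped message metadata, then predicts the sign-off a forged e-mail should use.

```python
from collections import Counter, defaultdict

def build_profiles(emails):
    """Build {recipient: {'days': Counter, 'sign_offs': Counter}}
    from (recipient, weekday, sign_off) tuples scraped from a mailbox."""
    profiles = defaultdict(lambda: {"days": Counter(), "sign_offs": Counter()})
    for recipient, weekday, sign_off in emails:
        profiles[recipient]["days"][weekday] += 1
        profiles[recipient]["sign_offs"][sign_off] += 1
    return profiles

def most_likely_sign_off(profiles, recipient):
    """Return the sign-off this sender most often uses with a recipient."""
    sign_offs = profiles[recipient]["sign_offs"]
    return sign_offs.most_common(1)[0][0] if sign_offs else ""

# Toy data mirroring the article's two habits: "X" to a partner,
# "Catch y'all later" to work friends on Saturdays.
emails = [
    ("partner", "Mon", "X"), ("partner", "Mon", "X"),
    ("work", "Sat", "Catch y'all later"),
    ("work", "Sat", "Catch y'all later"),
]
profiles = build_profiles(emails)
print(most_likely_sign_off(profiles, "partner"))  # X
print(most_likely_sign_off(profiles, "work"))     # Catch y'all later
```

Even this toy counter captures enough of a habit to pick the right closing phrase per recipient; a real learning system would add vocabulary, phrasing, and timing features on top.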

Jeremy Straub, Associate Director of the North Dakota State University (NDSU) Institute for Cyber Security Education and Research, says that another hazard of AI-driven cyberattacks is that the attacking programs do not need to sleep or eat, and face no other human limitations. They can process large amounts of data quickly, making attacks easier to mount.

These attacks can be carried out at such speed that machine-versus-man battles would likely last only a few seconds. Experts believe that a battle royale, in which machines fight other machines, is a far more likely future.

Not everyone is standing idly by. Elon Musk, CEO of SpaceX and Tesla, has continually warned of the impending probability of an AI-controlled attack. He has stated that "a [pre-emptive] strike [from an AI source] is [the] most probable path to victory."

