Machine learning may already be expanding out of control, beyond the ability of humans to stop it, warn computer scientists
By Rhonda Johansson // Aug 09, 2018

When Alice fell down the rabbit hole, she got everything she asked for and more: cute little rabbits with gloves on their hands, caterpillars that could talk, and mean, nasty flowers that thought she was a weed. It’s a comical story, but one that could become reality in the future. The expression “going down the rabbit hole” is sometimes used to describe just how far we are willing to push the limits. On a basic literary level, it speaks of the beginning of a fanciful adventure we cannot understand -- but one that will change our lives forever. It is with this in mind that computer scientists warn of a potential danger we may not even be aware of: our tinkering with artificial intelligence could lead to an external brain or A.I. system that we will no longer have the ability to control.


A recent editorial published on TechnologyReview.com -- MIT’s resource for exploring new technologies -- warned of the pace at which we are advancing technology. Recent algorithms are being designed at such a remarkable speed that even their creators are astounded.

“This could be a problem,” writes Will Knight, the author of the report. Knight describes a 2016 milestone: a self-driving car that was quietly released onto the roads of New Jersey. Chip maker Nvidia differentiated its model from those of companies such as Google, Tesla, and General Motors by having the car rely entirely on an algorithm that taught itself how to drive after “watching” a human do it. Nvidia’s car successfully learned the skill, much to the delight of the company’s scientists.

Nevertheless, Nvidia’s programmers were unsettled by how much (and how fast) the algorithm learned the skill. Clearly, the system was able to gather information and translate it into tangible results, yet exactly how it did this was not known. The system was designed so that information from the vehicle’s sensors was fed into a huge network of artificial neurons, which would then process the data and deliver an appropriate command to the steering wheel, brakes, or other systems. These responses match those of a human driver. But what would happen if the car did something totally unexpected -- say, smashed into a tree or ran a red light? Complex behaviors and actions like these could potentially occur, and even the scientists who built the system struggle to explain why.
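To make the idea concrete, the sketch below shows what such an end-to-end “learn by watching a human” setup might look like in PyTorch. It is a minimal illustration under assumed details, not Nvidia’s actual system: the EndToEndDriver name, the layer sizes, the image dimensions, and the single steering-angle output are all invented for the example.

```python
# Hypothetical sketch only -- not Nvidia's actual code. It illustrates an
# end-to-end network that maps a camera frame to a steering command and is
# trained to imitate a recorded human driver.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract road features from the camera image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
        )
        # Fully connected layers turn those features into a single steering
        # angle; throttle and brake heads could be added the same way.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame))

model = EndToEndDriver()

# One fake camera frame (batch, RGB channels, height, width) and the steering
# angle the human driver actually used at that moment.
frame = torch.randn(1, 3, 66, 200)
human_angle = torch.tensor([[0.1]])

# "Watching a human drive" amounts to minimizing the gap between the
# network's predicted angle and the human's recorded angle.
loss = F.mse_loss(model(frame), human_angle)
loss.backward()  # gradients for one imitation-learning step
print(loss.item())
```

The unease Knight describes follows from a setup like this: the learned weights inside such a network do not translate into human-readable rules, so the “why” behind any individual steering decision is hard to recover.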

AI is learning...and it’s learning pretty darn fast

Nvidia’s underlying A.I. technology is based on the concept of “deep learning,” which, until recently, scientists were not sure could be applied to robots. The idea of an external or artificial “thinking” brain is nothing new; it has colored our imaginations since the 1950s. The lack of suitable hardware and the sheer manual labor needed to input all the data, however, kept the dream from coming to fruition. Nevertheless, advancements in technology have produced several breakthroughs, including the Nvidia self-driving car. There are already aspirations to develop self-thinking robots that can write news, detect schizophrenia in patients, and approve loans, among other things.

Is it exciting? Yes, of course it is; but scientists are worried about the unspoken implications of this growth. The MIT editorial says that “we [need to] find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur -- and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.”

In an effort to keep these systems in check, some of the world’s largest technology firms have banded together to create an “A.I. ethics board.” As reported on DailyMail.co.uk, the companies involved are Amazon, DeepMind, Google, Facebook, IBM, and Microsoft. The coalition calls itself the Partnership on Artificial Intelligence to Benefit People and Society and operates under eight ethical tenets. Its objective is to ensure that advancements in the technology empower as many people as possible, and that each member remains actively engaged in the development of A.I. and accountable to its broad range of stakeholders.

Just how far down the rabbit hole are we, as a society, planning to go? You can learn more when you visit Robotics.news.

Sources include:

TechnologyReview.com 1

TechnologyReview.com 2

DailyMail.co.uk

 


