Will police be able to override and take control of your self-driving car?
05/25/2016 / By usafeaturesmedia

(Cyberwar.news) Autonomous vehicles (AVs) are no longer mere creatures of science fiction. Tech companies are investing billions of dollars to make AVs a reality. Although self-driving cars are a ways down the road, their impending arrival provokes a host of moral dilemmas and ethical concerns.

Among these ethical considerations is the question of which public infrastructure systems and public safety officers would be allowed to override AVs. For example, controls could be put into place that keep lanes open for emergency vehicles, like ambulances and fire trucks.

Stricter controls could be put into place to address crime. High-speed car chases would come to a screeching halt. Drivers who refuse to pull over could be forced to the side of the road by an outside command. Police could even stop a terrorist attack from unfolding by overriding a vehicle with a suspected culprit at the wheel.

These functions would have to be installed during the manufacturing process. AVs would have to be built with the ability to respond to commands in real time, which would require a communication channel and requisite software to take over the car’s internal logic, reports Government Slaves.
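To make that concrete, here is a minimal, hypothetical sketch in Python of what such an override channel might look like on the vehicle's side: a controller that normally runs its own driving logic but switches into an external-control mode when a command arrives. The class and command names are illustrative assumptions, not part of any real AV platform.

# Hypothetical sketch of a real-time override channel on the vehicle side.
# Names (OverrideCommand, VehicleController) are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum, auto


class ControlMode(Enum):
    AUTONOMOUS = auto()         # the car's own driving logic is in charge
    EXTERNAL_OVERRIDE = auto()  # an outside authority has taken control


@dataclass
class OverrideCommand:
    issuer_id: str   # e.g. an agency or dispatcher identifier
    action: str      # "pull_over", "clear_lane", "resume", ...


class VehicleController:
    def __init__(self) -> None:
        self.mode = ControlMode.AUTONOMOUS

    def handle_command(self, cmd: OverrideCommand) -> None:
        # In a real system this would arrive over a secured communication
        # channel and pre-empt the car's internal planning logic.
        if cmd.action == "resume":
            self.mode = ControlMode.AUTONOMOUS
        else:
            self.mode = ControlMode.EXTERNAL_OVERRIDE
        print(f"{cmd.issuer_id}: {cmd.action} -> mode={self.mode.name}")


if __name__ == "__main__":
    car = VehicleController()
    car.handle_command(OverrideCommand("dispatch-77", "pull_over"))
    car.handle_command(OverrideCommand("dispatch-77", "resume"))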

A hiccup in the pickup

In addition, authentication and encryption standards would be needed to restrict which people would be allowed to override the system. Furthermore, rules would have to be created that limit the number of non-autonomous vehicles, since criminals could use them to ensure that their cars could not be overridden by an external command.
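As a rough illustration of that authentication step, the Python sketch below accepts an override only if the command carries a valid signature from a known issuer. The shared-secret HMAC used here is a stand-in assumption; an actual standard would presumably rely on per-agency certificates and far stronger key management.

# Sketch of command authentication: only signed commands from known issuers
# are accepted. The shared-secret HMAC is a simplifying assumption.
import hmac
import hashlib

AUTHORIZED_KEYS = {
    "agency-01": b"example-shared-secret",  # hypothetical issuer and key
}


def sign_command(issuer: str, payload: bytes, key: bytes) -> bytes:
    return hmac.new(key, issuer.encode() + b"|" + payload, hashlib.sha256).digest()


def verify_command(issuer: str, payload: bytes, signature: bytes) -> bool:
    key = AUTHORIZED_KEYS.get(issuer)
    if key is None:
        return False  # unknown issuer: reject the override outright
    expected = sign_command(issuer, payload, key)
    return hmac.compare_digest(expected, signature)


if __name__ == "__main__":
    payload = b"pull_over"
    sig = sign_command("agency-01", payload, AUTHORIZED_KEYS["agency-01"])
    print(verify_command("agency-01", payload, sig))        # True
    print(verify_command("agency-01", b"clear_lane", sig))  # False: tampered payload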


It wouldn’t be just law enforcement capable of overriding AVs, however. All new technologies have hiccups, and there is considerable debate about whether such override systems could ever be made hack-proof. Even software that has been in use for years is susceptible to bugs; security is an ongoing process of identifying those bugs before attackers seize on and exploit them. By taking control of the wheel and accelerator, terrorists could turn any AV on the road into their very own kamikaze weapon.

Even if hackers weren’t an issue, the sheer number of government officials capable of overriding AVs would raise serious privacy concerns. Thousands of government authorities would be able to take control of your car without warning, including law enforcement officers, military police and the National Guard. Other governments, such as China’s, may be eager to adopt this technology for their own authoritarian ends as well.

Then there is the issue of whether we should trust robots over humans in the first place. On the surface, it would seem AVs would help reduce car accident rates. People are easily distracted and grow tired behind the wheel, while self-driving cars can react to potential crashes that a human driver might never notice.

But how should a self-driving car respond to a particular circumstance? Should the vehicle be programmed to ensure the safety of the driver even at the expense of other people? Suppose a self-driving car realizes it can save a school bus full of children by plowing into an oncoming semi, killing the driver in the process. Or, it could protect the driver but allow the school bus full of children to be killed. How should it be programmed to respond?
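One way to see why this question is so contentious is to write the choice down as code. The sketch below is purely illustrative: the outcomes and weights are arbitrary assumptions, and choosing those weights is precisely the unresolved ethical question, not an engineering detail.

# Purely illustrative: the dilemma expressed as a weighted harm score.
# The weights are arbitrary assumptions, not anyone's actual policy.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    occupant_deaths: int
    other_deaths: int


def expected_harm(outcome: Outcome, occupant_weight: float, other_weight: float) -> float:
    # A utilitarian-style score; a "protect the driver at all costs" policy
    # would simply set occupant_weight far higher than other_weight.
    return occupant_weight * outcome.occupant_deaths + other_weight * outcome.other_deaths


if __name__ == "__main__":
    swerve = Outcome("plow into the semi, driver dies", occupant_deaths=1, other_deaths=0)
    stay = Outcome("protect the driver, bus is hit", occupant_deaths=0, other_deaths=20)
    # Equal weighting favors the bus; heavily weighting the occupant flips the choice.
    for w_occ, w_oth in [(1.0, 1.0), (30.0, 1.0)]:
        choice = min((swerve, stay), key=lambda o: expected_harm(o, w_occ, w_oth))
        print(f"weights occupant={w_occ}, others={w_oth}: choose '{choice.description}'")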

The question of jobs is another issue that AVs provoke. AVs promise to cleanse the streets of millions of cars by replacing private ownership with a kind of driverless Uber, but they would take away millions of jobs in doing so. The most notable jobs at risk include taxi cab drivers, bus drivers and truck drivers. According to the Bureau of Labor Statistics, there were approximately 1.6 million truck drivers in 2014, earning a yearly income of about $42,000. If self-driving cars are realized, a huge section of the economy will be made obsolete. An estimated 180,000 taxi drivers, 160,000 Uber drivers, 500,000 school bus drivers and 160,000 transit drivers would be without a job.
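For scale, simply adding up the figures cited above puts roughly 2.6 million driving jobs in the at-risk column. The short snippet below just tallies the article’s own numbers and makes no forecast of its own.

# Tally of the driving jobs listed above; figures come straight from the text.
at_risk_jobs = {
    "truck drivers (BLS, 2014)": 1_600_000,
    "taxi drivers": 180_000,
    "Uber drivers": 160_000,
    "school bus drivers": 500_000,
    "transit drivers": 160_000,
}

total = sum(at_risk_jobs.values())
print(f"Estimated driving jobs at risk: {total:,}")  # 2,600,000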

Artificial intelligence: friend or foe?

Perhaps the greatest threat posed by AVs is the threat of artificial intelligence (AI) itself. World-renowned physicist Stephen Hawking warned that AI is humanity’s “biggest existential threat” and could “spell the end for the human race.” In fact, Hawking, Elon Musk, Bill Gates and thousands of other distinguished scientists and engineers signed an open letter warning about the potential dangers of AI.

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” reads the open letter. “The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

In the event that robots do take over the Earth, what better way to do it than to send millions of self-driving cars and their occupants careening off the road and into the abyss? Although such a scenario might sound far-fetched, the potential threats posed by AI have even the world’s most eminent scientists concerned.

The letter goes on to reference an attached research priorities document that outlines, among other things, the impact AI will have on employment, ensuring the ethical behavior of autonomous weapons and machines, and maintaining human control over AI:

“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.”

Reporting by S. Johnson, NaturalNews.com.

Sources include:

GovtSlaves.info
DeZeen.com
TheGuardian.com
LiveScience.com
MarketWatch.com
FutureOfLife.org
JOC.com
Science.NaturalNews.com
