Researchers say we are still some distance from a completely driverless future. Even now, current technology is showing flaws that could place passengers in life-threatening situations.
Researchers from the University of Washington observed that even basic changes to road signs can confound autonomous vehicles. Simple hacks, such as placing stickers or posters on road signs, can trick smart cars into braking suddenly in the middle of a road or highway, or into ignoring stop signs. The weakness lies in the cars' image-recognition algorithms, which still rely on relatively simple features to identify signs. That fragility could tempt hackers to tamper with a sign and confuse the car's camera. The researchers warn that attackers do not even need sophisticated methods; simply adding graffiti to a sign can be enough to garble the recognition system and prompt the car to perform an unnecessary action.
In one example, the researchers printed a right-turn sign that was almost identical to the real thing, except for a few subtle changes in color. That small distortion was enough to confuse the car: the vehicle read the image as a stop sign and braked immediately. In another experiment, the researchers placed stickers reading "love/hate" on a stop sign, and the recognition system misread it as a 45 mph speed-limit sign.
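To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard textbook technique for crafting adversarial images. This is not the Washington team's sticker attack, and the tiny network, class labels, and randomly generated "sign image" below are illustrative assumptions; the point is simply that a pixel-level change too small to matter to a human eye can change what a classifier sees.

```python
# Minimal FGSM sketch (illustrative only; NOT the University of Washington
# researchers' physical sticker attack). A toy CNN and a random image stand
# in for a real traffic-sign recognizer and a real photograph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySignClassifier(nn.Module):
    """Toy stand-in for a traffic-sign recognition network."""
    def __init__(self, num_classes: int = 3):  # e.g. stop / right turn / speed limit
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinySignClassifier().eval()
image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "photo of a sign"
true_label = torch.tensor([0])                        # pretend class 0 = stop sign

# Forward pass, then compute the gradient of the loss with respect to the pixels.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# FGSM step: nudge every pixel slightly in the direction that increases the loss.
# Visually the change is tiny; to the network it can be enough to flip the label.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

In the physical world, the Washington researchers achieved a comparable effect with printed stickers rather than direct pixel edits, which is part of what makes the attack so cheap to carry out.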
The ease with which the researchers fooled the cars' algorithms alarmed British government officials, who recently issued new guidelines to protect autonomous cars from being hacked. Stronger security measures need to be designed without delay, they said. The researchers agreed, saying their study should help autonomous car makers build better defenses into their vehicles. As they told The Daily Mail, "both of our attack classes do not require special resources — only access to color printer and a camera."
Currently, driverless cars are equipped with sign-recognition software but are not yet programmed to act on those signs. Financial analysts estimate, however, that the transition may not be far off. Smart vehicles are already being used to access maps, travel information, and digital radio services.
Unfortunately, these vehicles run on relatively simple systems that can be easily manipulated. U.K. officials also worry that smart cars could be used to access personal data.
The prevailing assumption is that driverless cars are fast becoming a reality. The U.S. federal government, like those of other developed countries, foresees a society in which smart vehicles are the norm. That prospect has prompted officials to write new guidelines on the development and maintenance of autonomous vehicles. Spokespeople for the U.S. Department of Transportation (DOT) call their plan "the most comprehensive national, automated vehicle policy that the world has ever seen."
Projections aside, a number of people still say that autonomous vehicles are being developed too fast and too recklessly. They point to the accidents that have already involved smart vehicles. The most widely cited example is the death of Joshua Brown, who was operating his Tesla Model S on Autopilot in May 2016 when the car, inexplicably, crashed into an 18-wheel tractor-trailer. Tesla has since released an update to its Autopilot and says the software is safer than ever. (Related: Google gives driverless cars the green light for business, but are we really ready for AI piloted vehicles on our roads?)
Keep up-to-date on the latest technology news on FutureScienceNews.com.