A roboticist from California has created a contraption that intentionally stabs people with a needle. Unlike its mechanical brethren, this is the first robot purposefully designed to inflict pain on humans.
To ignite an ethical controversy, the robot goes against author and biochemistry professor Isaac Asimov’s first and second laws of robotics, which state, “robots may not harm people,” and, “a robot must obey orders given it by human beings except where such orders would conflict with the First Law.”
The maker of the machine, Alexander Reben from Labs in Berkeley, California, chose to create the device after recognizing a widespread fear of robots, according to Fast Company.
Although no one knows exactly what provokes this fear, Reben suspects that machines’ ability to inflict physical harm on humans when things go awry, along with worries about robots taking jobs and potentially the world, bears some responsibility.
To address these fears head-on, Reben decided to create the world’s first robot intended to induce pain.
“No one’s actually made a robot that was built to intentionally hurt and injure someone,” he told Fast Company. “I wanted to make a robot that does this that actually exists … That was important, to take it out of the thought experiment realm into reality, because once something exists in the world, you have to confront it. It becomes more urgent. You can’t just pontificate about it.”
The robot consists of a black box with a mechanical arm attached to the top. When an individual places a finger within a pair of brackets, the robot detects it. The arm then swings down and pricks the individual’s finger, drawing blood.
Since the pain the robot inflicts is relatively harmless, it is unlikely that the contraption will provoke much outrage. However, Reben said he hopes his creation will draw attention from a diverse group of fields, including law, philosophy, engineering and ethics.
“These cross-disciplinary people need to come together to solve some of these problems that [not] one of them can wrap their heads around or solve completely,” Reben said.
Reben anticipates that lawyers will quarrel over liability cases centered on a robot that can hurt people, while ethicists will contemplate whether the machine should exist at all. Nevertheless, it is possible Asimov’s laws would never have protected people from robots to begin with:
“The point of the Three Laws was to fail in interesting ways; that’s what made most of the stories involving them interesting,” Ben Goertzel, Chief Scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics, explained to io9 in 2014. “So the Three Laws were instructive in terms of teaching us how any attempt to legislate ethics in terms of specific rules is bound to fall apart and have various loopholes.”
Regardless, Kate Darling, a researcher at the MIT Media Lab, told Fast Company that Reben is accountable for the robot since he created it.
“We may gradually distance ourselves from ethical responsibility for harm when dealing with autonomous robots. Of course, the legal system still assigns responsibility… but the further we get from being able to anticipate the behavior of a robot, the less ‘intentional’ the harm.”
She also thinks that as technology develops, we may have to reconsider the way we view machines.
“From a responsibility standpoint,” Darling continued, “robots will be more than just tools that we wield as an extension of ourselves. With increasingly autonomous technology, it might make more sense to view robots as analogous to animals, whose behavior we also can’t always anticipate.”
Check out the robot slicing and dicing in the video below.