The frameworks were called RoboTurk and SURREAL. Each framework could operate on its own, but they worked best when used together.
In RoboTurk, a human used a smartphone and a browser to send commands to a robotic arm in real time. The human guided the robot as it grabbed objects and performed other tasks.
Meanwhile, SURREAL allowed a robot to run through more than one learning experience at the same time. This parallel capability greatly increased the speed at which a robot learned how to perform a job.
The Stanford researchers explained that their frameworks worked by consolidating large amounts of data and applying reinforcement learning on a large scale. This combination of recorded information and human-directed teaching helped robots learn and master new skills.
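The article does not show any of the Stanford code, but the general idea of mixing recorded human demonstrations with a robot's own trial-and-error experience can be sketched in a few lines of Python. Everything below, including the ReplayBuffer class and its fields, is a hypothetical illustration rather than the actual RoboTurk or SURREAL implementation.

```python
import random

class ReplayBuffer:
    """Pools transitions from human demonstrations and from the robot's own attempts."""
    def __init__(self):
        self.transitions = []

    def add(self, state, action, reward, from_human):
        self.transitions.append({"state": state, "action": action,
                                 "reward": reward, "from_human": from_human})

    def sample(self, batch_size):
        # Each training batch mixes demonstration data with self-collected experience.
        return random.sample(self.transitions, min(batch_size, len(self.transitions)))

buffer = ReplayBuffer()

# Recorded human demonstrations (e.g., phone-guided pick-and-place attempts).
for i in range(100):
    buffer.add(state=i, action="pick", reward=1.0, from_human=True)

# The robot's own trial-and-error attempts.
for i in range(100):
    buffer.add(state=i, action="flail", reward=0.0, from_human=False)

batch = buffer.sample(32)
print(sum(t["from_human"] for t in batch), "of", len(batch), "samples came from human guidance")
```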
Stanford researcher Yuke Zhu demonstrated the complete RoboTurk-SURREAL framework. He used the RoboTurk app on his iPhone to direct a red robotic arm named Bender during a test.
The brief experiment consisted of a pick-and-place job. Bender the robot arm was tasked with recognizing an object, picking it up, and placing it in the correctly labeled container. The entire activity resembled the claw-machine (UFO catcher) games found in arcades.
By motioning with his phone, Zhu guided Bender to select the correct target. The interaction between him and the robot was described as similar to that of a father and his mechanical child.
Robots learn in one of two ways: Either they study large sets of data, or they explore their environment and interact with everything they come across. The latter often leads to the amusing sight of robot arms flailing at random.
However, if a human shows a robot what to do during a training session, the robot learns at a much faster rate. In a similar fashion, parents hold their children's hands while teaching them simple tasks like washing their hair or brushing their teeth.
Of course, the effectiveness of the lesson depends on the human helper. During the RoboTurk demonstration, Zhu misjudged a movement and pressed a control too hard, and Bender dropped the ball.
The Stanford researchers added both the successful and unsuccessful outcomes of the RoboTurk session to SURREAL's database of learning experiences. Bender and the other robots can draw from this expanding pool of background knowledge whenever they attempt the same task again.
Using SURREAL's parallel learning system, a robot can go through thousands of simulations at the same time. The ability to run multiple simulations all at once saves a lot of time.
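Again, the actual SURREAL code is not shown in the article; the sketch below only illustrates the general idea of collecting experience from many simulated episodes in parallel, using Python's standard multiprocessing pool and a made-up toy episode function.

```python
from multiprocessing import Pool
import random

def run_episode(seed):
    """Toy stand-in for one simulated pick-and-place episode."""
    random.seed(seed)
    reward = 0.0
    for _ in range(50):
        reward += random.random()  # placeholder for the simulator's reward signal
    return reward

if __name__ == "__main__":
    # Run many simulated episodes at the same time instead of one after another.
    with Pool(processes=8) as pool:
        rewards = pool.map(run_episode, range(1000))
    print("episodes collected:", len(rewards))
    print("average reward:", sum(rewards) / len(rewards))
```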
Zhu and his teammates are confident that robots will become an important part of daily life for humans. The machines are expected to take over repetitive tasks considered too dangerous or too dull for a person, such as harvesting crops.
"You shouldn't have to tell the robot to twist its arm 20 degrees and inch forward 10 centimeters," Zhu said. "You want to be able to tell the robot to go to the kitchen and get an apple."
Robotics.news can tell you more about the ways by which robots are learning to do the same things as humans.