Friday, October 13, 2017 by Frances Bloomfield
The ability to see what lies around a corner sounds useful yet implausible. Yet researchers from the Massachusetts Institute of Technology (MIT) have made it a reality. The team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed new software that could one day confer this superpower on any smartphone user.
Dubbed the “CornerCameras” system, this camera technology works by capturing subtle changes in the light reflected off living beings or moving objects. To be more precise, it detects what is known as the “penumbra,” the faint, fuzzy shadow that small amounts of reflected light cast on the ground within the camera’s line of sight. CornerCameras then assembles an image based on the slight changes in these shadows to determine what lies around the turn, where it is, and how fast it’s moving, all in real time.
“Even though those objects aren’t actually visible to the camera, we can look at how their movements affect the penumbra to determine where they are and where they’re going,” MIT electrical engineer Katherine Bouman explained to the DailyMail.co.uk. “In this way, we show that walls and other obstructions with edges can be exploited as naturally-occurring ‘cameras’ that reveal the hidden scenes beyond them.”
Though there are existing technologies for seeing around obstructions, most of them rely on special lasers. These “time-of-flight” (ToF) cameras work by emitting pulses of light and measuring how long the pulses take to bounce back to a 3D depth sensor. Unfortunately, most ToF cameras command a high price and are unable to function properly in the presence of ambient lighting. By contrast, CornerCameras can work under a variety of lighting conditions as long as it isn’t totally dark. Outdoor tests have even found that the system can perform well in the rain, something that ToF cameras are unable to do.
Bouman elaborated: “Given that the rain was literally changing the color of the ground, I figured that there was no way we’d be able to see subtle differences in light on the order of a tenth of a percent. But because the system integrates so much information across dozens of images, the effect of the raindrops averages out, and so you can see the movement of the objects even in the middle of all that activity.”
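The averaging Bouman describes can be illustrated with a toy simulation. In this hypothetical sketch (the specific noise levels and frame count are illustrative assumptions, not values from the actual system), a hidden object shifts the ground’s brightness by roughly a tenth of a percent, as mentioned in the quote, while per-frame noise from raindrops is about ten times larger. Averaging many frames shrinks the noise by roughly the square root of the number of frames, letting the tiny penumbra signal emerge:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the real CornerCameras system):
true_signal = 0.001   # ~0.1% fractional brightness change from the penumbra
noise_std = 0.01      # per-frame noise (raindrops, sensor noise), ~10x larger
n_frames = 500        # number of frames integrated over

# Each frame observes the tiny signal buried in much larger noise.
frames = true_signal + rng.normal(0.0, noise_std, size=n_frames)

# A single frame is dominated by noise...
single_frame_error = abs(frames[0] - true_signal)

# ...but averaging N frames reduces the noise by roughly sqrt(N),
# so the raindrops "average out" and the penumbra signal survives.
averaged = frames.mean()
averaged_error = abs(averaged - true_signal)

print(f"single-frame error: {single_frame_error:.5f}")
print(f"averaged error:     {averaged_error:.5f}")
```

This is only a one-pixel caricature; the real system applies this kind of integration across dozens of images of the whole penumbra region to recover the hidden object’s position and motion.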
However useful CornerCameras seems, it has a number of limitations. Poor lighting will easily throw it off, as will sudden changes in lighting conditions. The researchers intend to address these limitations in the future, especially since they plan to incorporate their technology into automotive collision avoidance systems one day.
“If a little kid darts into the street, a driver might not be able to react in time. While we’re not there yet, a technology like this could one day be used to give drivers a few seconds of warning time and help in a lot of life-or-death situations,” said Bouman. (Related: Google forced to disclose robotic vehicles being involved in traffic accidents.)
To that end, they’ve tested the system while it was mounted on a moving wheelchair. So someday soon, you could be driving a super-powered car in addition to carrying an extraordinary smartphone.
Stay updated on CornerCameras by visiting Scientific.news today.