Researchers at the Stanford Computational Imaging Lab have created a camera that enables robots to generate four-dimensional images across a nearly 140-degree field of view.
Donald Dansereau, a postdoctoral fellow in electrical engineering and co-creator of the camera, said the idea came from the need for cameras that deliver information fast enough for robots to make quick decisions.
“The problem is, consumer cameras aren’t really well suited to robotics applications,” Dansereau said. “So we asked ourselves, ‘what would be the right camera for a robot that needs to make decisions quickly and with low power?’”
Dansereau said the answer came in the form of a light field camera, which places an array of microlenses between the camera’s sensor and the outer lens to take in more information. In addition to the image captured by a conventional 2D camera, the newly developed camera records the direction and distance of the light hitting the lens, generating what is known as a 4D image.
“A light-field camera builds an array of many eyes covering an area, like a bee’s eye,” Dansereau said. “[It] is like measuring the light through a window, whereas a normal camera is like looking through a peephole.”
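The “many eyes” idea can be made concrete with a toy sketch. The code below is illustrative only, not the lab’s actual data format: it models a light field as a 4D array indexed by angular position (which microlens view) and spatial position (which pixel), with hypothetical dimensions, and shows how averaging over the angular dimensions collapses it back to an ordinary 2D photo.

```python
import numpy as np

# A light field samples radiance by both direction (u, v: which microlens
# "eye" saw the ray) and position (s, t: where it landed), giving a 4D
# array L[u, v, s, t] instead of a conventional 2D image I[s, t].
# All dimensions here are hypothetical, chosen only for illustration.
N_U, N_V = 5, 5      # angular samples (the array of "eyes")
N_S, N_T = 64, 64    # spatial samples per view

# Random stand-in for real sensor data.
light_field = np.random.default_rng(0).random((N_U, N_V, N_S, N_T))

# A conventional photo corresponds to integrating over direction --
# collapsing the "window" of views back into a single "peephole".
photo_2d = light_field.mean(axis=(0, 1))
```

The extra two dimensions are what let a light field camera recover where light came from, not just where it landed.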
Dansereau also noted how important the camera’s speed is to informing the robot. He said normal cameras rely on multistep algorithms, which prolong information gathering.
“Many of the algorithms that are popular in computer vision today are not very compatible with [the speed necessary],” he said. “Even the fast ones are multi-layered: They take a long time to come up with the first answer. The algorithms we use on our light field cameras are one-step algorithms — they’re very fast.”
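One classic example of a single-pass light field operation is shift-and-sum refocusing: each angular view is shifted in proportion to its offset and the views are summed once, with no iteration. This sketch is a standard textbook technique, not necessarily one of the lab’s algorithms, and the `slope` parameter and array shapes are hypothetical.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-sum refocusing in a single pass over the angular views.

    light_field: 4D array of shape (N_U, N_V, N_S, N_T).
    slope: pixels of shift per unit of angular offset; selects the
           synthetic focal plane (slope=0 reproduces the plain average).
    """
    n_u, n_v, n_s, n_t = light_field.shape
    out = np.zeros((n_s, n_t))
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view according to its offset from the center eye.
            du = int(round(slope * (u - n_u // 2)))
            dv = int(round(slope * (v - n_v // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (n_u * n_v)

# Usage on random stand-in data (shapes chosen for illustration):
lf = np.random.default_rng(0).random((5, 5, 32, 32))
image = refocus(lf, slope=0.0)
```

Because the whole computation is one linear sweep over the data, it fits the “first answer, fast” requirement Dansereau describes, in contrast to multi-layered pipelines that refine an answer over many passes.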
The lab’s initial goal was to improve robot efficiency, but the camera can also be used in other technology, such as drones. Dansereau said this is especially important as the popularity of autonomous robotics increases.
“We send a drone out into the world to deliver a package [and] it has to deal with the dynamic environment: people moving unpredictably, trees where they are not expected,” Dansereau explained. “Self-driving cars have to deal with cyclists, weather [and] rain. The truth is, just low light will break a lot of computer vision. Having better cameras helps us to better address these challenges.”
Contact Mallika Singh at mallika12 ‘at’ gmail.com.