By Erin Inman
Imagine taking pictures at a soccer game. Currently, photographers have two main options: set the camera to sports mode or manually adjust its settings to create a good shot. But what if you could buy an app for your camera whose sole purpose is to keep the soccer ball in focus?
This camera industry revolution, one that might lead to app stores for cameras, could be made possible by the Stanford Computer Graphics Laboratory's current research in computational photography.
Computational photography, a term coined by professor of computer science and electrical engineering Marc Levoy, refers to the use of “sensing strategies or algorithmic techniques to enhance or extend the capabilities of digital photography.”
Such enhancements include high dynamic range (HDR), panoramas and all-focus imaging. HDR preserves detail in both the brightest and dimmest areas of a photograph; panoramas stitch multiple shots into a scene only a wide-angle lens could capture; and all-focus imaging keeps every depth of a scene in focus.
While these enhancements are possible in a computer program such as Photoshop, relying solely on post-processing provides no immediate feedback on the device itself, according to Andrew Adams Ph.D. '11, a lab assistant and graduate student in computer science.
“Perhaps the panorama is missing a corner,” Adams said. “You realize this only after the computer program stitches the shots together.”
Additionally, bursts of pictures are slow to acquire when settings are adjusted by hand. According to Adams, however, an algorithm in the camera could change camera settings faster than a human hand.
To address these limitations of feedback and speed, the lab creates in-camera algorithms capable of producing raw images of similar quality to the output of photo-editing programs.
Adams presented the example of an indoor party, a scene with little light. The perfect shot necessitates a fusion of two different shots: one with a short exposure and one with a long exposure. An algorithm, he said, can take both shots in quick succession and fuse the images into one raw image.
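The article does not describe the lab's actual fusion algorithm, but the idea can be illustrated with a minimal sketch: scale the short exposure up to match the long one, then substitute it wherever the long exposure has blown out. The function name, the `exposure_ratio` parameter and the clipping `threshold` are all assumptions made for this example.

```python
import numpy as np

def fuse_exposures(short_img, long_img, exposure_ratio, threshold=0.95):
    """Naive two-shot fusion (illustrative sketch, not the lab's method).

    short_img, long_img: float arrays with values in [0, 1].
    exposure_ratio: long exposure time divided by short exposure time.
    """
    # Scale the short exposure so its brightness matches the long one;
    # values may exceed 1.0, preserving highlight detail beyond clipping.
    scaled_short = short_img * exposure_ratio
    # Pixels where the long exposure clipped (blown-out highlights)
    clipped = long_img >= threshold
    # Keep the cleaner long exposure everywhere else
    return np.where(clipped, scaled_short, long_img)
```

The long exposure contributes low-noise shadow detail; the short exposure fills in only the highlights that the long exposure lost.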
He also gave the example of an individual taking pictures with a long exposure in a setting bound to yield blur. According to Adams, an algorithm could take eight successive shots and save only the least blurred.
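Again the article gives no implementation details, but selecting the least blurred frame of a burst can be sketched by scoring each frame's sharpness, for instance with the variance-like energy of its intensity gradients (a common heuristic, assumed here rather than taken from the lab's work):

```python
import numpy as np

def sharpness(img):
    """Score a grayscale frame: mean squared gradient magnitude.
    Blurrier frames have weaker edges, hence a lower score."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx**2 + gy**2).mean())

def least_blurred(frames):
    """Return the frame from a burst with the highest sharpness score."""
    return max(frames, key=sharpness)
```

A camera running such a loop over eight successive shots could discard seven and save only the winner, exactly the behavior Adams describes.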
Adams noted how cameras, although notorious for being the “most ubiquitous computers out there,” are currently not programming-friendly.
Levoy agreed, adding that researchers outside of the industry are “clueless as to how existing camera modes [night, sports, portrait, etc.] work.”
“The camera industry creates beautiful pictures, but they’re mysterious,” Adams said.
Since no open-source camera existed, the Stanford lab drew on the financial support of Nokia, Adobe Systems, Kodak and Hewlett-Packard to create the Frankencamera. The camera, which runs the Linux operating system, allows the lab to create and test its own algorithms.
Currently, the lab plans to distribute Frankencameras to other universities to "push the API out there and see what people can do," Adams said.
While the Frankencamera serves as a research platform for developing algorithms, it is not intended as the future of the camera industry. Instead, the lab hopes to "influence industry partners to open their cameras' interfaces," Levoy said.
Although computational photography has caught on for programmable consumer cell phone cameras, as evidenced by a look at the iTunes app store, Levoy acknowledged that the camera industry has yet to embrace the technology.
While companies have adopted computational photography tricks such as HDR and panorama modes into closed-source cameras, an open-source camera has yet to be announced, let alone made available to consumers.
Contact Erin Inman at [email protected]