Shree Nayar, a professor of computer
science at Columbia University. “The
game becomes interesting when you
think about optics and computation
within the same framework, designing optics for the computations and
designing computations to support
a certain type of optics.” That line of
thinking, Nayar says, has been evolving in both fields, optics on one side
and computer vision and graphics on
the other.
Multiple Images
One of the most visually striking examples of computational photography
is high dynamic range (HDR) imaging,
a technique that involves using photo-editing software to stitch together multiple images taken at different exposures.
HDR images—many of which have a
vibrant, surreal quality—are almost certain to elicit some degree of surprise in
those who aren’t yet familiar with the
technique. But the traditional process of
creating a single HDR image is painstaking and cannot be used with moving
scenes, because it requires multiple images taken at different exposures. In addition to being inconvenient, traditional HDR techniques
require expertise beyond the ability or
interest of casual photographers unfamiliar with photo-editing software.
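The merging step at the heart of that workflow can be sketched compactly. The following is a simplified illustration rather than any particular editing package's pipeline; it assumes the input frames are already aligned, in linear units, and tagged with their exposure times, and it averages per-frame radiance estimates while discounting pixels that are nearly black or nearly saturated.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge aligned, linear frames (float arrays in [0, 1]) taken at
    different exposure times into one radiance estimate.

    Each frame's pixel values are divided by its exposure time to get a
    per-frame radiance estimate; the estimates are then averaged with a
    hat-shaped weight that trusts mid-range pixels most and discounts
    pixels that are nearly black or nearly saturated.
    """
    numerator = np.zeros_like(frames[0], dtype=np.float64)
    denominator = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        weight = 1.0 - np.abs(2.0 * frame - 1.0)  # peaks at mid-gray, falls to 0
        numerator += weight * (frame / t)
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)
```

The result is a radiance map with a far wider range than any single frame; the vibrant, sometimes surreal look associated with HDR comes largely from how that map is then tone-mapped back into a displayable range.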
However, those working in computational photography have been looking
for innovative ways not only to eliminate
the time it takes to create an HDR image,
but also to sidestep the learning curve
associated with the technique.
With computational photography, people can change a camera's focal settings after a photo is taken.
“It turns out that you can do HDR
with a single picture,” says Nayar. “
Instead of all pixels having equal sensitivity on your detector, imagine that
neighboring pixels have different sunshades on them—one is completely
open, one is a little bit dark, one even
darker, and so on.” With this technique, early variations of which have
begun to appear in digital cameras,
such as recent models in the Fujifilm
FinePix line, the multiple exposures
required for an HDR image would be
a seamless operation initiated by the
user with a single button press.
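Nayar's thought experiment translates into a simple reconstruction scheme. The sketch below is a hypothetical illustration of spatially varying exposure, not Nayar's own algorithm: it assumes a repeating 2×2 pattern of known attenuation factors across the sensor, divides each pixel by its attenuation to estimate radiance, and fills in pixels that came out nearly black or saturated from the better-exposed pixels in the same cell.

```python
import numpy as np

# Hypothetical 2x2 pattern of "sunshade" attenuations over the sensor:
# one pixel fully open, its neighbors progressively darker.
ATTENUATION = np.array([[1.0, 0.5],
                        [0.25, 0.125]])

def radiance_from_single_shot(raw, low=0.02, high=0.98):
    """Estimate scene radiance from one spatially-varying-exposure frame.

    raw: linear sensor image in [0, 1]. Each pixel is divided by the
    attenuation at its position in the repeating 2x2 pattern; pixels that
    are nearly black or nearly saturated are marked invalid and replaced
    by the average of the valid pixels in their 2x2 cell.
    """
    h, w = raw.shape
    atten = np.tile(ATTENUATION, (h // 2 + 1, w // 2 + 1))[:h, :w]
    radiance = raw / atten
    valid = (raw > low) & (raw < high)

    out = radiance.copy()
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            cell_r = radiance[y:y + 2, x:x + 2]
            cell_v = valid[y:y + 2, x:x + 2]
            if cell_v.any():
                out[y:y + 2, x:x + 2][~cell_v] = cell_r[cell_v].mean()
    return out
```

A real implementation would interpolate more carefully, but the principle is the one Nayar describes: every small neighborhood already contains the multiple exposures that would otherwise require separate shots.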
Another research area in computational photography is depth of field. In
traditional photography, if you want a
large depth of field—where everything
in a scene is in focus—the only way to
do so is to make the camera’s aperture
very small, which limits how much light the camera
gathers and causes images to look grainy. Conversely, if you
want a good picture in terms of brightness and color, then you must open
the camera’s aperture, which results
in a reduced depth of field. Nayar,
whose work involves developing vision sensors and creating algorithms
for scene interpretation, has been able
to extend depth of field without compromising light by moving an image
sensor along a camera’s optical axis.
“Essentially what you are doing is that
while the image is being captured, you
are sweeping the focus plane through
the scene,” he explains. “And what you
end up with is, of course, a picture that
is blurred, but is equally blurred everywhere.” Applying a deconvolution
algorithm to the blurred image can then recover a sharp picture that,
Nayar says, doesn't compromise image quality.
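That deconvolution step can be illustrated generically. The following sketch is not Nayar's method; it is a standard Wiener (frequency-domain) deconvolution under the assumption that the focal sweep leaves the whole image blurred by a single, known point spread function, which in practice would have to be measured or modeled for the particular sweep.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Deblur an image that is (approximately) uniformly blurred by `psf`.

    blurred: 2D float image. psf: small 2D kernel summing to 1 that
    describes the blur integrated over the focal sweep. Works in the
    Fourier domain: divides by the blur's transfer function, regularized
    by a noise-to-signal term so near-zero frequencies don't explode.
    """
    h, w = blurred.shape
    # Embed the PSF in a full-size array, centered at the origin.
    psf_full = np.zeros((h, w))
    ph, pw = psf.shape
    psf_full[:ph, :pw] = psf
    psf_full = np.roll(psf_full, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_full)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_to_signal)  # Wiener filter
    return np.real(np.fft.ifft2(F))
```

Because the blur is, as Nayar puts it, the same everywhere, a single kernel serves the whole frame, which is what makes the sweep-then-deconvolve approach tractable.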
One of the major issues that those
working in computational photography face is testing their developments
on real-world cameras. With few exceptions, researchers
working in this area don't
take apart cameras or try to make
their own, which means most research teams are limited to what they
can do with existing cameras and a sequence of images. “It would be nicer if
they could program the camera,” says
Marc Levoy, a professor of computer
science and electrical engineering at
Stanford University. Levoy, whose research involves light-field sensing and
applications of computer graphics
in microscopy and biology, says that
even those researchers who take apart
cameras to build their own typically
do not program them in real time to
do on-the-spot changes for different
autofocus algorithms or different metering algorithms, for example.
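To make concrete the kind of routine researchers would like to swap in, here is a small, purely hypothetical example of a metering algorithm, not tied to any real camera's API: it meters a linear preview frame with center-weighting and returns the exposure correction needed to hit a mid-gray target.

```python
import numpy as np

def center_weighted_exposure_adjustment(preview, target_mean=0.18):
    """Suggest an exposure multiplier from a linear preview frame.

    Hypothetical metering routine: weights pixels by a Gaussian centered
    on the frame, computes their weighted mean brightness, and returns
    the factor by which exposure should change to reach the target
    mid-gray level (0.18 in linear units).
    """
    luma = preview.mean(axis=2) if preview.ndim == 3 else preview
    h, w = luma.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-(((ys - h / 2) / (h / 3)) ** 2 +
                       ((xs - w / 2) / (w / 3)) ** 2))
    metered = np.sum(weights * luma) / np.sum(weights)
    return target_mean / max(metered, 1e-6)
```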
“No researchers have really addressed those kind of things because
they don’t have a camera they can play
with,” he says. “The goal of our open-source camera project is to open a new
angle in computational photography
by providing researchers and students
with a camera they can program.” Of
course, cameras do have computer
software in them now, but the vast majority of the software is not available to
researchers. “It’s a highly competitive,
IP-protected, insular industry,” says
Levoy. “And we’d like to open it up.”
But in developing a programmable
platform for researchers in computational photography, Levoy faces several
Photograph by Mike Warot: a photo containing 62 frames, taken through a group of trees, of Chicago's Newberry Library.