and expensive lenses that such images
would normally require. The difficulty
in attaining such pixel-rich images lies
not in the sensor, as gigapixel sensors are available today. Rather, the difficulty lies in the limits on resolving power imposed by the geometric aberrations inherent in conventional optics.
Nayar’s lenses, which are spherical
in shape, produce coded output that is
manipulated by algorithms to remove
any aberrations. The most-advanced
lens design has a number of smaller
“relay lenses” positioned on the surface
of a ball lens to focus small portions of
the overall image onto megapixel sensors positioned in a hemispherical array around the lenses. Compared with
other gigapixel cameras, this camera
is small, less than 10 cm in diameter,
and relatively inexpensive to produce,
Nayar says.
Nayar’s work is partly funded by the
U.S. Defense Advanced Research Projects Agency, and has obvious defense
and security applications. “If I can
recognize your face from a mile away,
that’s a game changer,” says Nayar.
Post-Processing Software
Adobe Systems, manufacturer of the
flagship Photoshop image-processing
software, has conducted substantial research in plenoptics, in which multiple
images or perspectives are generated
with a single push of the shutter button and then combined. Plenoptics enables the Lytro light field camera’s variable focusing, the variable brightness or exposure of high-dynamic-range imaging, and
variable perspective. “All of these can
be done after the fact in software,” says
Bryan O’Neil Hughes, senior product
manager for Photoshop.
Although Adobe does not sell cameras, it developed a lens similar to an insect’s compound eye, consisting of hundreds of glued-together micro-lenses, each of which can sample incoming light from a different
perspective. With an array of new algorithms and a 60-megapixel sensor, the
Adobe lens was able to achieve variable focus after the fact, as the Lytro
camera does.
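To make the refocusing idea concrete, the sketch below shows the shift-and-sum technique that plenoptic designs enable. It assumes a 4D array of sub-aperture views has already been extracted from the micro-lens data; the array layout and the function itself are illustrative, not Adobe’s or Lytro’s actual pipeline.

import numpy as np

def refocus(light_field, alpha):
    # light_field: array of shape (U, V, H, W), one sub-aperture image
    # per (u, v) viewpoint, as might be pulled from a micro-lens array
    # (hypothetical input layout).
    # alpha: relative focal depth; 1.0 keeps the original focal plane.
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each viewpoint in proportion to its offset from the
            # optical center, then average all the shifted views.
            du = int(round((u - U // 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - V // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

Varying alpha re-renders the same exposure at different focal depths, which is the after-the-fact focusing described above.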
Adobe has also developed experimental, computationally intensive
software that can provide “deblurring”
capabilities by working on traditional
images. Part of Adobe’s trick is to distin-
guish blur caused by camera movement
from blur caused by subject movement.
Unlike the plenoptic research involving
multiple images, Adobe’s deblurring
works with a single image.
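Adobe has not published the algorithm itself, but the non-blind half of such a pipeline can be illustrated with standard Wiener deconvolution: once a camera-shake kernel has been estimated, a sharper image can be recovered in the frequency domain. The sketch below assumes a grayscale image and an already-estimated kernel; it is a textbook technique, not Adobe’s code.

import numpy as np

def wiener_deblur(blurred, kernel, noise_to_signal=0.01):
    # blurred: 2D grayscale image degraded by a known (or estimated) blur.
    # kernel:  2D point spread function, e.g. a camera-shake trajectory.
    H, W = blurred.shape
    kh, kw = kernel.shape
    # Pad the kernel to image size and move its center to the origin so
    # the FFT treats it as a circular convolution kernel.
    psf = np.zeros((H, W))
    psf[:kh, :kw] = kernel
    psf = np.roll(psf, shift=(-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(psf)
    B = np.fft.fft2(blurred)
    # Wiener filter: invert the kernel spectrum, regularized by the
    # noise-to-signal ratio to avoid amplifying noise where K is small.
    restored = np.conj(K) * B / (np.abs(K) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(restored))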
Helping Photographers
Sensors, insect-eye lenses, lightning-fast exposures, and smart algorithms
have provided significant advantages,
but why not help photographers avoid
some of the age-old pitfalls in shooting? That is the approach taken by computer scientist Stephen Brewster and
colleagues at the University of Glasgow.
Their work centers on the cameras in
Android smartphones because they are
more advanced than conventional digital cameras in many ways and because
of their open architecture, fast processors, and increasing popularity.
One of Brewster’s experimental
cameras uses a smartphone’s accelerometer to warn the photographer that
the camera is moving too much to get a
sharp picture. It does this by displaying
a warning in the camera’s image display or, for people who are holding the phone away from their head, by an audible or vibrational cue. And for users
who do not understand the luminance
histogram that appears on the rear of
many digital cameras, Brewster’s team
has created a camera that emits a low
tone when an image is underexposed
and a high tone when it is overexposed.
Another set of techniques uses the face-detection software built into Android
smartphones to help the photographer
improve composition.
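A rough sketch of the movement and exposure cues described above might look like the following; the thresholds and the accelerometer and histogram plumbing are hypothetical stand-ins, not Brewster’s published values.

import numpy as np

SHAKE_THRESHOLD = 1.5     # jitter (m/s^2) considered too much movement (assumed)
UNDEREXPOSED_MEAN = 0.25  # mean luminance below this triggers the low tone (assumed)
OVEREXPOSED_MEAN = 0.75   # mean luminance above this triggers the high tone (assumed)

def movement_warning(accel_samples):
    # accel_samples: recent (x, y, z) accelerometer readings.
    # Returns True if the jitter suggests the shot will be blurred.
    jitter = np.std(np.asarray(accel_samples), axis=0)
    return float(np.linalg.norm(jitter)) > SHAKE_THRESHOLD

def exposure_tone(preview_frame):
    # preview_frame: 2D array of luminance values in [0, 1].
    # Returns "low", "high", or None for under-, over-, or well-exposed frames.
    mean_luma = float(np.mean(preview_frame))
    if mean_luma < UNDEREXPOSED_MEAN:
        return "low"
    if mean_luma > OVEREXPOSED_MEAN:
        return "high"
    return None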
Image Viewing and Retrieval
The Lytro light field camera is about
getting objects in focus, but it is also
about giving photographers multiple
choices long after the picture is taken.
Microsoft and others are taking that
trend further, partly to overcome limitations in current displays. The average
computer monitor lacks the contrast
ratio and resolution to properly view
images with billions of pixels and a very
large range of luminance.
Microsoft’s HD View software allows smooth panning and zooming
across gigapixel images, including
panoramas. It also adjusts the dynamic range of the portion of the photo
being viewed by the user to the much
more limited luminance range of his
or her monitor so that, for example,
the viewer can see good detail when
looking at a bright sky and also when looking into the dark shadow under a tree, even when both appear in the same photograph.
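The per-viewport adjustment can be pictured with a simple sketch: stretch the luminance of only the region being viewed to the display’s range, so shadows and highlights each become legible when examined on their own. This is an illustrative approximation, not HD View’s actual tone mapping.

import numpy as np

def map_viewport_to_display(hdr_image, viewport, gamma=2.2):
    # hdr_image: 2D array of scene luminance with a very large range.
    # viewport:  (row0, row1, col0, col1) crop the user is currently viewing.
    # Returns an 8-bit tile whose contrast is stretched to fit that region.
    r0, r1, c0, c1 = viewport
    tile = hdr_image[r0:r1, c0:c1].astype(np.float64)
    lo, hi = np.percentile(tile, (1, 99))          # robust region range
    tile = np.clip((tile - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    tile = tile ** (1.0 / gamma)                   # simple display gamma
    return (tile * 255).astype(np.uint8)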
Empowering users after a photo
is taken is at the core of the research
done by Carnegie Mellon’s Alexei
Efros. His remote cloning technique
works surprisingly well because photographers tend to shoot similar
things over and over, he says. But,