major challenges. He says the biggest
problem is trying to compile code written in an easy-to-use, high-level language down to the hardware of a camera. He likens the challenge to the early
days of shading languages and graphics
chips. Shaders used to be fixed functions that programmers could manipulate in only certain limited ways. However, graphics chipmakers changed
their hardware to accommodate new
programming languages. As a result,
says Levoy, the shader languages got
better in a virtuous cycle that resulted
in several languages that can now be
used to control the hardware of a graphics chip at a very fine scale.
“We’d like to do the same thing
with cameras,” Levoy says. “We’d like
to allow a researcher to program a
camera in a high-level language, like
C++ or Python. I think we should have
something in a year or two.”
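As a rough illustration of what that kind of programmability could mean, the Python sketch below steps a hypothetical camera through an exposure bracket. The Camera class and its methods are invented for this example only; they are not part of Levoy's platform or any announced API.

```python
# Hypothetical sketch of per-shot camera control in Python. The Camera class
# and its methods are invented for illustration; they do not correspond to
# any real camera API.

class Camera:
    """Stand-in for a programmable camera."""

    def configure(self, exposure_ms, iso, focus_m):
        # Record the per-shot capture parameters (a real camera would push
        # these settings to the sensor, lens, and flash hardware).
        self.settings = {"exposure_ms": exposure_ms, "iso": iso, "focus_m": focus_m}

    def capture(self):
        # Trigger a capture and return the frame plus the settings used.
        # Pixel data is omitted because this is only a sketch.
        return {"settings": dict(self.settings), "pixels": None}


# Example: an exposure bracket, the kind of frame-by-frame control a
# researcher might script on a programmable camera.
cam = Camera()
burst = []
for exposure in (5, 20, 80):  # exposure times in milliseconds
    cam.configure(exposure_ms=exposure, iso=100, focus_m=2.0)
    burst.append(cam.capture())
```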
In addition to developing an open
source platform for computational
photography, Levoy is working on applying some of his research in this
area to microscopy. One of his projects embeds a microlens array in a microscope with the goal of being able
to refocus photographs after they are
taken. Because of the presence of multiple microlenses, the technology allows for slightly shifting the viewpoint
to see around the sides of objects—
even after capturing an image. With
a traditional microscope, you can of
course refocus on an object over time
by moving the microscope’s stage up
and down. But if you are trying to capture an object that is moving or a microscopic event that is happening very
quickly, being able to take only a single
shot before the scene changes is a serious drawback. In addition to allowing images to be refocused after they are captured, Levoy's microlens array also makes it possible to render objects in three dimensions.
“Because I can take a single snapshot
and refocus up and down,” he says, “I
can get three-dimensional information from a single instant in time.”
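The after-the-fact refocusing Levoy describes can be illustrated with a simple shift-and-add over the sub-aperture views recorded behind a microlens array: shifting each view in proportion to its offset from the central view, then averaging, moves the plane of focus computationally. The Python sketch below assumes the views have already been extracted from the raw sensor data and uses whole-pixel shifts for brevity; practical implementations interpolate sub-pixel shifts.

```python
# A minimal sketch of shift-and-add refocusing for microlens (light field)
# data: each microlens records the scene from a slightly different viewpoint,
# and shifting those views against each other before averaging moves the
# plane of focus after capture.

import numpy as np

def refocus(subapertures, alpha):
    """Shift-and-add refocus.

    subapertures: dict mapping (u, v) viewpoint offsets to 2D grayscale images.
    alpha: refocus parameter; varying it sweeps the synthetic focal plane.
    """
    acc = None
    for (u, v), img in subapertures.items():
        # Shift each view in proportion to its offset from the central view
        # (whole-pixel shifts only, for brevity).
        shifted = np.roll(img, (int(round(alpha * v)), int(round(alpha * u))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)

# Toy usage with random data standing in for extracted sub-aperture views.
views = {(u, v): np.random.rand(64, 64) for u in range(-2, 3) for v in range(-2, 3)}
near_focus = refocus(views, alpha=1.5)
far_focus = refocus(views, alpha=-1.5)
```

Sweeping alpha produces a stack of images focused at different depths from one exposure, which is how depth information can be recovered from a single instant in time.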
These and other developments in
computational photography are leading to a vast array of new options for researchers, industry, and photography
enthusiasts. But as with any advanced technology whose interface is designed to be used by humans, one of
the major challenges for computational photography is usability. While computer chips can do the heavy lifting for
many of these new developments, the
perception that users must work with
multiple images or limitless settings to
generate a good photo might be a difficult barrier to overcome. Levoy points
to the Casio EX-F1 camera as a positive
step toward solving this usability problem. He says the EX-F1, which he calls
the first computational camera, is a
game changer. “With this camera, you
can take a picture in a dark room and
it will take a burst of photos, then align
and merge them together to produce a
single photograph that is not as noisy
as it would be if you took a photograph
without flash in a dark room,” he says.
“There is relatively little extra load on
the person.”
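The align-and-merge step Levoy describes can be sketched in a few lines: register each frame of the burst to the first one and average the aligned stack, so random sensor noise cancels while scene detail is preserved. The example below uses OpenCV's ECC image alignment and assumes same-sized, single-channel float32 frames; it is an illustration of the general technique, not the EX-F1's actual pipeline.

```python
# A sketch of burst align-and-merge denoising: register each noisy frame to
# the first one, then average the aligned stack.

import cv2
import numpy as np

def merge_burst(frames):
    reference = frames[0]
    accumulator = reference.astype(np.float32).copy()
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    for frame in frames[1:]:
        # Estimate the translation that best aligns this frame to the reference.
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(reference, frame, warp,
                                       cv2.MOTION_TRANSLATION, criteria)
        aligned = cv2.warpAffine(frame, warp,
                                 (reference.shape[1], reference.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        accumulator += aligned
    # Averaging N aligned frames reduces random noise by roughly sqrt(N).
    return accumulator / len(frames)
```

Averaging a burst of short, well-aligned exposures is what lets such a camera stand in for a flash in a dark room without asking much more of the photographer.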
Levoy predicts that cameras will follow the path of mobile phones, which,
for some people, have obviated the
need for a computer. “There are going
to be a lot of people with digital cameras who don’t have a computer and
never will,” he says. “Addressing that
community is going to be interesting.” He also predicts that high-end
cameras will have amazing flexibility and point-and-shoot cameras will
take much better pictures than they
do now. Nayar is of a similar opinion.
“One would ultimately try to develop a
camera where you can press a button,
take a picture, and do any number of
things with it,” he says, “almost like
you’re giving the optics new life.”
Based in Los Angeles, Kirk L. Kroeker is a freelance editor and writer specializing in science and technology.
Cognitive Computing
IBM’s Brain-Like Computer
IBM has received a $4.9 million grant from DARPA to lead an ambitious, cross-disciplinary research project to create a new computing platform: electronic circuits that operate like a brain.

“The mind has an amazing ability to integrate ambiguous information across the senses, and it can effortlessly create the categories of time, space, object, and interrelationship from the sensory data,” Dharmendra Modha, the IBM researcher who is leading the project, told BBC News. “The key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function, and behavior of the brain.”

The cognitive computing initiative will include neuroscientists, computer and materials scientists, and psychologists from IBM and five U.S. universities. Modha believes the time is right for the project as neuroscientists have learned a great deal about the inner workings of neurons and their connecting synapses, resulting in “wiring diagrams” for the brains of simple animals. Also, supercomputing is able to simulate brains up to the complexity level of small mammals; last year a Modha-led IBM team used its Blue Gene supercomputer to simulate the 55 million neurons and half-a-trillion synapses in a mouse’s brain. “But the real challenge is then to manifest what will be learned from future simulations into real electronic devices—nanotechnology,” according to Modha.

Along with IBM Almaden Research Center and IBM T.J. Watson Research Center, Stanford University, University of Wisconsin-Madison, Cornell University, Columbia University Medical Center, and University of California, Merced are participating in the project.

Modha acknowledges that the project is very ambitious in terms of its scope and goals. “We are going not just for a home run,” he said, “but for a home run with the bases loaded.”