imum set of code changes required to
exploit that parallelism. “Put another
way,” Stewart says, “how much does
Amdahl’s Law screw me up?”
A Firm Ceiling
For years, the promise of parallel computing has run afoul of the harsh reality
of an axiom posited by computer architect Gene Amdahl in 1967. Amdahl’s
Law puts a firm ceiling on the benefit
of converting code from sequential to
parallel. It states that the speedup of
an application using multiple processors in parallel computing is limited by
the time needed for the sequential fraction of the program. The upshot: going
down the path of parallelism will not
necessarily reap rewards.
“If 50% of your program is serial and
the other half can be parallelized, the
biggest speedup you’re going to see is
a factor of two,” says Microsoft’s Larus.
“It doesn’t matter how many cores you
have. And that doesn’t seem very compelling if you’re going to have to rewrite
a huge amount of software.” Amdahl’s
Law, Larus says, might very well mean
that a wholesale rush to convert serial
applications to parallel platforms in
order to preserve a Moore’s Law pace
of progress would be misguided. In
many cases, it will be more cost-effective to improve serial applications’
performance via algorithmic advances
and custom circuitry rather than going
for the marginal return on investment
that parallelizing those applications
might provide.
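The ceiling Larus describes follows directly from the formula. A minimal sketch (the function name is illustrative, not from the article) that computes Amdahl's bound for a program with serial fraction s running on n processors:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Maximum speedup under Amdahl's Law: 1 / (s + (1 - s) / n),
    where s is the fraction of the program that must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Larus's example: half the program is serial, half parallelizable.
print(amdahl_speedup(0.5, 4))         # 1.6x on four cores
print(amdahl_speedup(0.5, 1_000_000)) # approaches, but never reaches, 2x
```

No matter how large n grows, the result is capped at 1/s, which is why "it doesn't matter how many cores you have."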
For Larus, reconciling the true computational needs of future applications
with the overall move to parallel-capable
multicore processors must entail
rigorous evaluation of what type of application might deliver the most benefit to users. Because the bulk of general-purpose computing has been done
successfully on serial platforms, he says
it’s been difficult to pin down exactly
which applications might derive the
most benefit from parallelization.
“This is a challenging problem,” he
says. “A lot of people take the attitude,
‘If we build it they will come,’ and that
may very well happen—there might
be a killer app. But not knowing what
that is makes it really hard to build the
infrastructure and the tools to facilitate the app.”
So far, Larus says, the computer engineering community is basing its idea of
what future general-purpose multicore
platforms are capable of due to niche
applications such as high-performance
scientific data analysis software, where
parallelization has shown its value. But
there’s no guarantee it will be possible
to extrapolate from this experience to
create the development frameworks
most programmers will need as parallelism goes mainstream in general-purpose computing.
“In this case we’re going backwards;
we’re building the tools based on our experience with high-performance computing or scientific computing and saying people are going to need this. And
that may be true, but it has a funny feel
to me—to have the tools leading.”
Larus says the key to successful parallel platforms might be in finding a
way to combine existing discrete serial
platforms. One example, he says, might
be a virtual receptionist that needs
to process visual cues from a camera
taking images of a visitor while also
responding to spoken queries such as
the visitor’s request for directions to a
nearby restaurant. “This type of thing
has been gradually building up in discrete fields over the years,” Larus says.
“When you put it together there are all
these big, independent pieces that only
interact at these well-defined boundaries. That’s a problem that’s actually
easy to parallelize.”
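The receptionist example maps naturally onto coarse-grained task parallelism: large independent components that touch only at a narrow interface. A minimal sketch of that structure (the component functions here are stand-ins, not from the article):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for two independent components: vision processing and
# speech handling share no state while they run.
def process_camera_frame(frame):
    return f"visitor detected in {frame}"

def answer_spoken_query(query):
    return f"directions for: {query}"

with ThreadPoolExecutor(max_workers=2) as pool:
    # Each component runs as its own task; they interact only at
    # this well-defined boundary, where their results are collected.
    vision = pool.submit(process_camera_frame, "frame-001")
    speech = pool.submit(answer_spoken_query, "nearest restaurant?")
    print(vision.result())
    print(speech.result())
```

Because the pieces communicate only when their results are combined, there is no shared mutable state to coordinate, which is what makes this style of problem "actually easy to parallelize."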
Gregory Goth is an Oakville, CT-based writer who
specializes in science and technology. David A. Patterson,
University of California, Berkeley, contributed to the
development of this article.
© 2009 ACM 0001-0782/09/0900 $10.00
Milestones
Computer Science Awards
President Barack Obama and
the National Science Foundation
(NSF) recently honored members
of the computer science
community for their innovative
research. Among them:
NSF CAREER Award
Tiffany Barnes, an assistant professor in the Department of Computer Science at the University of North
Carolina at Charlotte, has
received a CAREER Award from
the NSF for her research on
artificial intelligence and
education. The goal of Barnes'
project is to create technology
for a new generation of
data-driven intelligent tutors,
enabling the rapid creation of
individualized instruction to
support learning in science,
technology, engineering, and
mathematics fields.
Presidential Early
Career Awards
President Obama has named
100 beginning researchers as
recipients of the Presidential
Early Career Awards for
Scientists and Engineers
(PECASE), the highest
honor bestowed by the
U.S. government on young
professionals in the early
stages of their independent
research careers.
Of the 100 PECASE winners,
15 are computer scientists. They
are: Cecilia R. Aragon, Lawrence
Berkeley National Laboratory;
David P. Arnold, University of
Florida; Seth R. Bank, University
of Texas, Austin; Joel L. Dawson,
Massachusetts Institute of
Technology; Chris L. Dwyer,
Duke University; Anthony Grbic,
University of Michigan; Carlos
E. Guestrin, Carnegie Mellon
University; Sean Hallgren, Penn
State University; Yu Huang,
University of California, Los
Angeles; Gregory H. Huff,
Texas A&M University; Sanjay
Kumar, University of California,
Berkeley; Rada F. Mihalcea,
University of North Texas; Adam
D. Smith, Penn State University;
Adrienne D. Stiff-Roberts, Duke
University; and Sharon M.
Weiss, Vanderbilt University.