erything needs to be parallel. What’s it
going to do for you to make Word run a
little faster?”
Larus says some applications, such as speech recognition, for which parallel programming is seen as a requirement, might benefit more from algorithmic improvements to existing serially written applications than from converting to parallel processing. “The people working on [speech recognition] at Microsoft say a machine that’s
10 times faster would probably reduce
the error rate by a few tenths of a percent,” says Larus. “They see the future
in terms of better algorithms, not more
computation. We’re saying we can
keep giving you exponential growth in
compute power for certain types of programs, and people are telling us ‘That’s
not really what we need for what we’re
doing’ or ‘That’s not enough for what
we’re doing.’”
Larus’s cautions aside, the computer industry is moving en masse to multicore machines, and users will expect
to receive additional performance for
their money—performance that will
often depend on parallel applications.
Therefore, some experts say, there is
a danger in not immediately starting
to train programmers on the requirements of parallel programming. This
consciousness raising is important
at all levels, from industry veterans to
undergraduate students. “Sequential
programming models do not work well
enough,” says Maurice Herlihy, a professor of computer science at Brown
University. “We can more or less keep
busy” four cores or fewer, he says, “but
beyond that we’ll have to rethink some
things. If you can’t deliver more value
with more cores, why would anybody
ever upgrade?” Herlihy sees a peril that
the engine of progress that has driven
computer science for decades could
run out of fuel, with dire consequences. “If this were to dissipate, then all
the smart students would go to bioengineering or something, and computer
science as a field would suffer.” Indeed,
he says, “even one generation of stagnation could do lasting damage.”
incremental integration
Computer scientists on university faculties say academia is debating how and when to introduce parallel programming throughout the curriculum, instead of just offering an upper-level course as is now common. Both Brown’s Herlihy and Guy Blelloch, a professor of computer science at Carnegie Mellon University, say the early emphasis should be on higher-level parallel concepts and not on coding particulars such as languages or development frameworks.
[Figure: Amdahl’s Law. Named after computer architect Gene Amdahl, Amdahl’s Law is frequently used in parallel programming to predict the theoretical maximum speedup using multiple processors. The chart plots speedup (up to 20x) against the number of processors (1 to 65,536) for parallel portions of 50%, 75%, 90%, and 95%.]
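For reference, Amdahl’s Law can be stated compactly: if a fraction P of a program’s running time can be parallelized and N processors are available, the maximum speedup is

\[ S(N) = \frac{1}{(1 - P) + P/N} \]

With P = 0.95, for instance, the speedup can never exceed 1/(1 - 0.95) = 20 no matter how many processors are added, which is why even the most parallel curve in the chart levels off near 20.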
Yet without some introduction to the parallel programming practices and tools they will encounter after college, these new engineers might need even more training. Herlihy
says that existing parallel frameworks—
such as OpenMP, which dates to 1997,
and the newly released OpenCL—are
well suited for professional programmers, but not for students, who largely
program in Java. This lack of grounding
in the fundamentals of parallel code
writing could lead to a looming disconnect between what new programmers
know and what industry needs.
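To give a flavor of what such frameworks ask of programmers, here is a minimal sketch of an OpenMP loop in C++; the function and data are illustrative rather than taken from either specification, and OpenMP is used from C, C++, and Fortran rather than from Java.

#include <cstdio>
#include <vector>

// Sum the squares of the elements of v, dividing the loop
// iterations among the available cores via OpenMP.
double sum_of_squares(const std::vector<double>& v) {
    double total = 0.0;
    // reduction(+:total) gives each thread a private copy of total
    // and adds the copies together when the loop finishes.
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < static_cast<long>(v.size()); ++i) {
        total += v[i] * v[i];
    }
    return total;
}

int main() {
    std::vector<double> data(1000000, 2.0);
    std::printf("sum of squares: %f\n", sum_of_squares(data));
    return 0;
}

Compiled without OpenMP support, the pragma is simply ignored and the same code runs serially, one reason the framework lends itself to incrementally parallelizing existing C and C++ code.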
Intel’s Mattson, who worked on
both frameworks, says one of the major
blind spots of both OpenMP and OpenCL is a lack of support for managed
languages such as Java. He also says
the idea that there may be some type
of revolutionary parallel programming
language or approach on the near-term
horizon that solves the multicore conundrum is misplaced. “Programmers
insist on incremental approaches,”
Mattson says. “They have a huge base
of code written in established languages they will not throw away to adopt a
whole new language, and they have to
be able to incrementally evolve this legacy of code into the parallel space.”
The good news is that tools to assist
programmers in this task of incrementally parallelizing code are proliferating on the market. Examples include
Cilk++ from Cilk Arts, a Burlington,
MA, company that extends the work
of the Cilk Project at MIT. Cilk++ allows parallel programs written in C++
to retain serial semantics, which in
turn permits programmers to use serial methodologies.
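As a rough illustration of that property, here is a sketch using the Cilk family’s keywords; the recursive Fibonacci function is the standard textbook example rather than code from Cilk Arts, and the exact header and build setup are assumptions that vary between Cilk implementations.

#include <cstdio>
#include <cilk/cilk.h>  // header path is an assumption; Cilk++ toolchains may differ

// Textbook recursive Fibonacci with Cilk keywords added.
long fib(long n) {
    if (n < 2)
        return n;
    long x = cilk_spawn fib(n - 1);  // may run in parallel with the call below
    long y = fib(n - 2);
    cilk_sync;                       // wait for the spawned call to finish
    return x + y;
}

int main() {
    std::printf("fib(30) = %ld\n", fib(30));
    return 0;
}

Deleting cilk_spawn and cilk_sync leaves an ordinary serial C++ program that computes the same result, which is the serial-semantics property described above.

CriticalBlue, an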
Edinburgh, Scotland-based company,
recently released Prism, a parallel analysis and coding tool that CEO David
Stewart says works with C or C++ and
that allows users to explore parallelization strategies—which pieces to run in
parallel, which dependencies to break,
how many cores to use, and so on—
before touching the code.
The most sensible way to implement parallelism, Stewart contends, is
to enable software developers to analyze how much potential parallelism is
in their code and to determine the min-