Teaching Parallel
A Roundtable Discussion
In this roundtable, three professors of parallel programming share
their perspectives on teaching and learning this approach to computing.
DOI: 10.1145/1836543.1836553
Programming systems with multiple computational units, as well as systems distributed across many locations, is becoming increasingly common and important. We spoke with three prominent educators who teach not only parallel programming, but parallel thinking, too. William Gropp is a professor at the University of Illinois
Urbana-Champaign and the author of several books on using message passing interfaces to
program distributed systems. John Mellor-Crummey is a professor at Rice University and a
co-winner of the 2006 Dijkstra Prize in Distributed Computing for his work on scalable synchronization algorithms for
shared-memory multiprocessors. And Maurice Herlihy is a professor at Brown University, as
well as a co-winner of the 2004 Gödel Prize for his contributions to the theory of distributed
computing.
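For readers new to message passing: in the MPI model that Gropp's books cover, processes share no memory and cooperate by exchanging explicit messages. The sketch below is a minimal illustration written against the standard MPI C interface, not code drawn from the interview; it sends a single integer from the process with rank 0 to the process with rank 1.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);               /* start the MPI runtime */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */

        if (size >= 2) {
            if (rank == 0) {
                int value = 42;               /* arbitrary payload */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                int value;
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %d\n", value);
            }
        }

        MPI_Finalize();                       /* shut down the runtime */
        return 0;
    }

Compiled with mpicc and launched with, for example, mpiexec -n 2, each copy of the program runs as a separate process; only the explicit send and receive move data between them.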
XRDS: You’re all involved in some way in
both research and educational aspects
of parallel programming. What led you to
become interested in this aspect of computer science research and this aspect of computer science education?
WILLIAM GROPP: My interest in parallel
programming began when I was a
graduate student in the early 1980s
at Stanford. I was interested in what’s
often called scientific computing, and
parallelism was a natural approach that also offered the possibility of a rapid
increase in the computational power
available to solve large-scale problems.
My focus since then has been
on finding good ways to make use
of computing to solve large-scale,
computationally intensive problems, and
thus my interest has been in effective
ways of achieving high performance,
whether through more effective, adaptive numerical algorithms, better use of individual computational elements, or scalable parallelism.
JOHN MELLOR-CRUMMEY: I was a
graduate student at the University of
Rochester (UR) in the mid-1980s. At
the time, there was a lot of excitement
and activity at UR in parallel computing
focused on UR’s 128-processor BBN
Butterfly, an early large distributed
shared memory system. For my
dissertation work, I developed tools
and techniques for debugging and
performance analysis of parallel systems.
My work in parallel computing
continued when I joined Rice University
in 1989 to work with Ken Kennedy in
the Center for Research on Parallel
Computation (CRPC), an NSF-funded
Science and Technology Center. The
mission of the CRPC was to “make
parallel computing truly usable.” Upon
arriving at Rice, I became involved in the
CRPC research portfolio, which included
work on compilers and tools for parallel
computing.
Some of my earliest work at Rice was
on the problem of detecting data races in shared-memory parallel programs.
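To make the problem concrete (this sketch is illustrative, not taken from that work): the C/pthreads program below contains the classic data race such tools aim to detect. Two threads increment a shared counter without synchronization, so their read-modify-write updates can interleave and be lost.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 1000000
    static long counter = 0;      /* shared, unprotected state */

    static void *increment(void *arg) {
        for (int i = 0; i < ITERS; i++)
            counter++;            /* unsynchronized update: the race */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* frequently prints less than 2000000: updates were lost */
        printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
        return 0;
    }

Built with cc -pthread, repeated runs typically print different totals below two million; guarding the increment with a mutex, or making the counter atomic, eliminates the race.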
John Mellor-Crummey: "Adding parallelism to software is the key to improving code performance for future generations of chips."