MPI has long been the de facto language by which the different processors in supercomputers communicate, and it is no overstatement to say current progress on supercomputers is owed largely to MPI. But is MPI the right communication protocol for new disciplines like data analytics? Should MPI undergo major change, especially now that scientists with a limited background in computer science are also interested in solving large-scale scientific, business, statistical, and network problems requiring supercomputers?

We were thus prompted to contact Torsten Hoefler to help shed light on these issues. He is today an assistant professor of computer science at ETH Zürich. Before joining ETH, he led the performance-modeling and simulation efforts for parallel petascale applications in the National Science Foundation-funded Blue Waters project at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. He has received best paper awards at the IEEE Supercomputing Conference 2013 and other prestigious conferences. He has published numerous peer-reviewed scientific conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. For this work, he received the SIAM SIAG/Supercomputing Junior Scientist Prize in 2012 and the IEEE TCSC Young Achievers in Scalable Computing Award in 2013. Following his Ph.D., he received the Young Alumni Award 2014 from Indiana University. He was elected to the first steering committee of ACM's SIGHPC in 2013. His research interests revolve around “performance-centric software development,” including scalable networks, parallel programming techniques, and performance modeling. Additional information is available at Hoefler's homepage, htor.inf.ethz.ch, and at http://ppopp17.sigplan.org/profile/torstenhoefler.

The following interview has been condensed and edited for clarity.

XRDS: MPI began about 25 years ago and has since become the “king” of HPC. What characteristics of MPI make it the de facto language of HPC?

TORSTEN HOEFLER (TH): I would
say MPI’s major strength is due to its
clear organization around a relatively
small number of orthogonal concepts,
including communication contexts
(communicators), blocking and nonblocking operations,
datatypes, collective communications,
and remote memory access. They work
like Lego blocks and can be combined
into powerful functions. For example, the transpose of a parallel fast Fourier transform (FFT) can be implemented with a single call to a nonblocking MPI_Alltoall when using datatypes [2]. A large part of MPI's success also stems from how simple it is to implement libraries and applications with it. MPI's library interface requires no compiler support to be added to most languages, and is thus easily implemented.
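To make TH's transpose example concrete, the following is a minimal sketch of how a distributed transpose becomes a single nonblocking all-to-all once derived datatypes describe the data layout. It combines several of the "Lego blocks" named above: a communicator, a nonblocking collective (MPI-3's MPI_Ialltoall), and derived datatypes. It assumes an N x N matrix of doubles distributed by rows over P processes with N divisible by P; the sizes and names (N, NP, A, B) are illustrative, not taken from the interview or from [2].

/* Sketch: transpose of an N x N matrix of doubles, distributed by rows
 * over P processes, as a single nonblocking all-to-all with derived
 * datatypes. Requires an MPI-3 implementation (e.g., compile with mpicc). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, P;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &P);

    const int N  = 8 * P;   /* illustrative global size, divisible by P */
    const int NP = N / P;   /* rows owned by each process */
    double *A = malloc((size_t)NP * N * sizeof *A);  /* my NP x N rows  */
    double *B = malloc((size_t)NP * N * sizeof *B);  /* my rows of A^T  */
    for (int i = 0; i < NP * N; i++) A[i] = rank;    /* dummy payload   */

    /* Send side: the NP x NP block for process q starts at column q*NP;
     * resizing the extent to NP doubles makes the all-to-all pick the
     * block for q at offset q*NP automatically. */
    MPI_Datatype vec, sendtype;
    MPI_Type_vector(NP, NP, N, MPI_DOUBLE, &vec);
    MPI_Type_create_resized(vec, 0, NP * sizeof(double), &sendtype);
    MPI_Type_commit(&sendtype);

    /* Receive side: place element (i,j) of the block from process p at
     * B[j][p*NP + i], i.e., already transposed; no pack/unpack loops. */
    MPI_Datatype col, blk, recvtype;
    MPI_Type_vector(NP, 1, N, MPI_DOUBLE, &col);         /* one column  */
    MPI_Type_create_hvector(NP, 1, sizeof(double), col, &blk);
    MPI_Type_create_resized(blk, 0, NP * sizeof(double), &recvtype);
    MPI_Type_commit(&recvtype);

    /* The entire transpose is one nonblocking collective call... */
    MPI_Request req;
    MPI_Ialltoall(A, 1, sendtype, B, 1, recvtype, MPI_COMM_WORLD, &req);
    /* ...so independent computation can overlap the communication here. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Type_free(&sendtype); MPI_Type_free(&recvtype);
    MPI_Type_free(&vec); MPI_Type_free(&col); MPI_Type_free(&blk);
    free(A); free(B);
    MPI_Finalize();
    return 0;
}

Because the collective is nonblocking, independent computation (for example, local FFT stages) can be placed between the MPI_Ialltoall and the MPI_Wait to overlap communication with computation.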
XRDS: What new features should we
expect in the future? Will the standard
have to address any specific challenges
on the way to exascale computing?
TH: I would conjecture that the first
exascale application will use MPI, even if
it’s not extended until then. MPI’s latest,
“In general, integrating accelerators and communication functions is an interesting research topic.”