of-a-feather session with academic and industrial representatives who agreed to explore a more formal process through a group that would come to be known as the High Performance Fortran Forum (HPFF), with Kennedy serving as chair and Charles Koelbel as executive director. With support from the Center for Research on Parallel Computation (CRPC) at Rice University, a meeting with nearly 100 participants organized in Houston in January 1992 concluded with a business session in which more than 20 companies committed to a process for drafting the new standard.
They agreed the process should produce a result in approximately one year. It turned out this tight schedule would affect the language, leading to adoption of the rule that HPF would include only features that had been demonstrated in at least one language and compiler, including research compilers. This limited some of the features that could be considered, particularly in the realm of advanced data distributions.
The 30 to 40 active HPFF participants then met for two days every six weeks or so, mostly in a hotel in Dallas. Besides Kennedy and Koelbel, the participants serving as editors of the standard document included Marina Chen, then at Yale; Bob Knighten, then at Intel; David Loveman, then at DEC; Rob Schreiber, then at NASA; Marc Snir, then at IBM; Guy Steele, then at Thinking Machines; Joel Williamson, then at Convex; and Mary Zosel, then at Lawrence Livermore National Laboratory. Attendees represented five government labs, 12 universities, and 18 companies (vendors and users); they also hailed from at least five countries. HPFF was a consensus-building process, not a single-organization project.
It dealt with numerous difficult tech-
nical and political issues, many due to
the tension between the need for high-
level language functionality supporting
a broad range of applications and the
need to generate efficient target code.
They were compounded by limited ex-
perience with the compilation of data-
parallel languages. The research com-
pilers were mostly academic prototypes
that had been applied to few applica-
tions of any size. On the other hand,
the industrial-strength language CMF
was relatively new and lacked some
advanced features of the research lan-
guages. Many decisions thus had to be
made without a full understanding of
their effect on compiler complexity.
HPF Language
The goals established for HPF were fairly straightforward: provide a high-level, portable programming model for scalable computer systems based (primarily) on data-parallel operations in a (conceptually) shared memory, and produce code with performance comparable to the best hand-coded native language code on a given machine. To achieve them, HPF 1.0 defined a language with several novel characteristics. For simplicity, we do not discuss the enhancements in HPF 1.1 or HPF 2.0, though they were in much the same vein.
For example, consider the declaration

REAL A(1000,1000), B(1000,1000)
Now suppose we want to split the first dimension (and with it, the computation over that dimension) across the processors of a parallel machine. Moreover, suppose that, because corresponding elements of A and B are often accessed together, they should always have the same distribution. Both effects can be accomplished through the directives
!HPF$ DISTRIBUTE A(BLOCK,*)
!HPF$ ALIGN B(I,J) WITH A(I,J)
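As an illustrative sketch (not taken from the standard document), these directives let ordinary Fortran 90 array code run in parallel; an HPF compiler would partition a nearest-neighbor update such as the following by blocks of rows and insert any boundary communication itself:

```fortran
REAL A(1000,1000), B(1000,1000)
!HPF$ DISTRIBUTE A(BLOCK,*)
!HPF$ ALIGN B(I,J) WITH A(I,J)

! Data-parallel relaxation step: each processor updates its own
! block of rows of A, reading mostly local elements of B.
A(2:999,2:999) = 0.25 * (B(1:998,2:999) + B(3:1000,2:999)   &
                       + B(2:999,1:998) + B(2:999,3:1000))
```

Because A and B are aligned, each processor's reads of B fall almost entirely on data it already owns; only the rows at block boundaries require communication.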
HPF also provides the CYCLIC distribution, in which elements are assigned to processors in round-robin
fashion, and CYCLIC(K), in which
blocks of K elements are assigned
round-robin to processors. Generally
speaking, BLOCK is the preferred distribution for computations with nearest-neighbor elementwise communication, whereas the CYCLIC variants