In 2002, the U.S. Defense Advanced Research Projects Agency (DARPA) launched a major initiative in high-productivity computing systems (HPCS). The program was motivated by the belief that the utilization of the coming generation of parallel machines was gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.
As part of this initiative, DARPA encouraged work on new programming languages, runtimes, and tools. It believed that by making the expression of parallel constructs easier, matching the runtime models to the heterogeneous processor architectures under development, and providing powerful integrated development tools, programmer productivity might improve. This is a reasonable conjecture, but we sought to go beyond conjecture to actual measurements of productivity gains.
While there is no established method for measuring programmer productivity, it is clear that a productivity metric must take the form of a ratio: programming results achieved over the cost of attaining them. In this case, results are defined as successfully creating a set of parallel programs that ran correctly on two workstation cores. This is a long way from petascale, but since new parallel software often starts out this way (and is then scaled and tuned on ever larger numbers of processors), we viewed it as a reasonable approximation. Moreover, results found with two cores should be of interest to those coding nearly any parallel application, no matter how small. Cost was simpler to handle once results were defined, since it could reasonably be approximated by the time it took to create this set of parallel programs.
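Stated as a formula, this working definition amounts to:

\[
\text{productivity} \;=\; \frac{\text{results}}{\text{cost}} \;\approx\; \frac{\text{parallel programs completed correctly}}{\text{time spent creating them}}
\]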
The purpose of this study was to measure programmer productivity, thus defined, over the better part of the decade starting in 2002, the beginning of the HPCS initiative. The comparison was primarily focused on two approaches to parallel programming: the SPMD (single program multiple data) model as exemplified by C/MPI (message-passing interface), and the APGAS (asynchronous partitioned global address space) model supported by new languages such as X10 (http://x10-lang.org), although differences in environment and tooling were also studied.
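As a concrete illustration of the SPMD style (a minimal sketch, not code drawn from the study), consider a small C/MPI program in which every process runs the same executable and branches on its rank: rank 0 sends an integer that rank 1 receives and prints. In X10's APGAS model, the same exchange would instead be expressed with constructs such as async and at, rather than explicit ranks and matched send/receive pairs.

#include <mpi.h>
#include <stdio.h>

/* Minimal SPMD illustration: every rank runs this same program
   and branches on its rank. Rank 0 sends an integer; rank 1
   receives and prints it. (Illustrative sketch, not study code.) */
int main(int argc, char **argv) {
    int rank, size, value = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}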
Note that the comparison was not between C/MPI as it has come to be and X10 as it is now. Rather, it was a historical contrast of the way things were in 2002 with the way things are now. Indeed, C++ with its exceptions and MPI-2 with its one-sided communication protocol likely enhance programmer productivity and are worthy of study in their own right.
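For readers unfamiliar with it, MPI-2's one-sided model lets one process write directly into a memory window exposed by another, with no matching receive posted on the target. A minimal sketch (illustrative only, not part of the study's materials):

#include <mpi.h>
#include <stdio.h>

/* Sketch of MPI-2 one-sided communication: rank 0 writes directly
   into a one-int window exposed by rank 1; fences bracket the
   access epoch. Run with at least two ranks. (Illustrative only.) */
int main(int argc, char **argv) {
    int rank, size, value = 0, payload = 42;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank exposes a single int through the window. */
    MPI_Win_create(&value, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (size >= 2) {
        MPI_Win_fence(0, win);   /* open access epoch */
        if (rank == 0)
            /* Deposit payload into rank 1's window; rank 1
               posts no matching receive. */
            MPI_Put(&payload, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);   /* complete the put */
    }

    if (rank == 1)
        printf("rank 1's window now holds %d\n", value);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}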
Given our objective, we sought to
replicate as closely as possible the