ones being the Graph 500 and the HPC
Challenge. Both of them aim to include
a wider set of measures that substantially contribute to the performance of
HPC systems running real-world HPC
applications. When put together, these
benchmarks provide a more holistic
picture of an HPC system. However,
they are still focused only on control
flow computing, rather than on a more
data-centric view that could scale the
relevance of the included measures to
large production applications.
This Viewpoint offers an alternative road to consider. Again, we do not suggest that LINPACK, Graph 500, or HPC Challenge be abandoned altogether, but that they be supplemented with another type of benchmark: the performance of systems when used to solve real-life problems, rather than generic benchmarks.
Of course, the question is how to choose
these problems. One option may be to
analyze the TOP500 and/or Graph 500
and/or a list of the most expensive HPC
systems, for example, and to extract a
number of problems that top-ranking
systems have most commonly been used
for. Such a ranking would also be of use
to HPC customers, who could look at
the list for the problem most similar
to their own.
Finally, this type of ranking would
naturally evolve with both HPC
technology and the demands placed on HPC systems, simply by periodically updating the list. A generic benchmark,
on the other hand, must be designed
either by looking at a current HPC system and its bottlenecks, or typical demands of current HPC problems, or
both. As these two change in time, so
must the benchmarks.
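The selection procedure described above can be illustrated with a minimal sketch. All data here is hypothetical: in practice the usage records would be mined from TOP500/Graph 500 submissions and published workload reports, not hard-coded.

```python
from collections import Counter

# Hypothetical records: (system rank, dominant application area).
# These stand in for real data mined from TOP500/Graph 500 sites.
usage = [
    (1, "molecular dynamics"),
    (2, "climate modeling"),
    (3, "molecular dynamics"),
    (4, "lattice QCD"),
    (5, "climate modeling"),
    (6, "molecular dynamics"),
]

# Count how often each application area appears among top-ranking
# systems; the most common ones become candidate real-life
# benchmark problems for the proposed ranking.
counts = Counter(app for _, app in usage)
candidates = [app for app, _ in counts.most_common(3)]
print(candidates)
```

Rerunning this aggregation whenever the underlying lists are refreshed is what lets the benchmark suite evolve with the field, as argued above.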
Conclusion
The findings in this Viewpoint are pertinent to those supercomputing users
who wish to minimize not only the purchase costs, but also the maintenance
costs, for a given performance requirement. They are equally pertinent to those manufacturers of
supercomputing-oriented systems who
are able to deliver more for less, but are
using unconventional architectures.2,1
Topics for future research include
ways to incorporate the price/complexity issues and also the satisfaction/profile issues. The ability issues (availability, reliability, extensibility, partitionability, programmability, portability, and so forth) are also of importance
for any future ranking efforts.
Whenever a paradigm shift happens
in computer technology, computer architecture, or computer applications,
a new approach has to be introduced.
The same type of thinking was needed
at the time when GaAs technology was
introduced for high-radiation environments and had to be compared with
silicon technology against a new set of relevant architectural issues. Solutions
that had ranked high until that moment
suddenly fell to relatively
low-ranking positions.8
As we direct efforts to break the exascale barrier, we must ensure the scale
itself is appropriate. A scale is needed
that offers as much meaning as
possible and translates, to the highest possible degree, into real, usable performance. Such a scale should also
retain these two properties, even
when applied to unconventional computational approaches.
References
1. Anderson, M. Better benchmarking for supercomputers.
IEEE Spectrum 48, 1 (Jan. 2011), 12–14.
2. Dongarra, J., Meuer, H., and Strohmaier, E. TOP500
supercomputer sites; http://www.netlib.org/
benchmark/top500.html.
3. Dosanjh, S. et al. Achieving exascale computing
through hardware/software co-design. In
Proceedings of the 18th European MPI Users' Group
Conference on Recent Advances in the Message
Passing Interface (EuroMPI '11), Springer-Verlag,
Berlin, Heidelberg, (2011), 5–7.
4. Faulk, S. et al. Measuring high performance
computing productivity. International Journal of High
Performance Computing Applications 18 (Winter
2004), 459–473; DOI: 10.1177/1094342004048539.
5. Gahvari, H. et al. Benchmarking sparse matrix-vector
multiply in five minutes. In Proceedings of the SPEC
Benchmark Workshop (Jan. 2007).
6. Geller, T. Supercomputing's exaflop target.
Commun. ACM 54, 8 (Aug. 2011), 16–18; DOI:
10.1145/1978542.1978549.
7. Gioiosa, R. Towards sustainable exascale computing.
In Proceedings of the VLSI System on Chip Conference
(VLSI-SoC), 18th IEEE/IFIP (2010), 270–275.
8. Helbig, W. and Milutinovic, V. The RCA's DCFL E/D
MESFET GaAs 32-bit experimental RISC machine.
IEEE Transactions on Computers 36, 2 (Feb. 1989),
263–274.
9. Kepner, J. HPC productivity: An overarching view.
International Journal of High Performance Computing
Applications 18 (Winter 2004), 393–397; DOI:
10.1177/1094342004048533.
Michael J. Flynn (flynn@ee.stanford.edu) is a professor
of electrical engineering at Stanford University, CA.
Oskar Mencer (o.mencer@imperial.ac.uk) is a senior
lecturer in the Department of Computing at Imperial
College, London, U.K.
Veljko Milutinovic (vm@etf.rs) is a professor in the
Department of Computer Engineering at the University of
Belgrade, Serbia.
Goran Rakocevic (grakocevic@gmail.com) is a research
assistant at the Mathematical Institute of the Serbian
Academy of Sciences and Arts in Belgrade, Serbia.
Per Stenstrom (pers@chalmers.se) is a professor
of computer engineering at Chalmers University of
Technology, Sweden.
Roman Trobec (roman.trobec@ijs.si) is an associate
professor at the Jožef Stefan Institute, Slovenia.
Mateo Valero (mateo.valero@bsc.es) is the director of the
Barcelona Supercomputing Centre, Spain.
This research was supported by discussions at the
Barcelona Supercomputing Centre, during the FP7
EESI final project meeting. The strategic framework
for this work was inspired by Robert Madelin and Mario
Campolargo of the EC, and was presented in the keynote
of the EESI final project meeting. The work of V.
Milutinovic and G. Rakocevic was partially supported by
the III44006 grant of the Serbian Ministry of Science.