Figure 2. Bridge analogy connecting users to a parallel IT industry, inspired by the view of the Golden Gate Bridge from Berkeley, CA.
resistant to change. One estimate is
that it takes a decade for a new compiler optimization to become part of production compilers. How can researchers innovate rapidly if compilers and
operating systems evolve so glacially?
A final challenge is how to measure
improvement in parallel programming languages. The history of these
languages largely reflects researchers
deciding what they think would be
better and then building it for others to try. As humans write programs,
we wonder whether human psychology and human-subject experiments
shouldn’t be allowed to play a larger
role in this revolution.17
Applications tower. The goal of research into parallel computing should
be to find compelling applications that
thirst for more computing than is currently available and can absorb the biennially
increasing number of cores for the next
decade or two. Success does not require
improvement in the performance of
all legacy software. Rather, we need to
create compelling applications that effectively utilize the growing number of
cores while providing software environments that ensure that legacy code still
works with acceptable performance.
Note that the notion of “better”
is not defined by only average performance; advances could be in, say,
worst-case response time, battery life,
reliability, or security. To save the IT
industry, researchers must demonstrate greater end-user value from an
increasing number of cores.
As a concrete example of the parallel
landscape, we describe Berkeley’s Par
Lab project,a exploring one of many
potential approaches, though we won’t
know for years which of our ideas will
bear fruit. We hope it inspires more
researchers to participate, increasing
the chance of finding a solution before
it’s too late for the IT industry.
Given a five-year project, we project
the state of the field in five to 10 years,
anticipating that IT will be driven to
extremes in size due to the increasing
popularity of software as a service, or SaaS:
The datacenter is the server. Amazon, Google, Microsoft, and other major IT vendors are racing to construct buildings with 50,000 or more servers to run SaaS, inspiring the new catchphrase "cloud computing."b They have also begun renting thousands of machines by the hour to enable smaller companies to benefit from cloud computing. We expect these trends to accelerate; and

The mobile device (laptops and handhelds) is the client. In 2007, Hewlett-Packard, the largest maker of PCs, shipped more laptops than desktops. Millions of cellphones are shipped each day with ever-increasing functionality, a trend we expect to accelerate as well.

a In March 2007, Intel and Microsoft invited 25 universities to propose five-year centers for parallel computing research; the Berkeley and Illinois efforts were ranked first and second.
Surprisingly, these extremes in
computing share many characteristics. Both concern power and energy—
the datacenter due to the cost of power
and cooling and the mobile client due
to battery life. Both concern cost—
the datacenter because server cost is
replicated 50,000 times and mobile
clients because of a lower unit-price
target. Finally, the software stacks are
becoming similar, with more layers for
mobile clients and increasing concern
about protection and security.
b See Armbrust, M. et al. Above the Clouds: A Berkeley View of Cloud Computing. Technical Report, University of California, Berkeley.