ing) are computationally intensive. If
computational speed inhibits adoption of these techniques—and parallel
algorithms exist or can be developed—
then multicore processors can enable
the addition of compelling new functionality to applications.
Multicore processors are not a magic
elixir, just another way to turn additional transistors into more performance. A
problem solved with a multicore computer would also be solvable on a conventional processor—if sequential performance had continued its exponential
increase. Moreover, multicore does not
increase the rate of performance improvement, aside from one-time architectural shifts (such as replacing a single complex processor with a much
larger number of simple cores).
New software features that successfully exploit parallelism differ
from the evolutionary features added
to most software written for conventional uniprocessor-based systems. A
feature may benefit from parallelism
if its computation is large enough to
consume the processor for a significant amount of time, a characteristic
that excludes incremental software
improvements, small but pervasive
software changes, and many simple features.
Using parallel computation to implement a feature may not speed up an
application as a whole, due to Amdahl's
Law's strict connection between the
fraction of sequential execution and
parallel speedup. Minimizing the
sequential computation in the code for
a feature is crucial, because even small
amounts of serial execution can render
a parallel machine ineffective.
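To make this bound concrete (the figures below are illustrative, not drawn from the article): if a fraction s of a feature's execution is inherently sequential, Amdahl's Law limits the speedup achievable on n processors to

\[
\text{Speedup}(n) \;=\; \frac{1}{\,s + (1 - s)/n\,}
\]

With s = 0.10 and n = 16, the speedup is only 1/(0.10 + 0.90/16) ≈ 6.4, and no number of processors can ever push it past 1/s = 10. A feature that is 10% sequential wastes most of a 16-core machine.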
An alternative use for multicore
processors is to redesign a sequential
application into a loosely coupled or
asynchronous system in which computations run on separate processors.
This approach uses parallelism to improve software architecture or responsiveness, rather than performance. For
example, it is natural to separate monitoring and introspection features from
program logic. Running these tasks on
a separate processor can reduce perturbation of the mainline computation.
Alternatively, extra processors can perform speculative computations to help
minimize response time. These uses of
parallelism are unlikely to scale with
Moore’s Law, but giving an application
(or portions of an application) exclusive access to a set of processors might
produce a more responsive system.
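As a minimal sketch of the monitoring separation described above (the thread structure and function names are our own illustration, not a design from the article), introspection work can be moved onto its own thread, which the operating system can schedule on a separate core away from the mainline computation:

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> done{false};

    // Hypothetical introspection work; stands in for logging,
    // profiling, or invariant checking that would otherwise run inline.
    void collect_stats() { /* sample counters, write a log record */ }

    // Hypothetical mainline computation.
    void run_main_logic() { /* the application's real work */ }

    // The monitor samples periodically on its own thread, so the
    // mainline computation is perturbed as little as possible.
    void monitor() {
        while (!done.load(std::memory_order_relaxed)) {
            collect_stats();
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

    int main() {
        std::thread monitoring(monitor);  // ideally runs on another core
        run_main_logic();
        done.store(true);
        monitoring.join();
    }

Note that this decoupling uses a second core for isolation and responsiveness, not speedup: the monitor's cost is roughly constant, so, as the text observes, this use of parallelism does not scale with core counts.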
Functionality that does not fit these
patterns will not benefit from multicore; rather, such functionality will remain constrained by the static performance of a single processor. In the best
case, the performance of a processor
may continue to improve at a significantly slower rate (optimistic estimates
range from 10% to 15% per year). But in
some multicore chips, processors will
run slower, as chip vendors simplify
individual cores to lower power consumption and integrate more cores.
For many applications, most functionality is likely to remain sequential.
For software developers to find the resources to add or change features, it may
be necessary to eliminate old features
or reduce their resource consumption.
A paradoxical consequence of multicore is that sequential performance
tuning and code-restructuring tools
are likely to be increasingly important.
Another likely consequence is that software vendors will be more aggressive in
eliminating old or redundant features,
making space for new code.
The regular growth in multicore parallelism poses an additional challenge
to software evolution. Kathy Yelick, a
professor of computer science at the
University of California, Berkeley, has
said that the experience of the high-performance computing community is
that each decimal order of magnitude
increase in parallelism requires a major
redesign and rewrite of parallel code.
Multicore processors are likely to come
into widespread use at the cusp of the
first such change (8 → 16); the next one
(64 → 128) is only three processor generations (six years) later. This observation is relevant only to applications that
use scalable algorithms requiring large
numbers of processors. Applications
that stop scaling with Moore’s Law, because they lack sufficient parallelism
or their developers no longer rewrite
them, are performance dead ends.
Parallelism will also force major
changes in software development.
Moore’s Dividend enabled a shift to
higher-level languages and libraries.
The pressures driving this trend will
not change, because increased abstraction helps improve security, reliability,