The advent of multicore processors as the
standard computing platform will force major
changes in software design.
By Nir Shavit
“Multicore processors are about to revolutionize
the way we design and use data structures.”
You might be skeptical of this statement; after
all, are multicore processors not a new class of
multiprocessor machines running parallel programs,
just as we have been doing for more than a quarter
of a century?
The answer is no. The revolution is partly due to
changes multicore processors introduce to parallel
architectures; but mostly it is the result of the change
in the applications that are being parallelized:
multicore processors are bringing parallelism to mainstream computing.
Before the introduction of multicore processors,
parallelism was largely dedicated to computational
problems with regular, slow-changing
(or even static) communication and
coordination patterns. Such problems
arise in scientific computing or in
graphics, but rarely in systems.
The future promises us multiple
cores on anything from phones to laptops, desktops, and servers, and therefore a plethora of applications characterized by complex, fast-changing
interactions and data exchanges.
Why are these dynamic interactions
and data exchanges a problem? The
formula we need in order to answer this
question is called Amdahl’s Law. It captures the idea that the extent to which
we can speed up any complex computation is limited by how much of the computation must be executed sequentially.
Define the speedup S of a computation to be the ratio between the time
it takes one processor to complete the
computation (as measured by a wall
clock) and the time it takes n concurrent processors to complete the same
computation. Amdahl’s Law characterizes the maximum speedup S that can
be achieved by n processors collaborating on an application, where p is the
fraction of the computation that can be
executed in parallel. Assume, for simplicity, that it takes (normalized) time
1 for a single processor to complete the
computation. With n concurrent processors, the parallel part takes time p/n,
and the sequential part takes time 1− p.
Overall, the parallelized computation
takes time 1 − p + p/n. Amdahl’s Law says
the speedup, that is, the ratio between
the sequential time and the parallel time, is

S = 1 / (1 − p + p/n).
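The bound Amdahl’s Law places on speedup is easy to check numerically. The short Python sketch below (the function name is illustrative, not from the article) evaluates S = 1 / (1 − p + p/n) for a few values of p and n:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup of n processors when a fraction p of the work is parallel.

    The parallel part takes time p/n and the sequential part 1 - p,
    so the speedup is 1 / (1 - p + p/n).
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the computation parallelized, 10 processors yield
# only about a 5.3x speedup: the 10% sequential part dominates.
print(amdahl_speedup(0.9, 10))   # ~5.26
print(amdahl_speedup(0.99, 10))  # ~9.17
print(amdahl_speedup(1.0, 10))   # 10.0
```

Note that as n grows without bound, the speedup approaches 1/(1 − p), so the sequential fraction, not the number of cores, ultimately caps performance.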
Key insights:

- We are experiencing a fundamental shift in the properties required of concurrent data structures and of the algorithms at the core of their implementation.
- The data structures of our childhood—stacks, queues, and heaps—will soon disappear, replaced by looser “unordered” concurrent constructs based on distribution and randomization.
- Future software engineers will need to learn how to program using these novel structures, understanding their performance benefits and their fairness limitations.
Illustration by Andy Gilmore