practice
DOI: 10.1145/1400214.1400227

What does the proliferation of concurrency
mean for the software you develop?

BY BRYAN CANTRILL AND JEFF BONWICK

Real-World
Concurrency

Software practitioners today could be forgiven if
recent microprocessor developments have given them
some trepidation about the future of software. While
Moore’s Law continues to hold (that is, transistor
density continues to double roughly every 18 months),
due to both intractable physical limitations and
practical engineering considerations, that increasing
density is no longer being spent on boosting clock
rate, but rather on putting multiple CPU cores on a
single CPU die. From the software perspective, this is
not a revolutionary shift, but rather an evolutionary
one: multicore CPUs are not the birthing of a new
paradigm, but rather the progression of an old one
(multiprocessing) into more widespread deployment.
From many recent articles and papers on the subject,
however, one might think that this blossoming of
concurrency is the coming of the apocalypse, that "the
free lunch is over."10
As practitioners who have long been at the coal
face of concurrent systems, we hope to inject some
calm reality (if not some hard-won wisdom) into a
discussion that has too often descended into hysterics.
Specifically, we hope to answer the essential question:
what does the proliferation of concurrency mean for
the software that you develop? Perhaps
regrettably, the answer to that question is neither simple nor universal—
your software’s relationship to concurrency depends on where it physically
executes, where it is in the stack of abstraction, and the business model that
surrounds it. And given that many software projects now have components in
different layers of the abstraction stack
spanning different tiers of the architecture, you may well find that even for
the software that you write, you do not
have one answer but several: some of
your code may be left forever
executing in sequential bliss, and some
of your code may need to be highly parallel and explicitly multithreaded. Further complicating the answer, we will
argue that much of your code will not
fall neatly into either category: it will
be essentially sequential in nature but
will need to be aware of concurrency
at some level. While we will assert that
less—much less—code needs to be parallel than some might fear, it is nonetheless true that writing parallel code
remains something of a black art. We
will also therefore give specific implementation techniques for developing
a highly parallel system. As such, this
article will be somewhat dichotomous:
we will try to both argue that most code
can (and should) achieve concurrency
without explicit parallelism, and at the
same time elucidate techniques for
those who must write explicitly parallel
code. Indeed, this article is half stern
lecture on the merits of abstinence and
half Kama Sutra.
Some Historical Context
Before discussing concurrency with respect to today’s applications, it is helpful to explore the history of concurrent
execution: even by the 1960s—when the
world was still wet with the morning
dew of the computer age—it was becoming clear that a single central processing unit executing a single instruction
stream would result in unnecessarily limited system performance. While
computer designers experimented with
different ideas to circumvent this limi-