ple threads. In C++11, you would write atomic<int> instead (volatile means something subtly different in C or C++).
Compilers treat synchronization variables specially, so our basic programming model is preserved. If there are no data races, threads still behave as though they execute in an interleaved fashion. Accessing a synchronization variable is a synchronization operation, however; code sequences extending across such accesses no longer appear indivisible.
Synchronization variables are sometimes the right tool for very simple shared data, such as the done flag in Figure 2. The only data race here is on the done flag, so simply declaring it as a synchronization variable fixes the problem.
Remember, however, that synchronization variables are difficult to use for complex data structures, since there is no easy way to make multiple updates to a data structure in one atomic operation. Synchronization variables are not replacements for locks.
In cases such as that shown in Figure 2, synchronization variables often avoid most of the locking overhead. Since they are sometimes still too expensive, both C++11 and Java provide some explicit experts-only mechanisms that allow you to relax the interleaving-based model, as mentioned before. Unlike programming with data races, it is possible to write correct code that uses these mechanisms, but our experience is that few people actually get this right. Our hope is that future hardware will reduce the need for them, and hardware is already getting better.
Most real languages fit our basic model. C++11 and C11 provide exactly this model. Data races have "undefined behavior"; they are errors in the same sense as an out-of-bounds array access. This is often referred to as catch-fire semantics for data races (though we do not know of any cases in which machines have actually caught fire as the result of a data race).
Although catch-fire semantics are sometimes still controversial, they are hardly new. The Ada 83 and 1995 POSIX thread specifications are less precise, but took basically the same position.
Toward a Future Without Evil?
We have discussed how the absence of data races leads to a simple programming model supported by common languages. There simply does not appear to be any other reasonable alternative.1 Unfortunately, one sticky problem remains: guaranteeing data-race-freedom is still difficult. Large programs almost always contain bugs, and often those bugs are data races. Today's popular languages do not provide any usable semantics for such programs, making debugging difficult.
Looking forward, it is imperative that we develop automated techniques that detect or eliminate data races. Indeed, there has been significant recent progress on several fronts: dynamic precise detection of data races;5,6 hardware support to raise an exception on a data race;7 and language-based annotations to eliminate data races from programs by design.3 These techniques guarantee that the considered execution or program has no data race (allowing the use of the simple model), but they still require more research to be commercially viable. Commercial products that detect data races have begun to appear (for example, Intel Inspector), and although they do not guarantee data-race-freedom, they are a big step in the right direction. We are optimistic that one way or another, we will (we must!) conquer evil (data races) in the end.
For a more complete set of background references, please see reference 1.

1. Adve, S.V. and Boehm, H.-J. Memory models: A case for rethinking parallel languages and hardware. Commun. ACM 53, 8 (Aug. 2010), 90–101.
2. Adve, S.V. and Gharachorloo, K. Shared memory consistency models: A tutorial. IEEE Computer 29, 12 (Dec. 1996).
3. Bocchino, R. et al. A type and effect system for deterministic Parallel Java. In Proceedings of the International Conference on Object-Oriented Programming, Systems, Languages, and Applications, 2009.
4. Boehm, H.-J. How to miscompile programs with "benign" data races. Hot Topics in Parallelism (HotPar), 2011.
5. Elmas, T., Qadeer, S. and Tasiran, S. Goldilocks: A race-aware Java runtime. Commun. ACM 53, 11 (Nov. 2010).
6. Flanagan, C. and Freund, S. FastTrack: Efficient and precise dynamic race detection. Commun. ACM 53, 11 (Nov. 2010), 93–101.
7. Lucia, B., Ceze, L., Strauss, K., Qadeer, S. and Boehm, H.-J. Conflict exceptions: Providing simple concurrent language semantics with precise hardware exceptions. In Proceedings of the 2010 International Symposium on Computer Architecture.
8. Sevcik, J. and Aspinall, D. On validity of program transformations in the Java memory model. In European Conference on Object-Oriented Programming, 2008, 27–51.
Hans-J. Boehm is a research manager at Hewlett-Packard Labs. He is probably best known as the primary author of a commonly used garbage collection library. Experiences with threads in that project eventually led him to initiate the effort to properly define threads and shared variables in C++11.
Sarita V. Adve is a professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests are in computer architecture and systems, parallel computing, and power- and reliability-aware systems. She co-developed the memory models for the C++ and Java programming languages, based on her early work on data-race-free models.
© 2012 ACM 0001-0782/12/02 $10.00