A Solid Foundation for x86 Shared Memory
By Hans-J. Boehm
Multithreaded programs that
communicate through shared memory are
pervasive. They originally provided a
convenient way for an application to
perform, for example, a long compute
task while remaining responsive to
an interactive user. Today they are the
most obvious route to putting multiple
processor cores to work on a single
task, so that performance can grow
with the core count.
Unfortunately, a surprising amount
of confusion surrounds the basic rules
obeyed by shared memory. If a variable
is updated by one thread, when will
the new value become visible to another thread? What does it mean, if anything, to have two threads updating
the same variable at the same time?
Do all threads have to see updates in a
consistent order?
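To make the first of these questions concrete, here is a minimal Java sketch (not from the article; the class and field names are illustrative) of how the post-2005 Java model answers it for a `volatile` field:

```java
// Hypothetical illustration: the volatile write to `ready` "publishes" the
// ordinary write to `data`. Any thread that subsequently reads ready == true
// is guaranteed by the happens-before rules to also see data == 42.
class Visibility {
    static int data = 0;                    // ordinary shared field
    static volatile boolean ready = false;  // publication flag

    static int run() {
        Thread writer = new Thread(() -> {
            data = 42;     // ordinary write...
            ready = true;  // ...made visible by this volatile write
        });
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }  // spin until the volatile write is observed
            seen[0] = data;     // happens-before guarantees this reads 42
        });
        writer.start();
        reader.start();
        try {
            writer.join();
            reader.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }
}
```

Remove `volatile` and neither the visibility nor the ordering of these accesses is defined for the racing threads; the reader's loop may even fail to terminate.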
The confusion surrounding these
issues has resulted in many intermittent software bugs, often in low-level
libraries that affect large numbers of
applications. On at least one occasion,
it has resulted in a pervasively used,
but usually incorrect, programming
idiom. (Try searching for “double-checked locking.”)
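The idiom in question is almost certainly double-checked locking; a minimal Java sketch of the broken form (class names are mine) shows why it fails:

```java
// Sketch of the classic broken double-checked locking idiom. Under the
// pre-2005 Java memory model (and for non-volatile fields even today), the
// unsynchronized first read may observe a non-null reference to a Helper
// whose fields have not yet been initialized.
class Helper {
    int value = 42;
}

class Lazy {
    private static Helper instance;  // BUG: must be volatile for this to work

    static Helper get() {
        if (instance == null) {               // first check, no lock: racy read
            synchronized (Lazy.class) {
                if (instance == null) {       // second check, under the lock
                    instance = new Helper();  // the reference may become
                }                             // visible before `value` does
            }
        }
        return instance;  // may hand out a partially constructed object
    }
}
```

Single-threaded calls behave perfectly, which is exactly why the resulting bugs are intermittent; under the revised model, declaring `instance` volatile repairs the idiom.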
This problem arises at different
levels. At the programming language
level, there must be clear rules for the
programmer’s use of shared variables.
Compilers and low-level libraries must
enforce these rules by relying on corresponding hardware properties for
memory-access instructions—the subject of the following paper.
Most of the early work on shared-
memory semantics focused on the
instruction set level and trade-offs
with hardware optimization. Roughly
concurrently, some older programming
language designs, notably Ada 83,
made very credible attempts to address
the language-level issue. Unfortunately,
none of this prevented major problems
in the most widely used languages from
escaping attention until very recently.
The Java specification was drastically
revised around 2005,[2] and has still
not completely settled.[1,3] Similarly,
the C and C++ specifications are being
revised to finally address similar
issues.[1] As a result, hardware
architects often could not have a clear
picture of the programming-language
semantics they needed to support,
making a fully satisfactory resolution
of the hardware-level issues more
difficult or impossible.
1. Adve, S. and Boehm, H.-J. Memory models: A case for
rethinking parallel languages and hardware. To appear
in Commun. ACM.
2. Manson, J., Pugh, W., and Adve, S. The Java memory
model. In Proceedings of the Symposium on Principles
of Programming Languages, 2005.
3. Sevcik, J. and Aspinall, D. On validity of program
transformations in the Java memory model. In
ECOOP 2008, 27–51.
Hans-J. Boehm (firstname.lastname@example.org) is a member of
Hewlett-Packard's Exascale Computing Lab, Palo Alto, CA.