Article development led by queue.acm.org
Contention for caches, memory controllers,
and interconnects can be eased by
contention-aware scheduling algorithms.
BY ALEXANDRA FEDOROVA, SERGEY BLAGODUROV, AND SERGEY ZHURAVLEV
Managing Contention for Shared Resources on Multicore Processors
MODERN MULTICORE SYSTEMS are designed to allow clusters of cores to share various hardware structures, such as LLCs (last-level caches; for example, L2 or L3), memory controllers, and interconnects, as well as prefetching hardware. We refer to these resource-sharing clusters as memory domains because the shared resources mostly have to do with the memory hierarchy. Figure 1 provides an illustration of a system with two memory domains and two cores per domain.
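As a concrete illustration (not part of the original article), the sketch below shows one way to discover memory domains on a Linux machine: logical CPUs that report the same last-level-cache sharing list in sysfs belong to the same domain. The sysfs path and the choice of cache index (index3 for an L3 LLC) are assumptions about the target system; on a machine whose LLC is the L2, index2 would be the appropriate directory.

/*
 * Minimal sketch: group Linux CPUs by the cores they share an LLC with.
 * Assumes an L3 last-level cache exposed as cache/index3 in sysfs.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[128], cpus[256];

    /* Scan the first 64 logical CPUs; stop at the first one not present. */
    for (int cpu = 0; cpu < 64; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;
        if (fgets(cpus, sizeof(cpus), f)) {
            cpus[strcspn(cpus, "\n")] = '\0';
            /* CPUs printing identical lists share the LLC: one memory domain. */
            printf("cpu%-3d shares its LLC with cpus %s\n", cpu, cpus);
        }
        fclose(f);
    }
    return 0;
}

On a system like the one in Figure 1, the two cores of each domain would print identical lists, while cores in different domains would not.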
Threads running on cores in the same memory domain may compete for the shared resources, and this contention can significantly degrade their performance relative to what they could achieve running in a contention-free environment. Consider an example demonstrating how contention for shared resources can affect application performance. In this example, four applications—Soplex, Sphinx, Gamess, and Namd, from the Standard Performance Evaluation Corporation