Review Articles
DOI: 10.1145/1364782.1364800
Is TM the answer for improving parallel programming?
By James Larus and Christos Kozyrakis

Transactional Memory
As computers evolve, programming changes as well. The past few years mark the start of a historic transition from sequential to parallel computation in the processors used in most personal, server, and mobile computers. This shift marks the end of a remarkable 30-year period in which advances in semiconductor technology and computer architecture improved the performance of sequential processors at an annual rate of 40%–50%. This steady performance increase benefited all software, and this progress was a key factor driving the spread of software throughout modern life.
This remarkable era stopped when practical limits on the power dissipation of a chip ended the continual increases in clock speed and limited instruction-level parallelism diminished the benefit of increasingly complex processor architectures. The era did not stop because Moore's Law^a ended. Semiconductor technology is still capable of doubling the transistors on a chip every two years. However, this flood of transistors now increases the number of independent processors on a chip, rather than making an individual processor run faster. The resulting computer architecture, named multicore, consists of several independent processors (cores) on a chip that communicate through shared memory. Today, two-core chips are common, four-core chips are coming to market, and there is every reason to believe that the number of cores will continue to double for a number of generations.
On one hand, the good news is that the peak performance of a multicore computer doubles each time the number of cores doubles. On the other hand, achieving this performance requires that a program execute in parallel and scale as the number of processors increases.
Few programs today are written to exploit parallelism effectively. In part, this is because most programmers did not have access to parallel computers, which were limited to domains with large, naturally parallel workloads, such as servers, or huge computations, such as high-performance computing. Because mainstream programming was sequential programming, most existing programming languages, libraries, design patterns, and training do not address the challenges of parallel programming. Obviously, this situation must change before programmers in general will start writing parallel programs for multicore processors.
A primary challenge is to find better abstractions for expressing parallel computation and for writing parallel programs. Parallel programming
encompasses all of the difficulties of
sequential programming, but also introduces the hard problem of coordinating interactions among concurrently executing tasks. Today, most parallel
a. The doubling every 18–24 months of the number of transistors that can be fabricated on a chip.
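The coordination problem described above can be illustrated with a minimal sketch (the counter, thread counts, and function names here are illustrative, not from the article). Two tasks update a shared value; the read-modify-write on that value is exactly the kind of interaction that must be coordinated, here with an explicit lock:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment the shared counter, serializing each update with a lock."""
    global counter
    for _ in range(iterations):
        # The read-modify-write on `counter` is the coordination hazard:
        # without the lock, two threads can read the same value and one
        # increment is silently lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: no update is lost because the lock serializes them
```

The explicit lock is the programmer's burden in this style: forgetting it, or acquiring several locks in the wrong order, produces bugs that are hard to reproduce. Transactional memory, the subject of this article, aims to shift that burden to the system by letting the programmer mark such updates as atomic.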