problem is to find a way to reap the
benefits of composability for systems
that are necessarily nondeterminate.
There are obvious advantages if the
shared memory of a parallel multiprocessor could be a virtual memory.
Parameters can be passed as pointers
(virtual addresses) without copying
(potentially large) objects. Compilation and programming are greatly
simplified because neither compilers
nor programmers need to manage the
placement of shared objects in the
memory hierarchy of the system; that
is done automatically by the virtual memory system.
Virtual memory is essential for
modular composability when modules
can share objects. Any module that
manages the placement of a shared
object in memory violates the information hiding principle because other
modules must consult it before using
a shared object. By hiding object locations from modules, virtual memory
enables composability of parallel program modules.
There are two concerns about large
virtual memory. One is that the virtual
addresses must be large so that they
encompass the entire address space in
which the large computations proceed.
The Multics system demonstrated that
a very large virtual address space—
capable of encompassing the entire file
system—could be implemented efficiently.8 Capability-based addressing5,6
can be used to implement a very large address space.
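The idea can be suggested with a small Haskell sketch (not the Multics or Burroughs mechanism itself; the type and function names below are invented for illustration): a capability is an unforgeable record naming an object and the holder's rights to it, and an address is a capability plus an offset, so the addressable space grows with the number of objects rather than being bounded by a fixed flat address width.

    -- A capability names an object and carries the holder's rights to it;
    -- a virtual address is then a capability index plus an offset.
    data AccessRight = ReadRight | WriteRight | ExecuteRight
      deriving (Eq, Show)

    data Capability = Capability
      { objectId :: Integer        -- system-wide unique object name
      , rights   :: [AccessRight]  -- what the holder may do with it
      } deriving Show

    data VirtualAddress = VirtualAddress
      { capIndex :: Int            -- index into the task's capability list
      , offset   :: Integer        -- position within the named object
      } deriving Show

    -- Resolving an address checks the capability before naming the object,
    -- so the total space of objects is not bounded by a flat address width.
    resolve :: [Capability] -> VirtualAddress -> AccessRight -> Maybe (Integer, Integer)
    resolve caps (VirtualAddress i off) needed
      | i >= 0, i < length caps
      , needed `elem` rights (caps !! i) = Just (objectId (caps !! i), off)
      | otherwise                        = Nothing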
The other concern about large virtual memory pertains to performance.
The locality principle assures us that
each task accesses a limited but dynamically evolving working set of
data objects.3 The working set is easily detected—it is the objects used in
a recent backward-looking window—
and loaded into a processor’s cache.
There is no reason to be concerned
about performance loss due to an inability to load every task’s working set
into its cache.
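A minimal sketch of working-set detection, assuming a task's references are available as a simple trace of object identifiers (the names below are invented for this illustration):

    import qualified Data.Set as Set

    -- A reference trace is the sequence of object identifiers a task touches.
    type ObjectId = Int
    type Trace = [ObjectId]

    -- The working set at the end of a trace: the distinct objects referenced
    -- in the most recent 'window' accesses (the backward-looking window).
    workingSet :: Int -> Trace -> Set.Set ObjectId
    workingSet window trace = Set.fromList (take window (reverse trace))

    -- Example: with a window of 4 references, only recently touched objects count.
    -- workingSet 4 [1,2,3,1,2,7,7,2] == Set.fromList [2,7]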
What about cache consistency? A
copy of a shared object will be present
in each sharing task’s cache. How do
changes made by one get transmitted
to the other? It would seem that this
problem is exacerbated in a highly parallel system because of the large number of processors and caches.
Here again, the research that was
conducted during the 1970s provides
an answer. We can completely avoid
the cache consistency problem by never writing to shared data. That can be
accomplished by building the memory
as a write-once memory: when a process writes into a shared object, the
system automatically creates a copy
and tags it as the current version. Each shared object thus accumulates a sequence of versions, one for each value written to it.
These value sequences are unique in a
determinate system. Determinate systems, therefore, give a means to completely avoid the cache consistency
problem and successfully run a very
large virtual memory.
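A minimal Haskell sketch of this write-once discipline, under the assumption that a shared object is represented as an append-only list of tagged versions (the Versioned type and its operations are invented names for this illustration):

    -- A shared object kept as an append-only sequence of tagged versions,
    -- newest first. A "write" never mutates; it creates a new current version.
    newtype Versioned a = Versioned [(Int, a)]
      deriving Show

    -- Create an object with its initial version, tagged 0.
    newObject :: a -> Versioned a
    newObject v = Versioned [(0, v)]

    -- Writing copies the value into a fresh version; earlier versions are
    -- untouched, so cached copies of them can never become inconsistent.
    writeObject :: a -> Versioned a -> Versioned a
    writeObject v (Versioned vs@((tag, _) : _)) = Versioned ((tag + 1, v) : vs)
    writeObject v (Versioned [])                = Versioned [(0, v)]

    -- Readers of the current version always see the newest tag.
    readCurrent :: Versioned a -> Maybe a
    readCurrent (Versioned ((_, v) : _)) = Just v
    readCurrent (Versioned [])           = Nothing

Because no version is ever overwritten, any cached copy of a version stays valid; the only question is which version a reader should name, and in a determinate system that sequence is the same in every execution.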
Functional programming languages
(such as Haskell and Sisal) currently
support the expression of large classes
of application codes. They guarantee
determinacy and support composability. Extending these languages to
include stream data types would bring
hazard-free expression to computations involving inter-module pipelines
and signal processing. We badly need a
further extension to support programming in the popular object-oriented
style while guaranteeing determinacy.
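To suggest what such stream-based expression looks like, here is a small Haskell sketch in which streams are modeled as lazy lists and the pipeline is ordinary composition of pure stages; the stage names and signal are invented for this example, and because every stage is pure the output is the same under any schedule.

    -- An inter-module pipeline over streams, modeled here as lazy lists.
    -- Every stage is a pure function, so the composed pipeline is determinate
    -- no matter how its stages are scheduled or interleaved.

    -- Stage 1: a signal source (an invented example signal).
    source :: [Double]
    source = [sin (fromIntegral n / 10) | n <- [0 :: Int ..]]

    -- Stage 2: a three-point moving-average filter.
    smooth :: [Double] -> [Double]
    smooth (a : b : c : rest) = (a + b + c) / 3 : smooth (b : c : rest)
    smooth _                  = []

    -- Stage 3: a detector that flags samples above a threshold.
    detect :: Double -> [Double] -> [Bool]
    detect threshold = map (> threshold)

    -- Composing the modules needs no knowledge of their internals.
    pipeline :: [Bool]
    pipeline = detect 0.5 (smooth source)

    main :: IO ()
    main = print (take 20 pipeline)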
We have means to express nondeterminate computation in self-contained
environments such as interactive
editors, version control systems, and
transaction systems. We sorely need
approaches that can combine determinate and nondeterminate components
into well-structured larger modules.
The full benefits of functional programming and composability cannot be realized unless memory management and thread scheduling are freely managed at runtime. In the long run, this will require merging computational memory and file systems into a single, global virtual memory.
We can now answer our original questions. Parallelism is not new; the realization that it is essential for continued progress in high-performance
computing is. Parallelism is not yet
a paradigm, but may become so if
enough people adopt it as the standard practice and standard way of
thinking about computation.
The new era of research in parallel
processing can benefit from the results
of the extensive research in the 1960s
and 1970s, avoiding rediscovery of
ideas already documented in the literature: shared memory multiprocessing,
determinacy, functional programming,
and virtual memory.
References
1. Cann, D. Retire Fortran? A debate rekindled. Commun. ACM 35, 8 (Aug. 1992), 81–89.
2. Cann, D. and Feo, J. SISAL versus FORTRAN: A comparison using the Livermore loops. In Proceedings of the 1990 ACM/IEEE Conference on Supercomputing. IEEE Computer Society Press, 1990.
3. Denning, P. Virtual memory. ACM Computing Surveys 2, 3 (Sept. 1970), 153–189.
4. Denning, P. and Tichy, W. Highly parallel computation. Science 250 (Nov. 30, 1990), 1217–1222.
5. Dennis, J. and Van Horn, E.C. Programming semantics for multiprogrammed computations. Commun. ACM 9, 3 (Mar. 1966), 143–155.
6. Fabry, R. Capability-based addressing. Commun. ACM 17, 7 (July 1974), 403–412.
7. Karp, R.M. and Miller, R.E. Properties of a model for parallel computations: Determinacy, termination, queueing. SIAM Journal of Applied Mathematics 14, 6 (Nov. 1966), 1390–1411.
8. Organick, E.I. The Multics System: An Examination of Its Structure. MIT Press, 1972.
9. Organick, E.I. Computer System Organization: The B5700/B6700. ACM Monograph Series, 1973.
10. Papadopoulos, G.M. and Culler, D.E. Monsoon: An explicit token-store architecture. In Proceedings of the 17th Annual International Symposium on Computer Architecture (1990), 82–91.
11. Sarkar, V. and Cann, D. POSC—A partitioning and optimizing SISAL compiler. In Proceedings of the 4th International Conference on Supercomputing. IEEE Computer Society Press, 1990.
12. Tou, J. and Wegner, P., Eds. Data structures in programming languages. ACM SIGPLAN Notices 6 (Feb. 1971), 171–190. See especially papers by Wegner, Johnston, Berry, and Organick.
Peter J. Denning (firstname.lastname@example.org) is the director of the Cebrowski Institute for Innovation and Information Superiority at the Naval Postgraduate School in Monterey, CA, and is a past president of ACM.
Jack B. Dennis (email@example.com) is Professor Emeritus of Computer Science and Engineering at MIT and a Principal Investigator in the Computer Science and Artificial Intelligence Laboratory. He is an Eckert-Mauchly Award recipient and a member of the NAE.