Many formalisms have been developed to help us understand distributed computing and asynchronous computation, but none has become as established or been able to yield such universal results as asymptotic complexity analysis.
That lack of progress is sure to change over the
next 50 years. New formalisms will let us analyze
complex distributed systems, producing new theoretical insights that lead to practical real-world payoffs.
Exactly what the basis for these formalisms will be is,
of course, impossible to guess. My own bet is on
resilience and adaptability. I expect we will gain
insights from these two properties, both almost universal in biological systems. For example, suppose we start with the question of how to specify a computation and ask how quickly that particular computation is likely to diverge if there is a one-bit error in the specification, or how quickly it will diverge if there is a one-bit error in the data on which it is acting. This
potential divergence leads to all sorts of questions
about resilience, then to questions about program
encoding and adaptability of software to hostile environments. The goal would be nonbrittle software
modules that plug together and just work, in the
remarkable way our own flesh repairs itself when
insulted and how our bodies adapt to transplantation
of a piece of someone else’s liver. The dream of reliable software may follow from such a theoretical
reconsideration of the nature of computation.
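To make the one-bit-error question concrete, here is a minimal C sketch of my own (not part of the original argument): the logistic map stands in for "a computation," the divergence threshold and iteration limit are arbitrary choices, and the single flipped bit plays the role of the error in the data.

    /*
     * Sketch: run the same iterative computation on an input and on a copy
     * whose lowest mantissa bit has been flipped, and report how many steps
     * it takes for the two trajectories to diverge noticeably.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    static double flip_low_bit(double x) {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);   /* reinterpret the double's bit pattern */
        bits ^= 1ULL;                     /* flip the lowest mantissa bit */
        memcpy(&x, &bits, sizeof bits);
        return x;
    }

    int main(void) {
        double a = 0.4;                   /* original input */
        double b = flip_low_bit(a);       /* input with a one-bit error */

        for (int step = 1; step <= 200; step++) {
            a = 3.9 * a * (1.0 - a);      /* logistic map: a stand-in computation */
            b = 3.9 * b * (1.0 - b);
            if (fabs(a - b) > 1e-3) {
                printf("diverged past 1e-3 after %d steps\n", step);
                return 0;
            }
        }
        printf("still within 1e-3 after 200 steps\n");
        return 0;
    }

For a chaotic computation like this one, the single-bit error overwhelms the result within a few dozen iterations; a theory of resilience would ask how programs might be specified and encoded so that such errors are contained rather than amplified.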
As for the computing machines themselves, it is
worth noting that all technology generally seems to
have a “use by” date. Bronze gave way to iron, horses
gave way to automobiles, and more recently analog
television signals finally and belatedly gave way to
digital television, long after digital techniques were
emulating analog, following years and years of backward compatibility. We're just reaching that stage with
regard to the classical von Neumann architecture for
a single digital computational processor. We have
spent the last few decades maintaining the appearance
of a von Neumann machine with uniformly addressable memory and a single instruction stream, even
though, in the interest of speed, we have had multiple
levels of memories (and caches to hide them) and
many parallel execution units (and pipeline stalls to
hide them).
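A small, self-contained C sketch (my illustration, not the essay's) hints at what the uniform-memory facade is hiding: walking the same array with a cache-friendly stride and with a cache-hostile stride performs the same number of additions, yet usually takes very different amounts of time, because the large jumps defeat the hardware prefetcher and the TLB.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M ints: larger than a typical last-level cache */

    /* Touch every element of a[] exactly once, visiting it with the given stride. */
    static long walk(const int *a, size_t stride) {
        long sum = 0;
        for (size_t start = 0; start < stride; start++)
            for (size_t i = start; i < N; i += stride)
                sum += a[i];
        return sum;
    }

    static void timed(const int *a, size_t stride) {
        clock_t t0 = clock();
        long sum = walk(a, stride);
        clock_t t1 = clock();
        printf("stride %6zu: %.3f s (checksum %ld)\n",
               stride, (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
    }

    int main(void) {
        int *a = calloc(N, sizeof *a);
        if (!a) return 1;
        timed(a, 1);       /* sequential: prefetcher- and cache-friendly */
        timed(a, 4096);    /* 16KB jumps: same work, but hostile to caches and TLB */
        free(a);
        return 0;
    }

The abstraction says every address costs the same; the measurement usually says otherwise.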
As the die size of our chips gets so small that we cannot make it smaller and still maintain the digital abstraction of what goes on underneath, we have begun to see the emergence of multicore chips. And, in traditional computing-machinery style, we immediately see an exponential increase in the number of cores on each chip. Each of these cores is itself a traditional von Neumann abstraction. The latest debate
is whether to make that whole group of cores appear
as a single von Neumann abstraction or bite the bullet and move beyond von Neumann.
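The "single abstraction" option is essentially what shared-memory threading gives us today. The C sketch below is my own illustration (the four-thread count is an assumption, not anything from the essay): several POSIX threads share one address space and each sums a slice of the same array, so the programmer still sees one uniformly addressable memory even though the work runs on separate cores.

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    #define N        (1 << 22)
    #define NTHREADS 4            /* assumed core count for this sketch */

    static int data[N];

    struct slice { size_t begin, end; long sum; };

    /* Each thread sums its own slice of the shared array. */
    static void *sum_slice(void *arg) {
        struct slice *s = arg;
        s->sum = 0;
        for (size_t i = s->begin; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void) {
        for (size_t i = 0; i < N; i++)
            data[i] = 1;

        pthread_t tid[NTHREADS];
        struct slice slices[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            slices[t].begin = (size_t)t * N / NTHREADS;
            slices[t].end   = (size_t)(t + 1) * N / NTHREADS;
            pthread_create(&tid[t], NULL, sum_slice, &slices[t]);
        }

        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += slices[t].sum;   /* combine per-thread partial sums */
        }
        printf("total = %ld (expected %d)\n", total, N);
        return 0;
    }

Whether that illusion can keep scaling as core counts grow exponentially is exactly the debate described above.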
Thus we are currently witnessing the appearance of
fractures in the facade of the von Neumann abstraction. Over the next 50 years we will pass the “use by”
date of this technology and adopt new computational
abstractions for our computing machinery.
The most surprising thing about computation over
the past 50 years has been the radical new applications
that have developed, usually unpredicted, changing
the way we work and live, from spreadsheets to email
to the Web to search engines to social-interaction sites
to the convergence of telephones, cameras, and email
in a single device. Rather than make wildly speculative—and most likely wrong—predictions about
applications, I will point out where a number of
trends are already converging.
A key driver for applications is communication
and social interaction. As a result, wireless networks
are increasingly pervasive and indispensable. A key
medical development over the past decade has been
the implanting of chips that directly communicate
with people’s nervous systems; for example, more
than 50,000 people worldwide now hear through the
miracle of a cochlear implant embedded inside their
heads (running C code, no less). In clinical trials blind
patients are starting to see, just a little, with embedded chips, and quadriplegics have begun to control
their environments by thinking what they want to
happen and having signals generated in their brains
detected and communicated by embedded chips.
Over the next 50 years we will bring computing
machinery inside our bodies and connect ourselves to
information sources and each other through these
devices. A brave new world indeed.
RODNEY BROOKS (brooks@csail.mit.edu) is the Panasonic
Professor of Robotics in the MIT Computer Science and Artificial
Intelligence Laboratory, Cambridge, MA.