cal processes are the most successful examples. 1 Newtonian models of
planetary motion give highly reliable
predictions of the future positions of
planets, asteroids, comets, and man-made vehicles. Jay Forrester’s system
dynamics models were very reliable
for material and information flows
in industrial plants. Queueing network models have been very reliable
for forecasting throughputs and response times of communication networks and assembly lines. Finite element models have been very reliable
for determining whether airplanes
will fly or buildings will withstand earthquakes.
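As a concrete instance of the queueing models mentioned above, here is a minimal sketch (not from the column itself) of the classic M/M/1 response-time formula, R = 1/(μ − λ); the arrival and service rates are made-up numbers for illustration.

```python
# Sketch: a single-queue (M/M/1) response-time model, the simplest
# instance of the queueing network models discussed above.
# The rates below are hypothetical, chosen only to illustrate the formula.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time R = 1 / (mu - lambda) for an M/M/1 queue.

    The forecast is reliable only while the recurrence holds:
    arrivals must stay below the server's capacity.
    """
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Pushing load toward capacity makes the forecast response time blow up,
# which is exactly the bottleneck congestion the model exploits.
print(mm1_response_time(4.0, 10.0))  # light load
print(mm1_response_time(9.0, 10.0))  # near saturation
```

The same recurrence (congestion at a bottleneck) underlies the larger network models used for communication systems and assembly lines.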
The common feature of these physical models is that they describe and
exploit natural recurrences—laws of
nature. We can assume that Newtonian physics, system feedback loops,
congestion at bottleneck queues, and
forces in rigid structures will continue
to behave the same way in the future.
We do not have to worry that the assumptions of the model will be invalid.
Our problems with forecasts arise
when we wrongly believe model assumptions or parameter forecasts will
be valid. In other words, we assume a
recurrence that will not happen.
Many things can invalidate our assumptions of recurrences: human
declarations in social systems, chaotic
or low-probability disruptive events, inherently complex systems whose rules
of operation are unknown, complex
adaptive systems whose rules change,
environmental changes that invalidate key assumptions, and unanticipated interactions, especially those
never before seen. This list is hardly exhaustive.
Of these, I think the first is the most
underappreciated. Human social systems are networks of commitments,
and most commitments ultimately
follow from human declarations. The
timing and nature of declarations are
unpredictable. Whether a technology
is adopted or sustained in a community depends on the support of its social structure and belief systems, both
of which resulted from previous declarations. 3 Seely Brown and Duguid,
mentioned earlier, give numerous examples of technology forecasts foiled
by human declarations.
We know from experience that
many validated models deteriorate
over time. A locality principle is at
work: the model assumptions are less
likely to change over a short period
than over a long period. Our short-range predictions are better than our
long-range predictions. As a consequence, we need to frequently revalidate models to maintain our confidence that they still apply to at least
the current circumstances.
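The locality principle can be illustrated with a toy sketch (illustrative, not from the column): two runs of the chaotic logistic map whose starting states differ by one part in a billion agree closely over a short horizon and diverge completely over a long one.

```python
# Sketch of the locality principle using the chaotic logistic map
# x' = 4x(1 - x): a "model" whose initial state is off from "reality"
# by 1e-9 tracks it well at short range, then diverges completely.

def logistic_trajectory(x0: float, steps: int) -> list[float]:
    """Iterate the logistic map at full chaos (r = 4) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

reality = logistic_trajectory(0.3, 50)
model = logistic_trajectory(0.3 + 1e-9, 50)  # tiny assumption error

short_error = abs(reality[5] - model[5])    # short-range forecast
long_error = abs(reality[50] - model[50])   # long-range forecast
print(short_error, long_error)
```

The short-range error stays microscopic while the long-range error grows to the size of the state itself, which is why frequent revalidation matters more than raw modeling power.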
What about long-term predictions?
Most often, they are just flat-out wrong,
as in the examples Dan Gardner and
Dave Walter gave us. Occasionally they
are correct but way off in the timing.
Researchers at MIT predicted in the
1960s that computer utilities—
forerunners of today’s “cloud”—would be
common by the 1980s; they were off
by 30 years. Alan Kay predicted in the
1970s that personal computers would
revolutionize computing; he was off by
20 years. Alan Turing speculated in 1950 that, by the year 2000, conversation
machines would fool an average human
interrogator at least 30% of the time
after five minutes of questioning. 4 He also thought that memory capacity for the machine's database
would be the main obstacle. By 2012,
our natural language systems are not
close to this goal even though we have
the memory capacity—but maybe in a
few more years they will.
The few long-range predictions that
do succeed, albeit late, give us a forlorn hope
that we can at least get the outcome
right, even if the timing is off.
Nevertheless, the dream of good
prediction by machine lives on. That
Scientific American article mentioned
earlier envisions a project to build a
computing system with more storage and computing power than ever
before, connected globally to sensors
and personal information. With new
data mining methods to be developed,
the system would find correlations in
the data, and use them for predictions.
Despite the soaring rhetoric, the system is no more likely to be successful
than any other prediction machine,
except when it can find and validate recurrences. It is unlikely to be successful whenever the outcome can depend
on human declarations or unpredictable events.
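A small sketch (with made-up data, not from the column) shows why mined correlations are not recurrences: among a few hundred series of pure noise, some pair will correlate strongly by chance, yet that "pattern" predicts nothing.

```python
# Sketch: correlation mining on pure noise. With enough random series,
# some pair correlates strongly by chance -- a "discovery" with no
# recurrence behind it, and therefore no predictive value.
import random

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)  # arbitrary seed, only for repeatability
series = [[random.random() for _ in range(20)] for _ in range(200)]

# Exhaustively "mine" all pairs for the strongest correlation.
best = max(
    abs(pearson(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest correlation found in pure noise: {best:.2f}")
```

More data and more computing power only increase the number of such accidental correlations; only validation against a genuine recurrence separates them from laws of nature.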
We seek technology predictions in an
attempt to reduce our risks, losses,
and missed opportunities. We do so
against great odds. Unpredictability
arises not from insufficient information about a system’s operating laws,
from inadequate processing power,
or from limited storage. It arises because the system’s outcomes depend
on unpredictable events and human
declarations. Do not be fooled into
thinking that wise experts or powerful
machines can overcome such odds.
If you are called on to make forecasts, do so with great humility. Make
sure your models are validated and
that their assumed recurrences fit the
world you are forecasting. Ground
your speculations in observable data,
or else label them as opinion. Be skeptical about your ability to make longer-term predictions, even with the best of
models. Do not worry about the forecasts made by experts—they are no
better than forecasts you can make.
Often, the most powerful and useful
statement you can make when asked
for a prediction is: “I don’t know.”
1. Denning, P. Modeling reality. American Scientist 78 (Nov.–Dec. 1990); http://denninginstitute.com/pjd/
2. Denning, P. Innovating the future: From ideas to adoption. The Futurist, World Future Society (Jan.–Feb. 2012), 40–45.
3. Schon, D. Beyond the Stable State. Norton, 1971.
4. Turing, A.M. Computing machinery and intelligence. Mind 59 (1950), 433–460; http://www.loebner.net/
Peter J. Denning (email@example.com) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, CA, is Editor of ACM Ubiquity, and is a past president of ACM.
Copyright held by author.