The passage of time is essential to ensuring
the repeatability and predictability of software
and networks in cyber-physical systems.
BY EDWARD A. LEE
Most microprocessors are
embedded in systems
that are not, first and foremost, computers. Rather,
these systems are cars, medical devices, instruments,
communication systems, industrial robots, toys,
and games. Key to them is that they interact with
physical processes through sensors and actuators.
However, they increasingly resemble general-purpose
computers, becoming networked and intelligent, often
at the cost of dependability.
Even general-purpose computers are increasingly
asked to interact with physical processes. They
integrate media (such as video and audio), and through
their migration to handheld platforms and pervasive
computing systems, sense physical dynamics and
control physical devices. They don’t always do it well.
The technological basis that engineers and computer
scientists have chosen for general-purpose computing
and networking does not support these applications
well. Changes that ensure this support could improve
them and enable many others.
The foundations of computing, rooted in Turing,
Church, and von Neumann, are about the
transformation of data, not physical dynamics. Computer scientists must rethink the core abstractions if they truly
want to integrate computing with physical processes. That’s why I focus here
on a key aspect of physical processes—
the passage of time—that is almost entirely absent in computing. This is not
just about real-time systems, which accept the foundations and retrofit them
with temporal properties. Although that
technology has much to contribute to
systems involving physical processes, it
alone cannot solve the problem of
computers functioning in the physical
world, because it is built on flawed technological foundations.
Many readers might object here.
Computers are so fast that surely the
passage of time in most physical processes is so slow it can be handled
without special accommodation. But
modern techniques (such as instruction scheduling, memory hierarchies,
garbage collection, multitasking, and
reusable component libraries that
do not expose temporal properties in
their interfaces) introduce enormous
variability and unpredictability into
computer-supported physical systems. These innovations are built on
a key premise: that time is irrelevant
to correctness and is at most a measure of quality. Faster is better, if you
are willing to pay the price in terms
of power consumption and hardware.
By contrast, what these systems need
is not faster computing but physical
actions taken at the right time. Timeliness is a semantic property, not merely a measure of quality.
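The variability described above is easy to observe. The following sketch (not from the article; the 10ms period and iteration count are illustrative choices) runs a nominally periodic task on a general-purpose operating system and records how far each wake-up drifts from its intended release time. Scheduling, memory hierarchies, and garbage collection all show up as jitter in the measured lateness.

```python
# Illustrative sketch: measure the timing jitter of a nominally
# periodic software task on a general-purpose OS. The observed
# lateness varies from iteration to iteration, which is exactly the
# unpredictability the text describes.
import time

PERIOD = 0.010   # intended period: 10 ms (illustrative)
N = 200          # number of iterations (illustrative)

deadline = time.perf_counter()
errors = []      # lateness of each wake-up, in microseconds
for _ in range(N):
    deadline += PERIOD
    # Sleep until the next intended release time (never a negative span).
    time.sleep(max(0.0, deadline - time.perf_counter()))
    # Record how late we actually woke up relative to the deadline.
    errors.append((time.perf_counter() - deadline) * 1e6)

print(f"mean lateness: {sum(errors) / N:.1f} us, "
      f"worst: {max(errors):.1f} us")
```

On a typical desktop OS the worst-case lateness can exceed the mean by orders of magnitude, and no interface in the program exposes or bounds it: the timing is an accident of the platform, not a property of the program's semantics.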
But surely the “right time” is expecting too much, you might say. The
physical world is neither precise nor
reliable, so why should we demand
such properties from computing systems? Instead, these systems must be
robust and adaptive, performing reliably despite being built out of unreliable components. While I agree that
systems must be designed to be robust,
we should not blithely discard the reliability we have. Electronics technology
is astonishingly precise and reliable,