Complete discrete mathematical models do not exist of complex physical systems, that is, the controlled process shown in the figure in this column.
All of this leads to the conclusion that the most effective approach to dealing with the safety of computer-controlled systems is to focus on creating the safety-related requirements. System and software requirements development is necessarily a system engineering problem, not a software engineering problem. The solution is definitely not to build a software architecture (design) first and generate the requirements later, as has surprisingly been suggested by some computer scientists.7
Some features of a potential solution
can be described. It will likely involve using a model or definition of the system.
Standard physical or logical connection
models will not help. For most such
models, analysis can identify only component failures. In some, it might be
possible to identify component failures
leading to hazards, but this is the easy
part of the problem and omits software
and humans. Also, to be most effective, the model should include controllers that are humans and organizations
along with social controls. Most interesting systems today are sociotechnical.
Using a functional control model,
analysis tools can be developed to analyze the safety of complex systems. Information on an approach that is being
used successfully on the most complex
systems being developed today can
be found in Engineering a Safer World1
and on the related website http://psas.
1. Leveson, N.G. Engineering a Safer World. MIT Press, 2012.
2. Leveson, N.G. Safeware: System Safety and Computers. Addison-Wesley, 1995.
3. Leveson, N.G. The role of software in spacecraft accidents. AIAA Journal of Spacecraft and Rockets 41, 4 (July 2004).
4. Leveson, N.G. and Thomas, J.P. STPA Handbook.
5. Leveson, N.G. et al. Requirements specification for process-control systems. IEEE Transactions on Software Engineering SE-20, 9 (Sept. 1994).
6. Lutz, R. Analyzing software requirements errors in safety-critical, embedded systems. In Proceedings of the International Conference on Software Requirements. IEEE (Jan. 1992).
7. National Research Council. Software for Dependable Systems: Sufficient Evidence? National Academies Press, 2007.
Nancy Leveson (firstname.lastname@example.org) is a professor of
Aeronautics and Astronautics at the Massachusetts
Institute of Technology (MIT), Cambridge, MA, USA.
Copyright held by author.
inputs to a software system includes
both valid and invalid inputs, potential
time validity of inputs (an input may be
valid at a certain time but not at other
times), and all the possible sequences
of inputs when the design includes history (which is almost all software). This
domain is too large to cover any but a
very small fraction of the possible inputs in a realistic timeframe.
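The scale of this input domain can be illustrated with a hypothetical back-of-the-envelope calculation; the alphabet size, history depth, and test rate below are illustrative assumptions, not figures from the column:

```python
# Illustrative calculation (hypothetical numbers): even a tiny interface
# with 10 distinct input values, where behavior depends on the last 20
# inputs (history), yields an intractable test domain.

distinct_inputs = 10    # assumed size of the input alphabet
history_length = 20     # assumed depth of history that affects behavior

sequences = distinct_inputs ** history_length   # 10^20 input sequences

tests_per_second = 1_000_000                    # assumed (optimistic) rate
seconds_per_year = 60 * 60 * 24 * 365
years = sequences / (tests_per_second * seconds_per_year)

print(f"{sequences:.0e} sequences, about {years:.1e} years of testing")
```

Even at a million tests per second, covering these sequences would take millions of years, which is the sense in which the domain is "too large to cover."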
˲ System states: Like the number of potential inputs, the number of states in these systems is enormous. For example, TCAS, an aircraft collision avoidance system, was estimated to have 10^40 possible states.5 Note that collision avoidance is only one small part of the automation that will be required to implement autonomous (and even
˲ Coverage of the software design:
Taking a simple measure of coverage
like “all the paths through the software
have been executed at least once during testing” involves enormous and impractical amounts of testing time and
does not even guarantee correctness,
let alone safety.
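The impracticality behind this coverage bullet can be sketched with another hypothetical calculation; the branch count and execution rate are invented for illustration:

```python
# A program with n sequential, independent if-statements has 2**n distinct
# execution paths (loops make it far worse). Numbers are illustrative.

branches = 300                      # modest for real control software
paths = 2 ** branches               # roughly 2e90 paths

# Even executing a billion paths per second:
tests_per_second = 1_000_000_000
seconds_per_year = 60 * 60 * 24 * 365
years_to_cover = paths / (tests_per_second * seconds_per_year)

print(f"2^{branches} paths, about {years_to_cover:.1e} years to run each once")
```

And, as the column notes, even full path coverage would demonstrate only that each path executed, not that its result was correct or safe.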
˲ Execution environments: In addition to the problems listed so far, the
execution environment becomes significant when the software outputs are related to real-world states (the controlled
process and its environment) that may
change frequently, such as weather,
temperature, altitude, pressure, and so
on. The environment includes the social
policies under which the system is used.
In addition, as seen in the much-repeated Dijkstra quote, testing can show only the presence of errors, not their absence.
Finally, and perhaps most important, even if we could exhaustively test the software, virtually all accidents involving software stem from unsafe requirements.2,6 Testing can show only the consistency of the software with the requirements, not whether the requirements themselves are flawed. While testing is important for any system, including software, it cannot be used as a measure or validation of acceptable safety. Moving this consistency analysis to a higher level (validation) only shifts the problem; it does not solve it.
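As a hypothetical illustration of this point, the toy alarm function below is fully consistent with its stated requirement and passes every requirement-based test, yet remains hazardous; the threshold, function, and failure value are all invented for this sketch:

```python
# Hypothetical sketch: software that is fully *consistent* with its stated
# requirement, and passes every requirement-based test, yet the requirement
# itself is unsafe because it omits the sensor-failure case.

REQUIREMENT_THRESHOLD = 100  # "Sound the alarm when pressure exceeds 100 psi."

def alarm(pressure_reading):
    # Faithful implementation of the (flawed) requirement.
    return pressure_reading > REQUIREMENT_THRESHOLD

# Requirement-based tests: all pass, so testing "validates" the software.
assert alarm(150) is True
assert alarm(50) is False

# But a failed sensor reports -1, and the requirement says nothing about it:
# the alarm stays silent during a real overpressure event.
assert alarm(-1) is False   # consistent with the requirement, still hazardous
```

No amount of testing against this requirement would reveal the hazard, because the flaw is in the requirement, not the code.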
Simulation: All simulation depends on assumptions about the environment in which the system will execute. Autonomous cars have now been subjected to billions of cases in simulators, and have still been involved in accidents as soon as they are used on real roads. The problems described for testing apply here, but the larger problem is that accidents occur when the assumptions used in development and in the simulation do not hold. Another way of saying this is that accidents occur because of what engineers call "unknown unknowns" in engineering design. We have no way to determine what the unknown unknowns are. Therefore, simulation can show only that we have handled the things we thought of, not the ones we did not think about, assumed were impossible, or unintentionally left out of the simulation environment.
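A minimal sketch of this failure mode, with invented numbers: a simulator that samples scenarios only from the parameter ranges its developers assumed will never exercise a condition outside those ranges, so no failure appears in any number of runs:

```python
import random

# Hypothetical sketch: the simulator samples road friction only from the
# range its developers assumed (dry to wet pavement). Black ice lies
# outside that range: an "unknown unknown" the simulation never exercises.

ASSUMED_FRICTION = (0.5, 0.9)   # assumed dry/wet pavement only
SAFE_STOP_DISTANCE = 80.0       # meters available to stop (illustrative)

def braking_distance(speed_mps, friction):
    # Simple physics model: d = v^2 / (2 * mu * g)
    return speed_mps ** 2 / (2 * friction * 9.81)

random.seed(0)
simulated_failures = sum(
    braking_distance(25.0, random.uniform(*ASSUMED_FRICTION)) > SAFE_STOP_DISTANCE
    for _ in range(100_000)
)
print("failures in simulation:", simulated_failures)   # zero: looks safe

# Black ice (friction around 0.1) was outside the assumed range, so the
# simulation never sees that stopping takes roughly 300+ meters.
print("stopping distance on ice:", braking_distance(25.0, 0.1))
```

The simulation honestly reports zero failures, because the accident scenario was excluded by assumption, not by safe design.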
Formal verification: Virtually all accidents involving software stem from unsafe requirements, not implementation errors. Of course, it is possible that errors in the implementation of safe requirements could lead to an accident; however, in the hundreds of software-related accidents I have seen over 40 years, none has involved erroneous implementation of correct, complete, and safe requirements. When I look at accidents where it is claimed that the implemented software logic led to the loss, I always find the software logic flaws stem from a lack of adequate requirements. Of the three examples shown in the sidebar in this column, none (nor the hundreds of other accidents I have seen) would have been prevented by formal verification methods. Formal verification (or even formal validation) can show only the consistency of
two formal models. Complete discrete
mathematical models of complex physical systems do not exist.
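The limits of such a consistency check can be sketched with a toy example; the door controller, its formal property, and the train scenario are all invented. Exhaustive verification proves the model satisfies the property, but the property itself is an unsafe requirement:

```python
from itertools import product

# Hypothetical sketch: exhaustive "model checking" of a tiny door-controller
# model against a formal property. The verification is sound: the model
# provably satisfies the property. But the property itself is unsafe, since
# it never mentions train motion, so "door opens while moving" is allowed.

def controller(open_cmd, door_closed):
    # Model: open the door whenever commanded and currently closed.
    return "open" if (open_cmd and door_closed) else "hold"

# Formal property (the requirement): "never open an already-open door."
def property_holds(open_cmd, door_closed, action):
    return not (action == "open" and not door_closed)

# Exhaustive verification over the whole (tiny) state space.
verified = all(
    property_holds(c, d, controller(c, d))
    for c, d in product([True, False], repeat=2)
)
print("formally verified:", verified)   # model is consistent with the spec

# Yet nothing in the spec forbids opening while the train is moving:
print(controller(True, True))           # "open", even if the train is at speed
```

The proof is correct, but it establishes only consistency between two formal models; whether the specified behavior is safe in the physical world lies outside both of them.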