novices tend to develop declarative explanations of software that are driven by events, and don't seem to develop notions of objects at all. The "Commonsense Computing" group has shown us that novices can create algorithms for a variety of problems, although that doesn't really tell us how they think software and software development work in the world around them.
We are now in the same position as educators in physics (or biology, chemistry, or other sciences). Students have theories about how Wii controllers, voicemail menu systems driven by voice commands, touch screens, and Google and Bing search work. If these novice theories "mutilate" their minds, then it's done, it's happened to everyone, and we'd best just get on with dealing with it. There is no chance to place a theory in their minds before they learn anything else. We have to start from where the students are, and help them develop better theories that are more consistent and more correct. There is no first, but we can influence next.
January 13, 2011
Once again, bad software has struck. From 7:30 a.m. to late afternoon on November 10, 2010, Internet access and email were unavailable to most customers of Swisscom, the main mobile services provider in Switzerland. Given how wired our lives have become, such outages can have devastating consequences. As an example, customers of some of the largest banks in Switzerland cannot access their accounts on-line unless they type in an access code, one-time-pad style, sent to their cellphone when they log in.
That is all the news we will see: something really bad happened, and it was due to a software bug. A headline for a day or two, then nothing. What we will miss in this case, as with almost all software disasters (most recently, the Great Pre-Christmas Skype Outage of 2010), is the analysis: what went wrong, why it went wrong, and what is being done to ensure it does not go
wrong again. Systematically applying such analysis is the most realistic technique available today for breakthrough improvements in software quality. The IT industry is stubbornly ignoring it. It is our responsibility as software engineering professionals to change that.
I have harped on this theme before [1, 2, 3] and will continue to do so until the attitude changes. Quoting from the first of them:
Airplanes today are incomparably safer than 20, 30, 50 years ago: 0.05 deaths per billion kilometers. That's not by accident. Rather, it's by accidents. What has turned air travel from a game of chance into one of the safest modes of traveling is the relentless study of crashes and other mishaps. In the U.S. the National Transportation Safety Board has investigated more than 110,000 accidents since it began its operations in 1967. Any accident must, by law, be analyzed thoroughly; airplanes themselves carry the famous "black boxes" whose only purpose is to provide evidence in the case of a catastrophe. It is through this systematic and obligatory process of dissecting unsafe flights that the industry has made almost all flights safe.
Now consider software. No week passes without the announcement of some debacle due to "computers," meaning, in most cases, bad software. The indispensable Risks Digest Forum [4] and many pages around the Web collect software errors; several books have been devoted to the topic. A few accidents have been investigated thoroughly; two examples are Nancy Leveson's milestone study of the Therac-25 patient-killing medical device [5], and Gilles Kahn's analysis of the Ariane 5 crash, which Jean-Marc Jézéquel and I used as a basis for our 1997 article [6]. Both studies improved our understanding of software engineering, but these are exceptions. Most of what we have elsewhere is made of hearsay and partial information, and plain urban legends, like the endlessly repeated story about the Venus probe that supposedly failed because a period was typed instead of a comma, most likely a canard.
1. Bertrand Meyer, The One Sure Way to Advance Software Engineering, http://bertrandmeyer.software-engineering/, August 21, 2009.
2. Bertrand Meyer, Dwelling on the Point, http://point/, November 29, 2009.
3. Bertrand Meyer, Analyzing a Software Failure, http://software-failure/, May 24, 2010.
4. Peter G. Neumann (moderator), The Risks Digest: Forum on Risks to the Public in Computers and Related Systems, http://catless.ncl.ac.uk/risks/.
5. Nancy Leveson, Safeware: System Safety and Computers, Addison-Wesley, 1995.
6. Jean-Marc Jézéquel and Bertrand Meyer, Design by Contract: The Lessons of Ariane, Computer 30, 1, 1997.
Mark Guzdial is a professor at the Georgia Institute of Technology. Bertrand Meyer is a professor at ETH Zurich and ITMO (St. Petersburg) and chief architect of Eiffel Software.