features. Attackers are seldom able
to make clean penetrations that leave
no traces, and when they insert their
own malware, they often mess up.
Stuxnet, a worm that damaged Iranian nuclear centrifuges, is a famous example. Although attribution has been difficult, security experts consider it highly likely that Stuxnet was a collaboration between the U.S. and Israel, based on the coding style, similarities to other programs, and the variable names used. And, of course, the creators of Stuxnet did slip up fairly substantially: the worm escaped into the wild from the Iranian facilities.
Q: When I grew up, operating systems
were much smaller and more cleanly
organized. Some early operating systems were under 50,000 lines of code. Today's major operating systems are closing in on 100 million lines, and one of the open source Linux distributions is near 500 million. None of those systems has been
formally verified. They are cited as
premier examples of spaghetti code.
And yet today’s major operating systems are amazingly reliable compared to the old. How do you explain
the rise of reliability along with the
rise of complexity?
A: Much of the progress is due to the
superabundance of storage space and
cycles. This enables us to tolerate the
bloat induced by patches and repairs, most of which apply to the mass of software outside the operating system kernel. The accumulation of patches does
generally make systems more reliable.
Further, designers now devote a lot of
resources to programs that monitor
other programs, and they test far more
exhaustively than before. Even though
there are strange states you can push
systems into—which is what many
hostile exploits do—those states tend
not to occur in the situations that matter to regular users most of the time.
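To make the "programs that monitor other programs" point concrete, here is a minimal watchdog sketch, assuming a POSIX system; the monitored command ./service is a hypothetical placeholder:

    /* Watchdog sketch: supervise a child process and restart it on
       abnormal exit. POSIX only; "./service" is a hypothetical program. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {                 /* fork failed; nothing to monitor */
                perror("fork");
                return 1;
            }
            if (pid == 0) {                /* child: run the monitored program */
                execl("./service", "service", (char *)NULL);
                perror("execl");           /* reached only if exec fails */
                _exit(127);
            }
            int status;
            if (waitpid(pid, &status, 0) < 0) {  /* parent: wait for the child */
                perror("waitpid");
                return 1;
            }
            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                break;                     /* clean exit: stop supervising */
            fprintf(stderr, "service died; restarting\n");
            sleep(1);                      /* brief backoff before restart */
        }
        return 0;
    }

Production monitors add rate limiting, logging, and health probes, but this supervise-and-restart loop is the core pattern.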
Many of the prescriptions of software engineering are violated routinely. For example, we know how to
eliminate the continuing vulnerability to buffer overruns—but we have
not done so. Still, progress has been
substantial. Disciplined coding practices and isolation techniques such as
sandboxing have been major factors in this progress.
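The buffer-overrun remark can be made concrete with the textbook fix: the unbounded gets() was removed from the C standard precisely because it cannot limit input length, while fgets() takes the buffer size and truncates instead of overwriting adjacent memory. A minimal sketch:

    /* Bounded input instead of an overrun: gets(name) would write past
       the end of name[] on long input (it was removed in C11 for this
       reason); fgets() never writes more than the given size. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char name[64];

        if (fgets(name, sizeof name, stdin) == NULL)
            return 1;
        name[strcspn(name, "\n")] = '\0';  /* strip trailing newline */

        printf("hello, %s\n", name);
        return 0;
    }

Memory-safe languages eliminate the entire class of error, which is why the vulnerability is considered solvable.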
Protection against bioterrorism does not differ much from protection against natural pandemics. Likewise, restoration of computer networks proceeds in much the same way whether they are brought down by a geomagnetic storm, an electromagnetic pulse from a nuclear explosion in space, or a cyber attack.
Q: But what about non-government
organizations? What should they do?
A: A business or educational institution should worry primarily about
the mundane attacks that affect its
operations. This effort is of general
value because protection against the
mundane also reduces exposure to
massive attacks on the Internet.
Standard measures such as antivirus software, firewalls, two-factor
authentication, security training, and
basic security practices are what regular enterprises should concentrate on.
All organizations should make it a
priority to protect their data through
regular, hard-to-corrupt backups. The
ability to restore data is an essential
part of resilience and recovery.
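One simple way to make restores trustworthy, sketched below, is to record a digest of each file at backup time and verify it on restore. The FNV-1a hash here merely keeps the example self-contained; a real deployment would use a cryptographic hash such as SHA-256:

    /* Compute a digest of a file so a restored copy can be checked
       against the value recorded when the backup was made. FNV-1a is
       used only to keep the sketch dependency-free; prefer SHA-256 or
       similar in practice. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t fnv1a_file(FILE *fp) {
        uint64_t h = 0xcbf29ce484222325ULL;   /* FNV-1a 64-bit offset basis */
        int c;
        while ((c = fgetc(fp)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 0x100000001b3ULL;            /* FNV-1a 64-bit prime */
        }
        return h;
    }

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 2;
        }
        FILE *fp = fopen(argv[1], "rb");
        if (fp == NULL) {
            perror(argv[1]);
            return 1;
        }
        printf("%016llx  %s\n", (unsigned long long)fnv1a_file(fp), argv[1]);
        fclose(fp);
        return 0;
    }

Comparing the stored and recomputed digests detects silent corruption; keeping the digests on separate, write-once media makes the backups correspondingly harder to corrupt.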
Enterprises can further increase their resilience by participating in backup communication networks, including even amateur (ham) radio. And I could go on to list more steps of a similar nature, all helpful in securing cyberspace.
Q: All the measures you have cited
are standard ones. They have been
advocated by security experts for decades. Why has your ACM Ubiquity essay (see https://bit.ly/2G5b76S) attracted so much attention?
A: The utter familiarity of this advice is a key part of my argument.
We have known for decades of these
methods for improving cybersecurity.
They are taught widely in courses and
discussed in books. They are not secret. Yet most of the damaging cyber
attacks we have suffered could have
been prevented by implementing these well-known measures.
So the big questions are: Why were
those steps not taken, and what has
been the result? My (controversial)
answer is that cybersecurity has simply not been very important. The “Cyber Pearl Harbor” scenarios are seen as far removed from the day-to-day operations of civilian enterprises. What those enterprises have to deal with is regular crime and regular mistakes, similar to what they have always faced in the physical realm. There have been a few headline-grabbing cyber attacks involving theft of personal identification information from firms with large databases. Though such attacks are a small percentage of all cyber attacks, they illustrate my point. The companies
involved did not consider the risk of
massive theft to be important enough
to invest in strong security measures.
They now see that they were wrong.
We have an online ecosystem in
which crime is being kept within
bounds by countermeasures from enterprises and law enforcement agencies.
In almost all cases, criminals aim to
steal data or money without divulging
their identities or destroying systems.
As the economy and society at large
increase their dependence on information technologies, crime is migrating into cyberspace. As a result,
more resources are being put into
cybersecurity. This is happening at a
measured pace without drastic reengineering of our systems.
Q: Security experts have said that
much software code is a mess of
“spaghetti” that cannot be verified as
correct. There is a dark industry that
painstakingly searches through the
tangled code and sells its findings as “zero-day exploits” on the black
market. Purchasers of these exploits
are able to launch surprise attacks
and inflict serious damage before the
victims are able to defend themselves
with new patches. On what basis have
you concluded that “spaghetti code”
is not a great risk?
A: I have not concluded that at all.
“Spaghetti code” is a risk, and is indeed continually being exploited by
attackers. What I point out is that
“spaghetti code” also has positive features.