Nontechnical factors impacting
cybersecurity reflect deep political,
social, and economic divisions within
our society. These problems include
shortened development cycles; the
inability to attract and retain the best
workers; and the general failure of our
schools at early science, technology,
engineering, and math (STEM) education. While it is certainly possible that
the need to secure our computers will
force us to find solutions to these other
problems, such Pollyannaish hopes
seem unlikely to be realized.
In recent years there has been an effort to liken cybersecurity to a public
health problem. Just as hand washing
and coughing on our sleeves can help
halt the spread of influenza, advocates
say, good “cyber hygiene” such as running up-to-date anti-virus software
and only going to clean Web sites run
by reputable organizations can help
stop the spread of malware and the
growth of malicious botnets.
A more accurate public health metaphor might be obesity. Just as there
are companies in the U.S. that benefit
from the production and consumption of excess calories, while others
make money treating the medical conditions that result, there are companies in the U.S. that benefit from poor
security practices, while others are
benefiting from mitigating the resulting problems.
Preventing security snafus is difficult and frequently thankless. It is
commonly reported that chief security
officers are denied resources by management because they cannot quantify the risk their organizations face
or how the requested expenditures
will improve the security posture.
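One standard, if admittedly crude, way to attempt such quantification is annualized loss expectancy (ALE), which multiplies the expected cost of a single incident by its expected yearly frequency. The figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Annualized loss expectancy (ALE), a common but crude risk metric:
#   ALE = SLE (single loss expectancy) * ARO (annualized rate of occurrence)
# All inputs here are hypothetical illustration values.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """asset_value: dollar value of the asset at risk.
    exposure_factor: fraction of the asset lost per incident (0..1).
    annual_rate: expected incidents per year."""
    sle = asset_value * exposure_factor  # expected loss per incident
    return sle * annual_rate

# A breach costing 40% of a $500,000 asset, expected once every two years:
ale = annualized_loss_expectancy(500_000, 0.40, 0.5)  # -> 100000.0
```

The difficulty, of course, is that none of the three inputs is knowable with any precision, which is precisely why such estimates rarely persuade management.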
We would like to demonstrate that security has a significant return on investment, but security is frequently
just a cost. Chief security officers who
deploy technology are sometimes criticized for wasting money when new
systems are purchased and no attack
materializes. For senior managers, the
risk to one’s career of being innovative
is frequently higher than the risk of
maintaining the same poor practices
of one’s peers.
The isolation fallacy
One of the simplest solutions proposed
for the cybersecurity problem is to run
systems in secure enclaves that are disconnected from the Internet. While the
idea may sound attractive, execution is
impossible in practice.
Even a so-called “stand-alone computer” has a bidirectional connection
to the Internet. All of the software on
these machines is typically downloaded from the Internet (or from media
created on machines that were connected to the Internet). The documents
produced on stand-alone machines are
either burned to DVD or printed—
after which they are often scanned and
sent by email or fax to their final destination. A completely isolated system
would have very limited utility.
Just as all computers are connected,
so too are all humans connected to
computers. Human activities as disparate as genetic engineering and subsistence farming rely on computers and
communications systems to manipulate data and send it vast distances. An
attacker can cause a lot of damage by
modifying a message, whether the data
is genetic code or a coded SMS
message. Millions of people live downstream from dams with floodgates that
are controlled by computers.
Over the past 30 years security researchers have developed a toolbox
of techniques for mitigating many
kinds of cyber attacks. Those techniques include workable public key
cryptography (RSA with certificates to
distribute public keys); fast symmetric cryptography (AES); fast public key
cryptography (elliptic curves); easy-to-use cryptography (SSL/TLS); sandboxing (Java, C#, and virtualization);
firewalls; BAN logic; and fuzzing.
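The "easy-to-use cryptography" point is visible in how little code a modern standard library needs to set up an authenticated TLS client. A minimal Python sketch (no network connection is made; the hostname is illustrative):

```python
import ssl

# Python's standard library enables the safe TLS defaults automatically:
# certificate verification and hostname checking are both on.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Wrapping an existing TCP socket for a server would then be one call:
#   tls_sock = ctx.wrap_socket(sock, server_hostname="example.org")
```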
These breakthroughs have resulted in
countless papers, successful tenure
cases, billions of dollars in created
wealth, and several Turing awards. In
spite of all this progress, cyberspace is
not secure.
Some have argued that because
today’s cyber infrastructure was designed without attention to security,
the proper solution is redesign. Such
proposals, when realized, frequently
result in systems having the same
kinds of problems we experience today. For example, some have proposed
adding a kind of “authentication
layer” to the Internet.3 Such a layer
would increase the value of stolen
credentials, proxy-based attacks, and
implanted malware—problems that
already bedevil today’s authentication
layers.
We frequently discover that what
was once regarded as a breakthrough
security technology is really nothing
more than an incremental advance.
For example, considerable effort was
expended over the past decade to deploy non-executable stacks and address space layout randomization on
consumer operating systems. As a result, Microsoft’s 64-bit Windows 7 is
not vulnerable to much of the malware
that can infect 32-bit Windows XP systems. Yet even without an executable
stack, Windows 7 applications can still
fall victim to so-called “return-oriented
programming,”5 in which the attacker’s
malicious code is created from the exploited program and a series of specially constructed stack frames, each
frame executing a few instructions in
the program before “returning” to the
next sequence.
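The mechanism can be sketched with a toy simulation. Nothing below exploits anything real: the "gadgets" are ordinary functions standing in for short instruction sequences already present in the victim program, and the attacker-controlled "stack" is just a list that determines which gadget runs next:

```python
# Toy model of return-oriented programming: the attacker injects no code,
# only a sequence of "return addresses" (here, function references) that
# chains together code already in the program.

def load_const(state, value):   # gadget: put a constant in a register
    state["reg"] = value

def add_reg(state, value):      # gadget: add a value to the register
    state["reg"] += value

def store_reg(state, key):      # gadget: write the register to "memory"
    state["mem"][key] = state["reg"]

def run(stack):
    """Execute each gadget in turn; control flow lives entirely in the
    attacker-supplied stack contents, not in any injected code."""
    state = {"reg": 0, "mem": {}}
    while stack:
        gadget, args = stack.pop(0)
        gadget(state, *args)
    return state

# The "attacker" composes new behavior purely from existing gadgets:
chain = [(load_const, (40,)), (add_reg, (2,)), (store_reg, ("x",))]
final = run(chain)              # final["mem"]["x"] == 42
```

This is why a non-executable stack alone does not stop the attack: every instruction that runs was already legitimate program code.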
The fault is both in our
bytes, and in our selves
While it is tempting to focus on technical factors impacting the cybersecurity
problem, I believe nontechnical fac-