to trick a browser into running code,
and widely used programs with simple
inputs like JPEG have had buffer overruns. A modern client OS, together
with the many applications that run on
it, is bound to have security bugs.
Users can’t evaluate these dangers.
The only sure way to avoid the effects
of dangerous inputs is to reject them.
A computer that is not connected to
any network rejects all inputs, and is
probably secure enough for most purposes. Unfortunately, it isn’t very useful. A more plausible approach has two
components:
• Divide inputs into safe ones, handled by software that you trust to be bug-free (that is, to enforce security policy), and dangerous ones, for which you lack such confidence. Vanilla ANSI text files are probably safe and unfiltered HTML is dangerous; cases in between require judgments that balance risk against inconvenience.
• Accept dangerous inputs only from sources that are accountable enough, that is, that can be punished if they misbehave. Then if the input turns out to be harmful, you can take appropriate revenge on its source.
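As a concrete illustration, here is a minimal Python sketch of this two-part policy; the content types, the set of accountable sources, and the accept function are assumptions made up for this example, not a prescribed design.

SAFE_TYPES = {"text/plain"}          # handled by software trusted to enforce policy
DANGEROUS_TYPES = {"text/html"}      # handled by software you do not fully trust

# Hypothetical registry of sources that are accountable enough to punish.
ACCOUNTABLE_SOURCES = {"employer.example.com", "bank.example.com"}

def accept(source: str, content_type: str) -> bool:
    """Decide whether to process an input at all."""
    if content_type in SAFE_TYPES:
        return True                  # safe inputs need no accountability
    if content_type in DANGEROUS_TYPES:
        return source in ACCOUNTABLE_SOURCES
    # Cases in between call for a judgment balancing risk against
    # inconvenience; this sketch conservatively treats them as dangerous.
    return source in ACCOUNTABLE_SOURCES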
Accountability
People think that security in the real
world is based on locks. In fact, real-world security depends mainly on deterrence, and hence on the possibility
of punishment. The reason your house
is not burgled is not that the burglar
can’t get through the lock on the front
door; rather, it’s that the chance of
getting caught and sent to jail, while
small, is large enough to make burglary
uneconomic.
It is difficult to deter attacks on a
computer connected to the Internet because it is difficult to find the bad guys.
The way to fix this is to communicate
only with parties that are accountable,
that you can punish. There are many different punishments: money fines, ostracism from some community, firing, jail,
and other options. Often it is enough if
you can undo an action; this is the financial system’s main tool for security.
Some punishments require identifying the responsible party in the physical
world, but others do not. For example,
to deter spam, reject email unless it is
signed by someone you know or comes
with “optional postage” in the form
of a link certified by a third party you
trust, such as Amazon or the U.S. Postal
Service; if you click the link, the sender
contributes a dollar to a charity.
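A rough sketch of that rule, assuming the signature check has already run, might look like the following; the Message fields, the sender list, and the certifier names are illustrative, not real services or APIs.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    sender: str
    signature_verified: bool           # the cryptographic signature checked out
    postage_certifier: Optional[str]   # third party certifying the postage link, if any

KNOWN_SENDERS = {"alice@example.org", "bob@example.org"}
TRUSTED_CERTIFIERS = {"amazon.example", "usps.example"}   # third parties you trust

def accept_mail(msg: Message) -> bool:
    # Signed by someone you know: accept.
    if msg.signature_verified and msg.sender in KNOWN_SENDERS:
        return True
    # Otherwise require "optional postage" certified by a trusted third party;
    # clicking the link would cost the sender (say, a dollar to charity).
    return msg.postage_certifier in TRUSTED_CERTIFIERS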
The choice of safe inputs and the
choice of accountable sources are both
made by your system, not by any centralized authority. These choices will
often depend on information from
third parties about identity, reputation,
and so forth, but which parties to trust
is also your choice. All trust is local.
To be practical, accountability needs
an ecosystem that makes it easy for
senders to become accountable and for
receivers to demand it. If there are just two parties, they can get to know each other in person and exchange signing keys. Because this doesn't scale, we
also need third parties that can certify
identities or attributes, as they do today for cryptographic keys. This need
not hurt anonymity unduly, since the
third parties can preserve it except
when there is trouble, or accept bonds
posted in anonymous cash.
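One way to picture "all trust is local" is a check that consults only keys you exchanged yourself and certifiers you chose to trust; the claim format and example values below are hypothetical.

def identity_accepted(claim: dict, my_known_keys: set, my_trusted_certifiers: set) -> bool:
    # Two parties who met in person and exchanged signing keys.
    if claim.get("key") in my_known_keys:
        return True
    # At scale, rely on a third party that certifies identities or attributes,
    # but only one that you decided to trust; there is no central authority.
    return claim.get("certified_by") in my_trusted_certifiers

# Example of one receiver's local choices.
ok = identity_accepted(
    {"key": "key-123", "certified_by": "certifier.example"},
    my_known_keys={"key-456"},
    my_trusted_certifiers={"certifier.example"},
)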
This accountability scheme is a form of access control: you accept input from me only if I am accountable. There is a big practical difference from ordinary access control, though, because accountability is for punishment or undo. Auditing is crucial to establish a chain of evidence, but very permissive access control is OK because you can deal with misbehavior after the fact rather than preventing it up front.
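A sketch of such permissive, audited access control might look like this; the log format and the perform/remediate helpers are assumptions, not an existing API.

import time

AUDIT_LOG = []   # in practice, tamper-evident storage forming a chain of evidence

def perform(principal: str, action: str, undo) -> None:
    """Allow the action, recording who did it and how to reverse it."""
    AUDIT_LOG.append({"when": time.time(), "who": principal,
                      "what": action, "undo": undo})
    # ...carry out the action itself here...

def remediate(principal: str) -> None:
    """After-the-fact response: undo everything the misbehaving party did."""
    for entry in reversed(AUDIT_LOG):
        if entry["who"] == principal:
            entry["undo"]()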
Freedom
The obvious problem with accountability is that you often want to communicate with parties you don’t know
much about, such as unknown vendors or gambling sites. To reconcile
accountability with the freedom to go
anywhere on the Internet, you need
two (or more) separate machines: a
green machine that demands accountability, and a red one that does not.
On the green machine you keep important things, such as personal, family, and work data, backup files, and so forth. It needs automated management
to handle the details of accountability for software and Web sites, but you
choose the manager and decide how
high to set the bar: like your house, or
like a bank vault. Of course the green
machine is not perfectly secure—no
practical machine can be—but it is far
more secure than what you have today.
On the red machine you live wild and
free. You don’t put anything there that
you really care about keeping secret or
really don’t want to lose. If anything
goes wrong, you reset the red machine
to some known state.
This scheme has significant unsolved problems. Virtual machines can
keep green isolated from red, though
there are details to work out. However,
we don’t know how to give the user
some control over the flow of information between green and red without
losing too much security.
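A minimal sketch of the green/red split, under the assumption that some accountability test and some snapshot/restore facility exist (both hypothetical), could look like this:

def choose_machine(destination: str, is_accountable) -> str:
    # Accountable parties may reach the machine that holds your valuables;
    # everything else is confined to the expendable machine.
    return "green" if is_accountable(destination) else "red"

def reset_red(restore_snapshot) -> None:
    # If anything goes wrong on red, roll it back to a known state.
    restore_snapshot("red-known-good")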
Conclusion
Things are so bad for usable security
that we need to give up on perfection
and focus on essentials. The root cause
of the problem is economics: we don’t
know the costs either of getting security or of not having it, so users quite
rationally don’t care much about it.
Therefore, vendors have no incentive
to make security usable.
To fix this we need to measure the
cost of security, and especially the
time users spend on it. We need simple models of security that users can
understand. To make systems trustworthy we need accountability, and
to preserve freedom we need separate
green and red machines that protect
things you really care about from the
wild Internet.
Butler Lampson (butler.Lampson@microsoft.com) is a Technical Fellow at Microsoft Research and is an ACM Fellow.