usually leads first to paralysis and then
to weak security, which no one complains about until there is a crisis.
Administrators want to prevent
obvious security breaches and to avoid
blame if something does go wrong.
Organizations want to manage their
risk sensibly, but because they don’t
know the important parameters, they
can’t make good decisions or explain
their policies to users, and they tend to oscillate between too much security and
too little. They don’t measure the cost
of the time users spend on security and
therefore don’t demand usable security. Vendors thus have no incentive
to supply it; a vendor’s main goal is to
avoid bad publicity.
Operationally, security is about
policy and isolation. Policy is the statement of what behavior is allowed: for
example, only particular users can
approve expense reports for their direct reports or only certain programs
should run. Isolation ensures the policy is always applied. Usability is pretty
bad for both.
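As a purely illustrative sketch of the expense-report rule just mentioned, the check below encodes "only a manager may approve an expense report for a direct report." The names and the org-chart dictionary are hypothetical, not taken from any real system.

```python
# Hypothetical sketch of one policy rule: only a manager may approve
# an expense report for one of their direct reports.

DIRECT_REPORTS = {
    "alice": {"bob", "carol"},   # alice manages bob and carol
    "dave":  {"erin"},
}

def may_approve(approver: str, report_owner: str) -> bool:
    """Policy decision: is this approval allowed?"""
    return report_owner in DIRECT_REPORTS.get(approver, set())

assert may_approve("alice", "bob")        # bob reports to alice
assert not may_approve("bob", "alice")    # alice is not bob's report
```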
Policy
Policy is what users and administrators
see and set. The main reason we don’t
have usable security is that users don’t
have a model of security they can understand. Without such a model, the
users’ view of security is just a matter of
learning which buttons to push in some
annoying dialog boxes, and it’s not surprising they don’t take it seriously and
can’t remember what to do. The most
common user model today is “Say OK
to any question about security.”
What do we want from a user model?
˲ It has to be simple (with room for elaboration on demand).
˲ It has to minimize hassle for the user, at least most of the time.
˲ It has to be true (given some assumptions). It is just as real as the system’s code; terms like “user illusion” make as much sense as saying that bytes in RAM are an illusion over the reality of electrons in silicon.
˲ It does not have to reflect the implementation directly, although it does have to map to things the code can deal with.
An example of a successful user
model is the desktop, folders, and files
of today’s client operating systems.
Although there is no formal standard
for this model, it is clear enough that
users can easily move among PC, Macintosh, and Unix systems.
The standard technical model for security is the access control model shown
in the figure, in which isolation ensures
there is no way to get to objects except
through channels guarded by policy,
which decides what things agents (
principals) are allowed to do with objects
(resources). Authentication identifies
the principal, authorization protects
the resource, and auditing records what
happens; these are the gold standard for
security.³ Recovery is not shown; it fixes
damaged data by some kind of undo,
such as restoring an old version.
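A toy guard in this model is easy to sketch. The code below (tokens, policy table, and log are all stand-ins invented for illustration) authenticates the principal, authorizes the request against policy, and appends an audit record either way; isolation would mean this function is the only way to reach the object.

```python
import time

# Toy guard following the access control model: authenticate the
# principal, authorize the request against policy, and record the
# outcome in an audit log. All names here are hypothetical.

TOKENS = {"secret-123": "alice"}                  # credential -> principal
POLICY = {("alice", "read", "payroll.xlsx")}      # (who, what, which)
AUDIT_LOG = []

def guard(token: str, action: str, resource: str) -> bool:
    principal = TOKENS.get(token)                         # authentication
    allowed = (principal, action, resource) in POLICY     # authorization
    AUDIT_LOG.append((time.time(), principal, action, resource, allowed))  # auditing
    return allowed

assert guard("secret-123", "read", "payroll.xlsx")        # permitted
assert not guard("secret-123", "write", "payroll.xlsx")   # denied, but still logged
```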
In most systems the implementation follows this model closely, but it
is not very useful for ordinary people:
they take isolation for granted, and
they don’t think in terms of objects or
resources. We need models that are
good for users, and that can be compiled into access control policy on the
underlying objects.

[Figure. The standard technical security access control model: a request from an agent (principal) crosses the isolation boundary only through a guard, which applies authentication, authorization, and an audit log under the control of policy before the request reaches the object (resource).]
A user model for security deals with
policy and history. It has a vocabulary of
objects and actions (nouns and verbs)
for talking about what happens. History is what did happen; it’s needed
for recovering from past problems
and learning how to prevent future
ones. Policy is what should happen, in
the form of some general rules plus
a few exceptions. The policy must be
small enough that you can easily look
at all of it.
Today, we have no adequate user
models for security and no clear idea
of how to get them. There’s not even
agreement on whether we can elicit
models from what users already know,
or need to invent and promote new
ones. It will take the combined efforts
of security experts, economists, and
cognitive scientists to make progress.
Here are a few tentative examples of
what might work.
You need to know who can do what
to which things. “Who” is a particular
person, a group of people like your Facebook friends, anyone with some attribute like “over 13,” or any program
with some attribute like “approved by
Microsoft IT.” “What” is an action like
read or write. “Which” is everything in
a particular place like your public folder, or everything labeled medical stuff
(implying that data can be labeled). An
administrator also needs declarative
policy like, “Any account’s owner can
transfer cash out.”
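One way to picture how such a who/what/which vocabulary, stated as general rules plus a few exceptions, could be compiled into access decisions is the sketch below. The groups, attributes, and rules are invented for illustration; the point is only that “who” may be a person, a group, or an attribute, and that a short rule list is enough to answer “who can do what to which things.”

```python
# Hypothetical who/what/which policy: general rules plus a few exceptions.

GROUPS = {"facebook-friends": {"bob", "carol"}}
ATTRIBUTES = {"over-13": {"bob", "dana"}}

RULES = [          # (who, what, which) -- "who" may name a group or attribute
    ("facebook-friends", "read", "public-folder"),
    ("over-13", "read", "game-scores"),
    ("alice", "write", "medical-stuff"),
]
EXCEPTIONS = [     # denials that override the general rules
    ("carol", "read", "public-folder"),
]

def matches(who: str, user: str) -> bool:
    """Does 'who' (a person, group, or attribute) cover this user?"""
    return (user == who
            or user in GROUPS.get(who, set())
            or user in ATTRIBUTES.get(who, set()))

def can(user: str, action: str, thing: str) -> bool:
    if any(matches(w, user) and a == action and t == thing
           for w, a, t in EXCEPTIONS):
        return False
    return any(matches(w, user) and a == action and t == thing
               for w, a, t in RULES)

assert can("bob", "read", "public-folder")        # allowed via the friends group
assert not can("carol", "read", "public-folder")  # blocked by an exception
```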
A time machine lets you recover
from damage to your data: you can see
what the state was at midnight on any
previous day. You can’t change the past,
but you can copy things from there to
the current state just as you can copy
things from a backup disk.
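A minimal sketch of the “copy from the past” operation, assuming snapshots are simply kept as read-only dated directories (the paths and layout are made up for this example): you read from the snapshot and write into the current state, and the snapshot itself is never modified.

```python
import shutil
from pathlib import Path

# Hypothetical layout: nightly snapshots live under snapshots/YYYY-MM-DD/,
# the live data under current/. The past is never changed; we only copy
# forward, just as from a backup disk.

def restore(snapshot_date: str, relative_path: str) -> None:
    """Copy one file from a past snapshot into the current state."""
    src = Path("snapshots") / snapshot_date / relative_path
    dst = Path("current") / relative_path
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)   # copy2 preserves timestamps

# Example: restore("2009-11-01", "thesis/chapter2.doc")
```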
Isolation
Perfect isolation ensures that the only
way for an input to reach an object is
through a channel controlled by policy.
Isolation fails when an input has an effect that is not controlled by policy; this
is a bug. Some common bugs today
are buffer overruns, cross-site scripting, and SQL code injection. Executable inputs like machine instructions
or JavaScript are obviously dangerous,
but modern HTML is so complex and
expressive that there are many ways