any other. One of the fundamental
ideas baked into the Internet protocols
is anonymity. This presents immense
problems for local networks that want
to be secure because they cannot easily validate whether requested connections are from authorized members.
What technologies are available to define secure subnets, abandoning the
idea of flatness and anonymity?
A: The Internet is flat in the sense
that the cost and time of communication between two points approximates
that of any two points chosen at random. Enterprise networks are often,
not to say usually, designed and intended to be as flat as possible.
It is time to abandon the flat network. Flat networks lower the cost of attack against a network of systems or applications; successfully attacking a single node gains access to the entire network. Secure and trusted communication must now trump ease of any-to-any communication.
It is time for end-to-end encryption for all applications. Think TLS, VPNs, VLANs, and physically segmented networks. Encrypted pathways must reach all the way to applications or services and not stop at network perimeters or operating systems. Software-defined networking puts this within the budget of most enterprises.
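As a minimal sketch of what "all the way to the application" means in practice, consider a TLS client built with nothing but Python's standard ssl module; the host, port, and payload here are placeholders, and a real deployment would add certificate pinning or mutual TLS.

    import socket
    import ssl

    # Hypothetical internal service; the endpoint is a placeholder.
    HOST, PORT = "app.example.internal", 8443

    # Verify the server certificate against the system trust store
    # and refuse anything weaker than TLS 1.2.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((HOST, PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            # The encrypted channel terminates in the application
            # itself, not at a perimeter device or the OS.
            tls.sendall(b"PING\n")
            print(tls.recv(1024))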
Q: Most file systems use the old Unix
convention of regulating access by the
read-write-execute bits. Why is this a
security problem, and what would be a
better practice for controlling access?
A: It is not so much a question of the controls provided by the file system as of the permissive default policy chosen by management. It is a problem because
it makes us vulnerable to data leakage, system compromise, extortion,
ransomware, and sabotage. It places
convenience and openness ahead of
security and accountability. It reduces
the cost of attack to that of duping an
otherwise unprivileged user into clicking on a bait object.
It is time to abandon this convenient but dangerously permissive default access-control rule in favor of the more restrictive "read/execute-only" or, even better, "least privilege." These rules are more expensive to administer but they are more effective; they raise the cost of attack and shrink the population of people who can do harm. Our current strategies of convenience over security and "ship low-quality early and patch late" are proving to be not just ineffective and inefficient, but dangerous. They are more expensive in maintenance and breaches than we could ever have imagined.
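To make the contrast concrete, a sketch assuming a POSIX file system (the path is hypothetical): moving from the permissive default to read/execute-only is a one-line change in mode bits.

    import os
    import stat

    # Hypothetical shared tool that users should run but never modify.
    path = "/opt/tools/report.sh"

    # Read/execute-only for owner, group, and world (mode 0o555).
    # A user duped into running bait code cannot then overwrite
    # this file, because no one holds the write bit.
    os.chmod(path, stat.S_IRUSR | stat.S_IXUSR |
                   stat.S_IRGRP | stat.S_IXGRP |
                   stat.S_IROTH | stat.S_IXOTH)

Least privilege goes further still, granting even the read and execute bits only to the users who actually need them.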
Q: What about malware? When it gets on your computer it can do all sorts of harm, such as stealing your personal data or, in the worst case, installing ransomware. What effective defenses are there against these attacks?
A: The most efficient measures are
those that operate early, preventing the
malware from being installed and executed in the first place. This includes
familiar antivirus programs as well
as the restrictive access control rules
mentioned earlier. It may include explicitly permitting only intended code to run (so-called "whitelisting"). It will
include process-to-process isolation,
which prevents malicious code from
spreading; isolation can be implemented at the operating system layer, as in, for example, Apple's iOS, or, failing that, by running the untrusted processes in separate hardware boxes. We should not be running vulnerable applications such as email and browsing on porous operating systems, such as Windows and Linux, alongside sensitive enterprise applications.
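A sketch of the whitelisting idea mentioned above, assuming policy is simply a set of approved SHA-256 digests; the digest shown and the policy format are illustrative, not any product's real mechanism.

    import hashlib
    import sys
    from pathlib import Path

    # Hypothetical allowlist: SHA-256 digests of the only programs
    # we intend to run (the value below is illustrative).
    ALLOWED = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def permitted(path: str) -> bool:
        # Compute the candidate's digest and check it against policy.
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        return digest in ALLOWED

    if __name__ == "__main__":
        target = sys.argv[1]
        if not permitted(target):
            sys.exit(f"blocked: {target} is not on the allowlist")
        print(f"allowed: {target}")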
However, since prevention will never be much more than 80% effective, we should also be monitoring for indicators of compromise: the evidence that any code, malicious or otherwise, must leave of its presence.
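One such indicator is an unexplained change to a file that should be stable. A sketch, assuming a previously recorded baseline of digests; the watched paths and baseline file are placeholders.

    import hashlib
    import json
    from pathlib import Path

    BASELINE = Path("baseline.json")   # hypothetical snapshot file
    WATCHED = ["/etc/passwd", "/usr/local/bin/report.sh"]

    def digest(path: str) -> str:
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    if BASELINE.exists():
        # Compare current state against the recorded baseline and
        # report any file whose contents have changed.
        old = json.loads(BASELINE.read_text())
        for p in WATCHED:
            if digest(p) != old.get(p):
                print(f"indicator of compromise: {p} has changed")
    else:
        # First run: record the trusted baseline.
        BASELINE.write_text(json.dumps({p: digest(p) for p in WATCHED}))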
Oh, I almost forgot. We must monitor traffic flows. Malware generates
anomalous and unexpected traffic.
Automated logging and monitoring of
the origin and destination of all traffic
moves from "nice to do" to "must do."
While effective logging generates large
quantities of data, there is software to
help in the efficient organization and
analysis of this data.
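As an illustration of the kind of automated analysis meant here, assume flow records have already been collected; the record format, addresses, and threshold are all invented for the example.

    from collections import Counter

    # Hypothetical flow records: (source, destination, bytes sent).
    flows = [
        ("10.0.1.12", "10.0.2.40", 1_200),
        ("10.0.1.12", "203.0.113.9", 48_000_000),  # unexpected egress
        ("10.0.1.15", "10.0.2.40", 900),
    ]

    # Destinations the enterprise expects its hosts to talk to.
    EXPECTED = {"10.0.2.40"}

    volume = Counter()
    for src, dst, nbytes in flows:
        volume[(src, dst)] += nbytes
        if dst not in EXPECTED:
            print(f"anomalous destination: {src} -> {dst}")

    # Large transfers to any destination also merit a look.
    for (src, dst), nbytes in volume.items():
        if nbytes > 10_000_000:
            print(f"high-volume flow: {src} -> {dst} ({nbytes} bytes)")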
Q: Early in the development of operating systems we looked for solutions to
the problem of running untrusted software on our computers. The principle
of confinement was very important.
The idea was to execute the program in
a restricted memory where it could not
access any data other than that which
it asked for and which you approved.
The basic von Neumann architecture
did not have anything built in that
would allow confinement. Modern operating systems such as iOS and Android
include confinement functions called
“sandboxes” to protect users from untrusted software downloaded from the
Internet. Is this a productive direction
for OS designers and chip makers?
A: The brilliance of the von Neumann architecture was that it used the
same storage for both procedures and
data. While this was convenient and
efficient, it is at the root of many of
our current security problems. It permits procedures to be contaminated
by their data and by other procedures,
notably malware. Moreover, in a world
in which one can put two terabytes of
storage in one’s pocket for less than
$100, the problem that von Neumann
set out to solve—efficiently using storage—no longer exists.
In the modern world of ubiquitous
and sensitive applications running in
a single environment, with organized
criminals and hostile nation-states,
convenience and efficiency can no longer be allowed to trump security. It is
time to at least consider abandoning
the open and flexible von Neumann architecture for closed, application-only
operating environments, like Apple’s
iOS or the IBM iSeries, with strongly
typed objects and APIs, process-to-process isolation, and a trusted computing base (TCB) protected from other
processes. These changes must be
made in the architecture and operating
systems. There is nothing the iOS user