so an attack that succeeds at one
replica is not guaranteed to succeed
at another. Other randomly selected
transformations would work, too.
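As a toy illustration (not any production ASLR implementation), the following Python sketch models replicas whose code loads at independently randomized base addresses. An exploit that hardcodes the gadget address learned from one replica succeeds there but almost certainly fails at replicas that drew different layouts; the offsets and address range are invented for illustration:

```python
import random

# Toy model of address space layout randomization (ASLR) across replicas.
# Each replica loads the same code at a randomly chosen base address, so an
# exploit that hardcodes one absolute address works only where layouts match.

GADGET_OFFSET = 0x1234  # offset of a useful gadget within the code segment

def make_replica():
    """Return the absolute gadget address for a freshly randomized replica."""
    base = random.randrange(0, 2**16) << 12  # page-aligned random base
    return base + GADGET_OFFSET

def attack(replica_gadget_addr, hardcoded_addr):
    """The attack succeeds only if the attacker's guess matches this replica."""
    return replica_gadget_addr == hardcoded_addr

# The attacker learns the layout of one replica...
victim = make_replica()
hardcoded = victim
assert attack(victim, hardcoded)  # succeeds at the replica it was built for

# ...but other replicas drew independent layouts, so the same exploit
# almost certainly fails there.
others = [make_replica() for _ in range(5)]
print(sum(attack(r, hardcoded) for r in others), "of 5 other replicas compromised")
```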
Program rewriters are another
means for creating synthesized bases. The rewriter takes a software component C as its input, adds checks or
performs analysis, and outputs a version C′ that is robust against some
class of attacks, because C′ is incapable of certain behaviors. If we trust
the rewriter, then we have a basis for
enhanced trust in C′. But even if we
do not trust the rewriter, we could
still have a basis for enhanced trust in
that rewriter’s output by employing a
variant of proof-carrying code.2 With
proof-carrying code, the rewriter also
outputs a proof that C′ is a correctly
modified version of C; certifiers for
such proofs can be simple and small
programs, independent of how large
and complicated the rewriter is.
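The shape of this arrangement can be sketched in a few lines of Python; the inserted guard, the certificate format, and the checker below are invented for illustration and are drastically simpler than any real proof-carrying-code system:

```python
# Toy sketch (not a real PCC system): a "rewriter" hardens a component C by
# inserting a bounds check, and also emits a certificate that a small,
# simple checker can validate independently of the rewriter itself.

def component(buf, i):
    return buf[i]  # original C: no check, so an out-of-range i misbehaves

def rewrite(n):
    """Return a hardened version C' plus a certificate describing the check."""
    def hardened(buf, i):
        if not (0 <= i < n):        # inserted guard
            raise IndexError("rejected out-of-range access")
        return buf[i]
    certificate = {"guard": "0 <= i < n", "n": n}
    return hardened, certificate

def check_certificate(cert, n):
    """A far simpler program than the rewriter: confirm the claimed guard."""
    return cert.get("guard") == "0 <= i < n" and cert.get("n") == n

buf = [10, 20, 30]
hardened, cert = rewrite(len(buf))
assert check_certificate(cert, len(buf))  # trust rests on this small checker
assert hardened(buf, 2) == 30             # in-range accesses still work
```

Even if the rewriter were buggy or malicious, a consumer need only trust the few lines of `check_certificate`, not the rewriter itself.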
An engineering discipline should provide means to analyze and construct
artifacts that will satisfy properties
of interest. This column proposed a
foundation for an engineering discipline to support the design of secure
systems. It suggests that system design be driven by desires to change the
assumptions that underlie trust. Security mechanisms change where we
must place our trust; analysis allows
us to weaken assumptions.
1. Kocher, P. et al. Spectre attacks: Exploiting speculative execution.
2. Necula, G.C. Proof-carrying code. In Proceedings of the
24th ACM SIGPLAN-SIGACT Symposium on Principles
of Programming Languages (Paris, France, 1997), 106–119.
3. Resilient Military Systems and the Advanced Cyber
Threat. Defense Science Board Task Force Report.
Fred B. Schneider (email@example.com) is Samuel B.
Eckert Professor of Computer Science and chair of the
computer science department at Cornell University, Ithaca, NY, USA.
Supported in part by AFOSR grant F9550-16-0250
and NSF grant 1642120. The views and conclusions
contained herein are those of the author and should not be
interpreted as necessarily representing the official policies
or endorsements, either expressed or implied, of these
organizations or the U.S. government.
Schneider is grateful to Susanne Hambrusch, John King,
Carl Landwehr, Steve Lipner, and Greg Morrisett for their
very helpful feedback on early drafts of this column.
Copyright held by author.
set of the inputs to certain interfaces
would be checked. An assumption is
thus being introduced—that the right
set of inputs is being checked.
˲ Formal Verification. Software is
amenable to logical analysis, either
manual or automated. Today’s state of
the art for automated analysis allows
certain simple properties to be checked
automatically for large components
and allows rich classes of properties to
be verified by hand for small components. Research in formal verification
has made steady progress on widening the class of properties that can be
checked automatically, as well as on
increasing the size and complexity that
can be handled.
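To make "simple properties checked automatically" concrete, here is a minimal, hypothetical sketch: exhaustive exploration of a tiny two-counter transition system to confirm an invariant over every reachable state. Real verifiers scale this same idea with symbolic and deductive techniques:

```python
# Minimal sketch of automated analysis: exhaustively check an invariant over
# all reachable states of a toy transition system (two modulo-4 counters).

def step(state):
    """Yield the successor states of a given state."""
    x, y = state
    yield ((x + 1) % 4, y)
    yield (x, (y + 1) % 4)

def invariant(state):
    x, y = state
    return 0 <= x < 4 and 0 <= y < 4

def check(initial):
    """Explore all reachable states; report a counterexample if one exists."""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        if not invariant(s):
            return False, s  # counterexample found
        seen.add(s)
        frontier.extend(step(s))
    return True, None

ok, cex = check((0, 0))
print("invariant holds:", ok)
```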
An analytic basis for trust can be conveyed to some consumer by sharing
the method and the results the method
produced. When testing is employed,
the artifact, set of test cases, and expected outputs are shared. For undertaking other forms of automated analysis, we would employ an analyzer that
not only outputs a conclusion (“program type checked”) but also generates
and provides a transcript of the inferences that led to this conclusion—in
effect, the analyzer produces a proof
of the conclusion for the given artifact. Proof checking is, by definition,
a linear-time process in the size of the
proof, and proof checkers are far simpler programs than proof generators
(that is, analyzers). So, without duplicating work, a consumer can check the
soundness of a manually or automatically produced proof.
Synthesized Basis for Trust. Trust
in the whole here derives from the way
its components are combined—a form
of divide and conquer, perhaps involving trust in certain of the components
or in the glue used to combine them.
Most of the mechanisms studied in a
computer security class are intended
for supporting a synthesized basis for
trust. OS kernels and hypervisors enforce isolation, reference monitors and
firewalls restrict the set of requests a
component will receive, and ignorance of a
secret can impose unreasonable costs
on an outsider attempting to perform
certain operations.
With synthesized bases for trust, we place trust in some security mechanisms. These mechanisms ensure some component executes in a more benign setting, so the component can be designed to operate in an environment characterized by stronger assumptions than we are prepared to make about the environment in which the synthesis (mechanism plus component) is deployed.
Assumptions about independence
are sometimes involved in establishing a synthesized basis for trust.
With defense in depth, we aspire for a
combination of defenses to be more
secure than any of its elements. Two-factor authentication for withdrawing cash at an ATM is an example; a
bankcard (something you have) and a
PIN (something you know) both must
be presented, so stealing a wallet containing the card alone does not benefit the attacker.a Defense in depth
improves security to the extent that its
elements do not share vulnerabilities.
So an independence assumption is involved—we assume that success in attacking one element does not increase the chances of success in attacking another.
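A back-of-envelope calculation shows what is at stake; the 1% per-defense breach probability is purely illustrative:

```python
# Why defense in depth relies on independence: if each of two defenses is
# breached with probability p, an attacker beats both with probability p**2
# only when the breaches are independent. A shared vulnerability (perfect
# correlation) collapses the combined defense back to a single p.

p = 0.01  # assumed per-defense breach probability (illustrative)

independent = p * p   # must defeat each defense separately
shared_vuln = p       # one flaw defeats both at once

print(f"independent defenses breached: {independent:.4%}")
print(f"fully correlated defenses breached: {shared_vuln:.2%}")
```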
Independence does not hold in
replicated systems if each replica
runs on the same hardware and executes the same software; the replicas
all will have the same vulnerabilities
and thus be subject to the same attacks. However, we can create some
measure of independence across
replicas by using address space layout randomization, which causes
different replicas of the software to
employ different memory layouts,
a Kidnapping the person usually gets the wallet,
too. So the two mechanisms here have a vulnerability in common.