Deterrence depends on being able to attribute acts to individuals or institutions and then punish the offenders.
- Attribution of attacks delivered over a network is difficult, because packets are relayed through multiple intermediaries and, therefore, purported sources can be spoofed or rewritten along the way. Attribution thus requires time-consuming analysis of information beyond what might be available from network traffic.
- Punishment can be problematic because attackers can work outside the jurisdiction of the government where their target is located. Limiting or monitoring all traffic destined to hosts within some government's jurisdiction can interfere with societal values such as openness and access to information. Such monitoring is also infeasible, given today's network architecture.
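The spoofing point above can be made concrete: the source address in an IPv4 header is just a field the sender fills in, and nothing at that layer authenticates it. The sketch below (illustrative only; the addresses and field values are arbitrary choices, not from the text) builds a minimal IPv4 header with a sender-chosen "source":

```python
import struct

def ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal IPv4 header. The source-address field is filled in
    by whoever builds the packet -- nothing at this layer verifies it."""
    version_ihl = (4 << 4) | 5            # IPv4, header length 5 * 32 bits
    total_len = 20 + payload_len
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                                 # type of service
        total_len,
        0x1234,                            # identification (arbitrary)
        0,                                 # flags / fragment offset
        64,                                # time to live
        17,                                # protocol = UDP
        0,                                 # checksum placeholder
        bytes(int(o) for o in src.split(".")),  # "source" -- sender-chosen
        bytes(int(o) for o in dst.split(".")),
    )
    # Standard IP header checksum: fold the one's-complement sum of the
    # 16-bit words, then complement.
    s = sum(struct.unpack("!10H", header))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return header[:10] + struct.pack("!H", ~s & 0xFFFF) + header[12:]

# A packet claiming to come from 203.0.113.7; a receiver cannot tell
# from the packet alone whether that address is genuine.
pkt = ipv4_header("203.0.113.7", "198.51.100.1")
```

A receiver that wants real attribution therefore has to correlate information outside the packet itself, which is exactly the time-consuming analysis the text describes.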
The time is ripe to be having discussions about investment and government interventions in support of cybersecurity. How much should we invest? And how should we resolve trade-offs that arise between security and (other) societal values? It will have to be a national dialogue. Whether or not computer scientists lead, they need to be involved. And just as there is unlikely to be a single magic-bullet technology for making systems secure, there is unlikely to be a magic-bullet intervention to foster the needed investments.
1. Vardi, M. Cyber insecurity and cyber libertarianism. Commun. ACM 60, 5 (May 2017), 5.
Fred B. Schneider (firstname.lastname@example.org) is Samuel B. Eckert Professor of Computer Science and chair of the computer science department at Cornell University, USA.
The impetus for this Viewpoint was a series of discussions
with Communications Senior Editor Moshe Vardi during
the two years preceding his May 2017 Communications
Editor’s Letter. Susan Landau, Lyn Millett, and
Deirdre Mulligan read an earlier version of this Viewpoint
and provided helpful and timely feedback. I am also
grateful to the two reviewers for their comments, which
resulted in this Viewpoint having a better-defined focus.
The author’s work has been supported in part by AFOSR
grant F9550-16-0250 and NSF grant 1642120. The views
and conclusions contained in this Viewpoint are those
of the author and should not be interpreted as necessarily
representing the official policies or endorsements,
either expressed or implied, of these organizations
or the U. S. government.
Copyright held by author.
The jurisdiction of any one government necessarily has a limited geographic scope. So government interventions designed to achieve goals in some geographic region (where that government has jurisdiction) must also accommodate the diversity in goals and enforcement mechanisms found in other regions.
Flawed Analogies Lead to Flawed Interventions
Long before there were computers, liability lawsuits served to incentivize the delivery of products and services that would perform as expected. Insurance was available to limit the insured's costs of (certain) harms, and the formulation and promulgation of standards facilitated decisions by insurers about eligibility for coverage. Finally, people and institutions were discouraged from malicious acts because their bad behavior would likely be detected and punished.
Computers and software comprise a class of products and services, and attackers are people and institutions. So it is tempting to expect that liability, insurance, and deterrence would suffice to incentivize investments in cybersecurity.
Liability. Rulings about liability for an artifact or service involve comparisons of observed performance with some understood basis for acceptable behaviors. That comparison is not possible today for software security, since software rarely comes with full specifications of what it should and should not do. Software developers and service providers shun providing detailed system specifications because specifications are expensive to create and could become an impediment to making changes to support deployment in new settings and to support new functionality. Having a single list that characterizes acceptable behavior for broad classes of systems (for example, operating systems or mail clients) also turns out to be problematic. First, by its nature, such a list could not rule out attacks to compromise a property that is specific only to some element in the class. Second, to the extent that such a list rules out repurposing functionality (and thereby blocks certain attacks), the list would limit opportunities for innovations (which often are implemented by repurposing functionality).
Insurance. Insurance depends for pricing on the use of data about past incidents and payouts to predict future payouts. But there is no reason to believe that past attacks and compromises to computing systems are a good predictor of future attacks or compromises. I would hope successive versions of a given software component will be more robust, but that is not guaranteed. For example, new system versions often are developed to add features, and a version that adds features might well have more vulnerabilities than its predecessor. Moreover, software deployed in a large network is running in an environment that is likely to be changing. These changes (which might not be under the control of the developer, the user, the agent issuing insurance, or even any given national government) might facilitate attacks, and that further complicates the use of historical data for predicting future payouts.
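The pricing dependence described above can be sketched in a few lines. This is a deliberately naive frequency-severity estimate (the function name, loading factor, and sample figures are illustrative assumptions, not from the text); its whole validity rests on past incident data predicting future losses, which is precisely the assumption the argument calls into question for cyber risk:

```python
def naive_premium(incidents_per_policy_year: list,
                  payouts: list,
                  loading: float = 0.3) -> float:
    """Naive frequency-severity pricing: expected annual loss per policy,
    plus a loading for expenses and profit. Sound only when the past
    predicts the future."""
    # Mean number of incidents per insured year.
    frequency = sum(incidents_per_policy_year) / len(incidents_per_policy_year)
    # Mean payout per incident.
    severity = sum(payouts) / len(payouts)
    expected_loss = frequency * severity
    return expected_loss * (1.0 + loading)

# Hypothetical history: incident counts for five insured years,
# and the payout for each of the four incidents.
premium = naive_premium([0, 1, 0, 2, 1],
                        [50_000.0, 20_000.0, 80_000.0, 10_000.0])
```

If the environment shifts (new attack techniques, changed network conditions), both `frequency` and `severity` drift away from their historical estimates, and the computed premium loses its actuarial basis.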
Companies that offer insurance can benefit from requiring compliance with industrywide standards, since the domain of eligible artifacts is then narrowed, which simplifies predictions about possible adverse incidents and payouts. Good security standards also will reduce the likelihood of adverse incidents. However, any security standard would be equivalent to a list of approved components or allowed classes of behavior. Such a list can only rule out certain attacks, and it can limit opportunities for innovation, so security standards are unlikely to be popular with software producers.
Deterrence. Finally, deterrence is considerably less effective in cyberspace than in the physical world.