tant yet subtle aspects of an engineering discipline is understanding how to think about it—the underlying attitude that feeds insight. In the same way that failure motivates and informs dependability principles, cyberattack motivates and informs cybersecurity principles. Ideas on how to effectively defend a system, both during design and operation, must come from an understanding of how cyberattacks succeed.
Rationale. How does one prevent attacks without knowing the mechanisms by which attacks succeed? How does one detect attacks without knowing how attacks manifest? It is not possible. Thus, students of cybersecurity must be students of cyberattacks and adversarial behavior.
Implications. Cybersecurity engineers and practitioners should take courses and read books on ethical hacking. They should study cyberattacks, and particularly the post-attack analyses performed by experts and published or presented at conferences such as Black Hat and DEF CON. They should perform attacks within lab environments designed specifically for safe experimentation. Lastly, when successful attacks do occur, cybersecurity analysts must closely study them for root causes and for their implications for improved component design, operations, architecture, and policy. “Understanding failure is the key to success” {07.04}. For example, the five-whys analysis technique used by the National Transportation Safety Board (NTSB) to investigate aviation accidents9 is worth replicating and adapting to mine all the useful, hard-earned defensive information from the pain of a successful cyberattack.
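As a purely hypothetical illustration (the incident, findings, and helper below are invented, not from the text), a five-whys chain for a breach might be recorded and rendered as:

```python
# Hypothetical five-whys chain for an invented incident; each entry
# answers "why?" about the finding before it. Illustrative only.
five_whys = [
    ("Customer records were exfiltrated", "Why?"),
    ("An attacker reached the database from the web tier", "Why?"),
    ("A web endpoint was vulnerable to SQL injection", "Why?"),
    ("Input validation was never added to a legacy endpoint", "Why?"),
    ("No security-review gate existed for legacy code changes", "Root cause"),
]

def report(chain):
    """Render the chain as a numbered root-cause trail."""
    return [f"{i + 1}. {finding} ({tag})" for i, (finding, tag) in enumerate(chain)]

for line in report(five_whys):
    print(line)
```

The point of the exercise is the last line: remediation aims at the root cause, not only at the first symptom.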
• Espionage, sabotage, and influence are goals underlying cyberattack {06.02}.
Description. Understanding adversaries requires understanding their motivations and strategic goals. Adversaries have three basic categories of goals: espionage—stealing secrets to gain unearned value, or to destroy value by revealing stolen secrets; sabotage—hampering operations to slow progress, to gain competitive advantage, or to destroy for ideological purposes; and influence—affecting decisions and outcomes to favor an adversary’s interests and goals, usually at
curiosity and to attract more computer scientists and engineers to this important problem area. The ordering here is completely different from that in the book, so as to provide a logical flow for the presented subset.
Each primary principle includes a description of what the principle entails, a rationale for the creation of the principle, and a brief discussion of its implications for the cybersecurity discipline and its practice.
• Cybersecurity’s goal is to optimize mission effectiveness {03.01}.
Description. Systems have a primary purpose or mission—to sell widgets, manage money, control chemical plants, manufacture parts, connect people, defend countries, fly airplanes, and so on. Systems generate mission value at a rate that is affected by the probability of failure from a multitude of causes, including cyberattack. The purpose of cybersecurity design is to reduce the probability of failure from cyberattack so as to maximize mission effectiveness.
Rationale. Some cybersecurity engineers mistakenly believe their goal is to maximize cybersecurity under a given budget constraint. This excessively narrow view misapprehends the nature of the engineering trade-offs with other aspects of system design, and it causes significant frustration among the cybersecurity designers, the stakeholders in the mission system, and senior management (who must often adjudicate disputes between these teams). In reality, all teams are trying to optimize mission effectiveness. This realization places them in a collegial rather than an adversarial relationship.
Implications. Cybersecurity is always in a trade-off with mission functionality, performance, cost, ease of use, and many other important factors. These trade-offs must be intentionally and explicitly managed. It is only in consideration of the bigger picture of optimizing the mission that these trade-offs can be made in a reasoned manner.
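This trade-off can be made concrete with a toy model. The formula and every number below are assumptions for illustration only: mission value accrues at some annual rate, is discounted by the probability of mission failure from cyberattack, and is reduced by the total cost of the security measures themselves.

```python
# Toy model (illustrative assumptions only): net mission effectiveness as
# value earned, discounted by attack-induced failure, minus the total cost
# of the security measures that lower that failure probability.
def mission_effectiveness(value_rate, p_failure, security_cost):
    """Expected net mission value per year under a given security posture."""
    return value_rate * (1.0 - p_failure) - security_cost

# Two hypothetical postures for a system generating $10M/year:
baseline = mission_effectiveness(10_000_000, p_failure=0.20, security_cost=0)
hardened = mission_effectiveness(10_000_000, p_failure=0.05, security_cost=500_000)

# Here hardening wins: the reduction in failure probability is worth
# more than the security spend.
print(baseline, hardened)
```

Note that the objective being maximized is mission effectiveness, not security for its own sake; spending more on security than the failure probability it removes is worth would lower the score.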
• Cybersecurity is about understanding and mitigating risk {02.01}.
Description. Risk is the primary metric of cybersecurity. Therefore, understanding the nature and source of risk is key to applying and advancing the discipline. Risk measurement is foundational to improving cybersecurity {17.04}. Conceptually, cybersecurity risk is simply the probability of cyberattacks occurring multiplied by the potential damages that would result if they actually occurred. Estimating both of these quantities is challenging, but possible.
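In code form, this definition of risk is just an expected loss. The annual framing, the scenarios, and all numbers below are invented for illustration:

```python
# Risk as expected loss: probability of a successful attack in some period
# multiplied by the damage if it occurs. All values are hypothetical.
def cyber_risk(p_attack, damage):
    """Expected loss from one attack scenario."""
    return p_attack * damage

# A portfolio of invented scenarios: (annual probability, damage in $).
scenarios = [
    (0.30, 2_000_000),   # ransomware outage
    (0.05, 15_000_000),  # customer-data breach
    (0.01, 50_000_000),  # sabotage of a control system
]

total_risk = sum(cyber_risk(p, d) for p, d in scenarios)
print(f"${total_risk:,.0f} expected annual loss")
```

Even rough estimates of the two factors, with stated uncertainty, give a quantity that can be compared across scenarios and over time.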
Rationale. Engineering disciplines require metrics to: “characterize the nature of what is and why it is that way, evaluate the quality of a system, predict system performance under a variety of environments and situations, and compare and improve systems continuously.”7 Without a metric, it is not possible to decide whether one system is better than another. Many fellow cybersecurity engineers complain that risk is difficult to measure and especially difficult to quantify, but proceeding without a metric is impossible. Thus, doing the hard work required to measure risk, with a reasonable uncertainty interval, is an essential part of the cybersecurity discipline. Sometimes, it seems the cybersecurity community spends more energy complaining about how difficult metrics are to create and measure accurately than it does getting on with creating and measuring them.
Implications. With risk as the primary metric, risk reduction becomes the primary value and benefit of any cybersecurity measure—technological or otherwise. The total cost of cybersecurity, on the other hand, is calculated in terms of the direct cost of procuring, deploying, and maintaining the cybersecurity mechanism, as well as the indirect costs of mission impacts such as performance degradation, delay to market, capacity reductions, and reduced usability. With risk reduction as a benefit metric and an understanding of total costs, one can then reasonably compare alternative cybersecurity approaches in terms of risk-reduction return on investment. For example, there are often no-brainer actions, such as properly configuring existing security mechanisms (for example, firewalls and intrusion detection systems), that cost very little but significantly reduce the probability of successful cyberattack. Picking such low-hanging fruit should be the first step any organization takes to improve its operational cybersecurity posture.
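A minimal sketch of such a comparison, with invented candidate measures, dollar figures, and a simple ranking rule (none of these specifics come from the text):

```python
# Compare candidate security measures by risk-reduction ROI:
# dollars of expected loss avoided per dollar of total cost.
# All entries are hypothetical.
measures = [
    # (name, risk reduction in $/year, total cost in $/year)
    ("Reconfigure existing firewalls/IDS", 400_000, 20_000),
    ("Deploy new endpoint-detection suite", 900_000, 300_000),
    ("Rewrite legacy app with input validation", 600_000, 450_000),
]

def roi(risk_reduction, cost):
    """Risk-reduction return on investment."""
    return risk_reduction / cost

ranked = sorted(measures, key=lambda m: roi(m[1], m[2]), reverse=True)
for name, reduction, cost in ranked:
    print(f"{name}: {roi(reduction, cost):.1f}x")
```

In this invented portfolio, the cheap configuration fix ranks first by a wide margin, which is exactly the low-hanging-fruit pattern described above.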
• Theories of security come from theories of insecurity {02.03}.
Description. One of the most impor-