Rationale. Understanding the strategic goals of adversaries illuminates their value system. A value system suggests which attack goals a potential adversary is likely to invest in most heavily, and perhaps gives insight into how they will pursue those goals. Different adversaries will place different weights on
different goals within each of the three
categories. Each will also be willing to
spend different amounts to achieve
their goals. Clearly, a nation-state intelligence organization, a transnational
terrorist group, an organized-crime group, a hacktivist, and a misguided teenager
trying to learn more about cyberattacks
all have very different profiles with respect to these goals and their investment levels. These differences affect
their respective behaviors with respect
to different cybersecurity architectures.
Implications. In addition to informing the cybersecurity designer and operator (one who monitors status and
controls the cybersecurity subsystem
in real time), understanding attacker
goals allows cybersecurity analysts to
construct goal-oriented attack trees
that are extraordinarily useful in guiding design and operation because they
give insight into attack probability and
attack sequencing. Attack sequencing, in turn, gives insight into getting ahead of attackers at interdiction points within the attack sequence {23.18}.
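As a concrete illustration, consider the following Python sketch of a goal-oriented attack tree. It is only a sketch: the tree structure, attack-step names, and probabilities are invented for illustration, not drawn from any real system. AND-nodes model a forced attack sequence (each step a candidate interdiction point), while OR-nodes model alternative paths to the same goal.

from dataclasses import dataclass, field
from math import prod
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "LEAF"          # "AND", "OR", or "LEAF"
    p: float = 0.0              # estimated success probability (leaves only)
    children: List["Node"] = field(default_factory=list)

def p_success(n: Node) -> float:
    # Rough probability that the adversary achieves this (sub)goal.
    if n.gate == "LEAF":
        return n.p
    qs = [p_success(c) for c in n.children]
    if n.gate == "AND":         # every step in the sequence must succeed
        return prod(qs)
    return 1.0 - prod(1.0 - q for q in qs)  # OR: any one path suffices

# Hypothetical root goal with one sequenced path and one alternative.
root = Node("exfiltrate customer database", "OR", children=[
    Node("phish, escalate, exfiltrate", "AND", children=[
        Node("phish an administrator", p=0.30),   # interdiction: MFA, training
        Node("escalate privileges", p=0.50),      # interdiction: patching
        Node("exfiltrate over DNS", p=0.70),      # interdiction: egress monitoring
    ]),
    Node("exploit the public web application", p=0.10),
])

print(f"root goal probability: {p_success(root):.3f}")

Interdicting any single step of the AND-branch (for example, hardening privilege escalation) drives that whole branch's probability down, which is the sense in which sequencing exposes where a defender can get ahead of the attacker.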
• Assume your adversary knows your
system well and is inside it {06.05}.
Description. Secrecy is fleeting and
thus should never be depended upon
more than is absolutely necessary
{03.05}. This is true of data but applies even more strongly with respect
to the system itself {05.11}. It is unwise to make rash, unprovable assumptions about what a potential adversary may or may not know. It is much safer
to assume they know at least as much
as the designer does about the system.
Beyond adversary knowledge of the system, a good designer makes the stronger assumption that an adversary has
managed to co-opt at least part of the
system sometime during its life cycle.
It must be assumed the adversary has altered some component, gaining a degree of control over its function so that it can operate as the adversary's inside agent.
Rationale. First, there are many opportunities for a system design and implementation to be exposed and subverted along its entire life cycle. Early development work is rarely protected very carefully. System components are often reused from previous projects or open source. Malicious changes can easily escape notice during system integration and testing because of the complexity of the software and hardware in modern systems. The maintenance and update phases are also vulnerable to both espionage and sabotage. The adversary also has an opportunity to study a system stealthily during operation by infiltrating and observing it, learning how the system works in reality, not just how the designer intended it to work (which can be significantly different, especially after an appreciable time in operation). Second, the potential failure from making too weak an assumption could be catastrophic to the system’s mission, whereas making strong assumptions could merely make the system more expensive. Clearly, both probability (driven by opportunity) and prudence suggest making the more conservative assumptions.

Implications. The implications of assuming the adversary knows the system at least as well as the designers and operators are significant. The principle means cybersecurity designers must spend substantial resources minimizing the probability of flaws in design and implementation through the design process itself, and performing extensive testing, including penetration and red-team testing focused specifically on looking at the system from an adversary’s perspective. The principle also implies a cybersecurity engineer must understand the residual risks in terms of any known weaknesses. The design must compensate for those weaknesses through architecture (for example, specifically focusing the intrusion detection system on possible exploitation of those weaknesses, as sketched below), rather than hoping the adversary does not find them because they are “buried too deep” or, worse yet, because the defender believes the attacker is “not that sophisticated.” Underestimating the attacker is hubris. As the saying goes: pride comes before the fall {06.04}. Assuming the attacker is (partially) inside the system requires the designer [...] the expense of those of the defender.
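To make the architectural compensation concrete, the following Python sketch shows one way detection could be focused on known residual weaknesses; the component names, weakness descriptions, and event format are hypothetical, invented purely for illustration.

# Catalog of known residual weaknesses (hypothetical entries).
KNOWN_WEAKNESSES = {
    "legacy-auth-service": "accepts protocol downgrade",
    "report-generator": "unsafe deserialization of uploads",
}

def triage(event: dict) -> str:
    # Escalate any event touching a component with a known weakness,
    # rather than relying on the weakness staying "buried too deep."
    component = event.get("component", "")
    if component in KNOWN_WEAKNESSES:
        return (f"HIGH PRIORITY: {component} "
                f"({KNOWN_WEAKNESSES[component]}): {event['msg']}")
    return f"routine: {event['msg']}"

print(triage({"component": "report-generator",
              "msg": "deserialized object of unexpected type"}))

The point is not the particular mechanism but the posture: detection resources are concentrated where the engineer already knows the design is weakest.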