there by the designer but were created by the physical implementation, often unhelpfully sucking away signals or power. Today we have parasitic computers: many components have unintended computational power, which can be perverted, from the x86 page-fault handler2 to DMA controllers.16 This makes it a challenge to understand where all the computation is happening, and what is software rather than hardware.
Toward Robustly Engineered Trustworthy Systems
Total-system approaches to security defenses are important (see, for example, Bellovin3). A further lesson from physical-layer attacks is why such attacks are not more of a threat today: there are further layers of protection. It is not enough to extract the cryptographic key from a banking card using laser fault injection; the attacker must also use it to steal money. At this point the bank’s system-level defenses apply, such as transaction limits and fraud detection. If the key relates only to one account, the payoff involves only money held by that customer, not all other customers. Application-level compartmentalization limits the reward, and thus makes the attack economically nonviable.
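That economic argument can be made concrete with a back-of-the-envelope sketch; every figure below is hypothetical, chosen only to illustrate how per-account keys bound the payoff while a single global key scales it with the whole customer base.

```python
# Hypothetical numbers, purely for illustration of the economics.
attack_cost = 50_000            # lab equipment, expertise, time to extract one key
per_account_limit = 5_000       # cap from transaction limits and fraud detection
accounts = 1_000_000

# Per-account keys: payoff bounded by one customer's exposure.
payoff_compartmentalized = per_account_limit
# A single global key: payoff scales with every customer.
payoff_global = per_account_limit * accounts

print(payoff_compartmentalized - attack_cost < 0)  # True: attack does not pay
print(payoff_global - attack_cost > 0)             # True: attack pays handsomely
```

The point is not the specific numbers but the structure: compartmentalization changes the sign of the attacker's profit calculation.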
Another approach is to ensure that richer contextual information is available that allows the hardware to understand and enforce security properties. The authors are on a team designing, developing, and formally analyzing the CHERI hardware instruction-set architecture,20 as well as CHERI operating system and application security. The CHERI ISA can enable hardware to enforce pointer provenance, arbitrarily fine-grained access controls to virtual memory and to abstract system objects, as well as both coarse- and fine-grained compartmentalization. Together, these can provide enforceable separation and controlled sharing, allowing trustworthy and untrustworthy software (including unmodified legacy code) to coexist securely. Since the hardware is aware of software constructs such as pointers and compartments, it can protect them, and we can reason about the protection guarantees, for example by formally proving that the architectural abstraction enforces
processor hardware (typically subject to extensive verification) has long been assumed to provide a solid foundation for software, but increasingly suffers from its own vulnerabilities. Second, increasing complexity and the way systems are composed of many hardware/software pieces, from many vendors, means one cannot think just in terms of a single-processor architecture. We need to take a holistic view that acknowledges the complexities of this landscape. Third, and most seriously, these new attacks involved phenomena that cut across the traditional architectural abstractions, which have intentionally described only the envelopes of allowed functional behavior of hardware implementations, to allow implementation variation in performance. That flexibility has been essential to hardware performance increases, but the attacks involve subtle information flows via performance properties. They expose the hidden consequences of some of the microarchitectural innovations that have given us ever-faster sequential computation in recent decades, as caching and prediction lead to side-channels.
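The way cached state can leak secrets can be sketched with a toy model, not any specific exploit: a flush+reload-style attack in which a victim's secret-dependent memory access leaves a cache line warm, and an attacker who can tell hits from misses (via timing, on real hardware) reads the secret back. The cache, victim, and attacker below are all invented simulations.

```python
# Toy model of a flush+reload cache side-channel. Purely illustrative;
# all names and structures are hypothetical.

class ToyCache:
    def __init__(self):
        self.lines = set()          # addresses of currently cached lines

    def access(self, addr):
        hit = addr in self.lines    # a real attacker infers this from timing
        self.lines.add(addr)        # any access fills the line
        return hit

    def flush(self):
        self.lines.clear()

def victim(cache, secret, probe_base=0x1000, line_size=64):
    # The victim touches one cache line whose index depends on the secret byte.
    cache.access(probe_base + secret * line_size)

def attacker_recover(cache, probe_base=0x1000, line_size=64, n=256):
    # "Reload" phase: the one line that hits reveals the secret byte.
    return [b for b in range(n)
            if cache.access(probe_base + b * line_size)][0]

cache = ToyCache()
cache.flush()                   # "flush" phase: empty the cache
victim(cache, secret=0x2A)      # secret-dependent fill
print(attacker_recover(cache))  # recovers 42 in this toy model
```

Nothing here violates the architectural contract: every value computed is permitted; only the *performance-visible* state betrays the secret, which is exactly why such channels slipped past abstractions that describe functional behavior alone.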
Hardware Vulnerabilities
Ideally, security is built from the ground up. Can we solve the problem by building the foundations of secure hardware?
For years, hardware security has, to many people, meant focusing on the physical layers. Power and electromagnetic side-channels and fault injection are common techniques for extracting cryptographic secrets by manipulating the physical implementation of a chip. These are not without effectiveness, but it is notable that the new spate of attacks represents entirely different, and more potent, attack vectors.
One lesson from the physical-layer security community is that implementation is critical. Hardware description languages (HDLs) are compiled down to connections between library logic cells; the logic cells are then placed and routed, and the chip-layer designs produced. One tiny slip, at any level from architecture to HDL source and compiler, to cell transistor definitions, routing, power, thermals, electromagnetics, dopant concentrations, and crystal lattices, can cause a potentially exploitable malfunction. Unlike the binary code of malware, many of these physical properties cannot be observed. As a result, systems are more vulnerable to both design mistakes and supply-chain attacks.
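How small can such a slip be? A deliberately artificial sketch: a one-bit unlock checker in which a single cell substitution during synthesis or layout (an AND swapped for an OR, same pinout) silently turns the checker into a backdoor. The circuit and signal names are invented for illustration.

```python
# Hypothetical 1-bit "unlock" checker: grant = key_ok AND request.
def unlock_intended(key_ok, request):
    return key_ok and request

# A single-cell slip: the AND cell replaced by an OR cell.
def unlock_slipped(key_ok, request):
    return key_ok or request

# Legitimate use is unchanged, so ordinary testing may never notice:
print(unlock_intended(True, True), unlock_slipped(True, True))    # True True
# The behavior differs in exactly the exploitable case:
print(unlock_intended(False, True), unlock_slipped(False, True))  # False True
```

Because the two circuits agree on every input a legitimate user exercises, the defect is invisible to functional testing of normal operation, and at dopant or routing level it may be invisible to inspection as well.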
As the recent attacks demonstrate, side-channels are becoming more powerful than expected. Traditional physical-layer side-channels are a signals-from-noise problem: record enough traces of the power usage and, with powerful enough signal processing, you can extract secrets. Architectural side-channels have more bandwidth and better signal-to-noise ratios, leaking much more data more reliably.
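The signals-from-noise character of the physical-layer case can be sketched numerically: a tiny key-dependent component buried in noise becomes reliable once enough traces are averaged, since the noise in the mean shrinks roughly with the square root of the trace count. The trace model below is invented for illustration.

```python
import random

random.seed(0)  # deterministic noise for this sketch

def power_trace(key_bit, noise=1.0):
    # One hypothetical power sample: small key-dependent leak + large noise.
    leak = 0.1 if key_bit else -0.1
    return leak + random.gauss(0.0, noise)

def recover_bit(key_bit, n_traces):
    # Average many traces; the leak survives, the noise cancels.
    avg = sum(power_trace(key_bit) for _ in range(n_traces)) / n_traces
    return avg > 0.0

# A single trace is nearly useless (noise is 10x the leak);
# a hundred thousand traces make recovery essentially certain.
print(recover_bit(True, 100_000))   # True with overwhelming probability
print(recover_bit(False, 100_000))  # False with overwhelming probability
```

An architectural side-channel such as a cache channel needs no such averaging: each observation is close to noise-free, which is what "more bandwidth and better signal-to-noise" means in practice.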
If we take a systems-oriented view, what can we say about the problem? First of all, the whole is often worse than the sum of its parts. Systems are composed of disparate components, often sourced from different vendors, and often granted much greater access to resources than needed to fulfill their purpose; this can be a boon for attackers. For example, in Google Project Zero’s attack on the Broadcom Wi-Fi chip inside iPhones,4 the attackers jumped from bad Wi-Fi packets to installing malicious code on the Wi-Fi chip, and then to compromising iOS on the application processor. Their ability to use the Wi-Fi chip as a springboard multiplied their efficacy. It is surprisingly difficult to reason about the behavior of such compositions of components.5 Attackers may create new side-channels through unexpected connections, for example, a memory DIMM that can send network packets via an I2C bus shared with an Ethernet controller.17
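One way to see why composition is hard is to treat components' access rights as a graph and ask what is transitively reachable from an attacker's foothold. The graph below is hypothetical, loosely modeled on the Wi-Fi-chip-to-application-processor escalation described above; the point is that every edge looks reasonable locally, yet the transitive closure is alarming.

```python
# Hypothetical access graph: an edge means "can inject code/data into".
access = {
    "wifi_packet": ["wifi_chip"],       # malformed packets compromise firmware
    "wifi_chip":   ["shared_dram"],     # the chip has DMA to shared memory
    "shared_dram": ["application_os"],  # the OS trusts that memory
    "i2c_dimm":    ["ethernet_ctrl"],   # an unexpected shared I2C bus
}

def reachable(start):
    # Depth-first traversal: everything an attacker at `start` can reach.
    seen, stack = set(), [start]
    while stack:
        for nxt in access.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A foothold in radio packets reaches the application OS transitively.
print(sorted(reachable("wifi_packet")))
# → ['application_os', 'shared_dram', 'wifi_chip']
```

Each vendor can audit its own edges and find nothing wrong; the vulnerability lives in the composition, which no single party models.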
Hardware engineers often talk about ‘parasitic’ resistance or capacitance: components that were not put

Designers need to understand more of what takes place in layers above or below their field of expertise.