This single administrative toolstack is an artifact of the way hypervisors have been designed rather than a fundamental limitation of hypervisors themselves.

ization attractive to cloud computing.
Statically partitioning resources affects
the efficiency and utilization of the system, as cloud providers are no longer
able to multiplex several virtual machines onto a single set of physical resources. As trusted platforms beneath
OSes, hypervisors are conveniently
placed to interpose on memory and device requests, a facility often leveraged
to achieve promised levels of security
and availability.
Live migration9 involves moving
a running virtual machine from one
physical host to another without interrupting its execution. Primarily used
for maintenance and load balancing, it allows providers to seamlessly
change virtual to physical placements
to better balance workloads or simply
free up a physical host for hardware or
software upgrades. Both live-migration
and fault-tolerant solutions rely on the
ability of the hypervisor to continually
monitor a virtual machine’s memory
accesses and mirror them to another
host. Interposing on memory accesses
also allows hypervisors to “deduplicate,” or remove redundant copies,
and compress memory pages across
virtual machines. Supporting several
key features of cloud computing, virtualization will likely be seen in cloud deployments for the foreseeable future.
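As a rough illustration (mine, not the article's), content-based page deduplication can be sketched as hashing each page's contents and backing identical pages with a single shared copy; a real hypervisor would compare raw 4KB frames and mark shared copies copy-on-write:

```python
# Sketch of content-based memory-page deduplication (illustrative only).
# Pages with identical contents are backed by one physical copy.
import hashlib

PAGE_SIZE = 4096  # bytes; typical x86 page size

def deduplicate(pages):
    """Return (unique_pages, mapping), where mapping[i] is the index
    into unique_pages of the shared copy backing logical page i."""
    unique = []    # one physical copy per distinct content
    by_hash = {}   # content hash -> index into `unique`
    mapping = []
    for page in pages:
        h = hashlib.sha256(page).hexdigest()
        if h not in by_hash:
            by_hash[h] = len(unique)
            unique.append(page)
        mapping.append(by_hash[h])
    return unique, mapping

# Two VMs sharing an identical zeroed page:
vm_pages = [b"\x00" * PAGE_SIZE,
            b"kernel text" + b"\x00" * (PAGE_SIZE - 11),
            b"\x00" * PAGE_SIZE]
unique, mapping = deduplicate(vm_pages)
# three logical pages are now backed by only two physical copies
```

In practice the hypervisor can do this transparently precisely because, as noted above, it interposes on every memory access.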
Small Enough?
Arguments for trusting the virtualization platform often focus on TCB size;
as a result, TCB reduction continues to
be an active area of research. While significant progress—from shrinking the
hypervisor to isolating and removing
other core services of the platform—
has been made, in the absence of full
hardware virtualization support for every device, the TCB will never be completely empty.
At what point is the TCB “small
enough” to be considered secure?
Formal verification is a technique to mathematically prove the “correctness” of a piece of code by comparing an implementation with a corresponding specification of expected behavior. Although capable of guaranteeing
an absence of programming errors,
it does only that; while proving the
realization of a system conforms to a
given specification, it does not prove
the security of the specification or the
system in any way. Or to borrow one
practitioner’s only somewhat tongue-in-cheek observation: It “…only shows that every fault in the specification has been precisely implemented in the system.”31 Moreover, formal verification quickly becomes intractable for large pieces of code. While it has proved applicable to some microkernels,19 and despite ongoing efforts to formally verify Hyper-V,22 no virtualization platform has been shrunk enough to be formally verified.
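A toy example (mine, not the article's) makes the spec-versus-implementation distinction concrete. In the Lean 4 fragment below, the theorem proves the implementation conforms to its specification; nothing, however, checks whether the specification itself is the right one. Here the spec only requires the result to bound both inputs, so a "verified" function returning, say, `x + y` would also pass:

```lean
-- Implementation under scrutiny.
def myMax (x y : Nat) : Nat := if x ≤ y then y else x

-- Specification of expected behavior: the result bounds both inputs.
-- Proving this guarantees conformance to the spec; it does not show
-- the spec captures what we actually wanted (it never even says the
-- result must be one of the inputs).
theorem myMax_meets_spec (x y : Nat) :
    x ≤ myMax x y ∧ y ≤ myMax x y := by
  unfold myMax
  constructor <;> split <;> omega
```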
Software exploits usually leverage existing bugs to modify the flow of execution and cause the program to perform an unauthorized action. In code-injection exploits, attackers typically add code to be executed via vulnerable buffers. Hardware security features help mitigate such attacks by preventing execution of injected code; for example, the no-execute (NX) bit helps segregate regions of memory into code and data sections, disallowing execution of instructions resident in data regions, while supervisor mode execution protection (SMEP) prevents transferring execution to regions of memory controlled by unprivileged, user-mode processes while executing in a privileged context.

Another class of attacks called “return-oriented programming”28 leverages code already present in the system rather than adding any new code and is not affected by these security enhancements. Such attacks rely on small snippets of existing code, or “gadgets,” that immediately precede a return instruction. By controlling the call stack, the attacker can cause execution to jump between the gadgets as desired. Since all executed code is original read-only system code, neither NX nor SMEP is able to prevent the attack. While such exploits seem cumbersome and impractical, techniques are available to automate the process.17
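The mechanics can be shown with a toy simulation (hypothetical, not a real exploit): each "gadget" is a preexisting, read-only snippet that does one small operation and then "returns," popping the next gadget address off an attacker-controlled stack. The attacker supplies only addresses, never code, which is why NX-style protections do not apply:

```python
# Toy model of return-oriented programming (illustrative only).
# The gadget names mimic the machine code they stand in for.

def gadget_load_5(state):   # e.g. "mov eax, 5 ; ret"
    state["eax"] = 5

def gadget_double(state):   # e.g. "add eax, eax ; ret"
    state["eax"] *= 2

def gadget_store(state):    # e.g. "mov [mem], eax ; ret"
    state["mem"] = state["eax"]

# The "binary": a fixed, read-only table of legitimate code snippets.
CODE = {0x1000: gadget_load_5, 0x2000: gadget_double, 0x3000: gadget_store}

def execute(stack, state):
    """Simulate the CPU's return loop: each 'ret' pops the next
    gadget address from the attacker-controlled stack."""
    while stack:
        addr = stack.pop(0)
        CODE[addr](state)   # only preexisting code ever runs

# Attacker's chosen chain: load 5, double it twice, store the result.
state = {"eax": 0, "mem": 0}
execute([0x1000, 0x2000, 0x2000, 0x3000], state)
# state["mem"] is now 20, computed without injecting any code
```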
Regardless of methodology, most
exploits rely on redirecting execution flow in an unexpected and undesirable way. Control-flow integrity
(CFI) prevents such attacks by ensuring the program jumps only to
predefined, well-known locations
(such as functions, loops, and conditionals). Similarly, returns may transfer execution only to valid
function-call sites. This protection is
typically achieved by inserting guard