to modify data in the system (including
by any system calls it might perform)
the access control policy does not explicitly allow it to modify. For instance,
in Figure 2, the only authority component A has over another component is
the send right to the endpoint from
which component B receives. This
means the maximum state change A
can effect in the system is in A itself
and in B’s thread state and message
buffer. It cannot modify any other parts
of the system.
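As a toy sketch of this integrity statement (the policy encoding and the `writable_by` function are illustrative assumptions, not seL4's formal model), the set of components whose state a subject can modify is the subject itself plus the receivers of any endpoint it may send on:

```python
# Illustrative model, not seL4 code: the authorities each subject holds.
policy = {
    "A": {"send": {"EP"}},     # A's only authority: send on endpoint EP
    "B": {"receive": {"EP"}},  # B receives on EP
}

def writable_by(subject, policy):
    """Components whose state `subject` may change under this policy."""
    targets = {subject}  # a component may always modify its own state
    for ep in policy[subject].get("send", set()):
        # Sending on an endpoint may update the receiver's thread state
        # and message buffer, but nothing else.
        targets |= {s for s, auth in policy.items()
                    if ep in auth.get("receive", set())}
    return targets

print(sorted(writable_by("A", policy)))  # ['A', 'B']
print(sorted(writable_by("B", policy)))  # ['B']
```

The integrity theorem then guarantees that every concrete execution of the kernel stays within such policy-derived bounds.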
The dual of integrity is confidentiality, which states that a component
cannot read another component’s
data without permission. A subsequent proof29 established the
stronger property of intransitive noninterference for seL4; that is, given a
suitably configured system (with stronger restrictions than for integrity), no
component is able to learn information
about another component or its execution without explicit permission. The
proof expresses this property in terms
of an information-flow policy that can
be extracted from the access-control
policy used in the integrity proof. Information will flow only when explicitly
allowed by the policy. The proof covers explicit information flows, as well
as potential in-kernel covert storage
channels, but timing channels are outside its scope and must be addressed
through different means.6
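The extraction of an information-flow policy from an access-control policy can be sketched as follows (a hypothetical encoding, not the seL4 formalism): information may flow from a subject holding a send right on an endpoint to a subject holding the matching receive right.

```python
# Hypothetical encoding, not the seL4 formalism: authorities per subject.
policy = {
    "A": {"send": {"EP"}},     # A may send on endpoint EP
    "B": {"receive": {"EP"}},  # B may receive on EP
}

def flow_policy(policy):
    """Extract allowed information-flow edges: information may flow from
    a sender on an endpoint to a receiver on the same endpoint."""
    return {(src, dst)
            for src, a in policy.items()
            for dst, b in policy.items()
            if a.get("send", set()) & b.get("receive", set())}

print(sorted(flow_policy(policy)))  # [('A', 'B')]
```

Intransitive noninterference then states that no information flows other than those edges explicitly present in the extracted policy.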
Further proofs about seL4 include the extension of functional correctness, and thus the security theorems, to the binary level for the ARMv7 architecture35 and a sound worst-case execution time profile for the kernel,2,34 necessary for real-time systems. The seL4 kernel is available for multiple architectures (ARMv6, ARMv7, ARMv7a, ARMv8, RISC-V, Intel x86, and Intel x64), and its machine-checked proof33 is current on the ARMv7 architecture for the whole verification stack, as well as on ARMv7a with hypervisor extensions for functional correctness.
Security by Architecture
The previous section summarized the seL4 kernel, which software engineers can use as a strong foundation for provably trustworthy systems. The kernel
forms the bottom layer of the trusted
computing base (TCB) of such systems. The TCB is the part of the software that needs to work correctly for
the security property of interest to
hold. Real systems have a much larger
TCB than just the microkernel they
run on, and more of the software stack
would need to be formally verified to
gain the same level of assurance as for
the kernel. However, there are classes of systems for which this is not necessary, because the kernel-level isolation theorems are already enough to
enforce specific system-level security
properties. This section includes an
example of such a system.
The systems for which this works
are those in which component architectures alone already enforce the critical property, potentially together with a
few small, trusted components. Our example is the mission-control software
of a quadcopter that was the research-demonstration vehicle in the HACMS
program mentioned earlier.
Figure 3 outlines the quadcopter’s
main hardware components. It is in-
tentionally more complex than needed
for a quadcopter, as it is meant to be
page tables, and frame objects, or thin
abstractions over the corresponding
entities of the processor architecture.
Each thread has a designated “VSpace”
capability that points to the root of the
thread’s address-translation object
tree. Capabilities themselves are managed by the kernel and stored in kernel objects called “CNodes,” arranged in a graph structure that maps object references to access rights, analogous to page tables mapping virtual to physical addresses. Each thread has a distinguished capability identifying a root CNode. We call the set of capabilities reachable from this root the thread’s “CSpace.” Capabilities can be transmitted over endpoints with the grant operation and can be shared via shared CSpaces. Figure 2 illustrates these kernel objects with an example.
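The analogy between CSpace lookup and a page-table walk can be sketched in a toy model (class names, slot encoding, and the lookup routine are illustrative assumptions, not the seL4 API): a CNode maps slot indices either to a capability or to a further CNode, so resolving a capability reference walks a tree of nodes.

```python
# Illustrative sketch, not the seL4 API.
class Cap:
    """A capability: an object reference plus access rights."""
    def __init__(self, obj, rights):
        self.obj, self.rights = obj, set(rights)

class CNode:
    """A kernel object mapping slot indices to Caps or further CNodes."""
    def __init__(self):
        self.slots = {}

def lookup(cnode, path):
    """Resolve a sequence of slot indices to a capability, or None;
    structurally similar to a multi-level page-table walk."""
    node = cnode
    for idx in path:
        entry = node.slots.get(idx)
        if isinstance(entry, CNode):
            node = entry          # descend into the next CNode level
        else:
            return entry          # a Cap, or None if the slot is empty
    return None                   # path ended on a CNode, not a Cap

root, inner = CNode(), CNode()
inner.slots[3] = Cap("endpoint", {"send"})
root.slots[1] = inner
cap = lookup(root, [1, 3])
print(cap.obj, sorted(cap.rights))  # endpoint ['send']
```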
Security proofs. With its generality,
seL4’s kernel API is necessarily low-level
and admits highly dynamic system architectures. Direct reasoning about
this API can thus be a challenge.
The higher-level concept of access
control policies abstracts away from
individual kernel objects and capabilities, capturing instead the access-control configuration of a system via a set of
abstract “subjects” (think components)
and the authorities each has over the
others (such as to read data and send
a message). In the example in Figure 2,
the system would have components A
and B with authority over the endpoint.
Sewell et al.36 proved for such suitable access control policies that seL4
enforces two main security properties:
authority confinement and integrity.
Authority confinement states that
the access control policy is a static (unchanging) safe approximation of the
concrete capabilities and kernel objects in the system for any future state
of execution. This property implies that
no matter how the system develops, no
component will ever gain more authority than the access control policy predicts. In Figure 2, the policy for component B does not contain write access to
component A, and B will thus never be
able to gain this access in the future.
The property thus implies that reasoning at the policy level is a safe approximation over reasoning about the concrete access-control state of the system.
Integrity states that no matter what
a component does, it will never be able
Proof Effort

seL4 design and code development took two person-years. Adding up all seL4-specific proofs over the years comes to a total of 18 person-years for 8,700 lines of C code. In comparison, L4Ka::Pistachio, another microkernel in the L4 family, comparable in size to seL4, took six person-years to develop and provides no significant level of assurance. This means there is only a factor of 3.3 between verified software and traditionally engineered software. According to the estimation method by Colbert and Boehm,8 a traditional Common Criteria EAL7 certification for 8,700 lines of C code would take more than 45.9 person-years. That means formal binary-level implementation verification is already more than a factor of 2.3 less costly than the highest certification level of Common Criteria yet provides significantly stronger assurance.

In comparison, the HACMS approach described here uses only these existing proofs for each new system, including the proofs generated from tools. The overall proof effort for a system that fits this approach is thus reduced to person-weeks instead of years, and testing can be significantly reduced to only validating proof assumptions.
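The sidebar's ratios can be reproduced directly from the figures it quotes:

```python
# Figures quoted in the sidebar (person-years).
design_py = 2       # seL4 design and code development
proofs_py = 18      # all seL4-specific proofs, 8,700 lines of C
pistachio_py = 6    # L4Ka::Pistachio, comparable size, no assurance
eal7_py = 45.9      # estimated Common Criteria EAL7 certification effort

verified_total = design_py + proofs_py
print(round(verified_total / pistachio_py, 1))  # 3.3: verified vs. traditional
print(round(eal7_py / verified_total, 1))       # 2.3: EAL7 vs. full verification
```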