The concurrency rule is the main rule
of CSL. In applying CSL to languages
with dynamic thread creation instead
of parbegin/parend, different rules are
needed, but the basic point that separation allows independent reasoning
about processes carries over.
SL’s concurrency rule took inspiration from the “disjoint concurrency rule” of Hoare.23 Hoare’s rule used ∧ in place of ∗, together with side conditions to rule out interference.b ∗ allows us to extend its applicability to pointer structures. But even without pointers, the CSL rule is more powerful. Indeed, upon seeing CSL Hoare immediately exclaimed to the author: “We can prove parallel quicksort!” A direct proof can be given using ∗ to recognize and unite disjoint array partitions.30
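To give a rough sense of that proof (this is only a sketch in its spirit, not the proof itself; array(a, lo, hi) is an assumed predicate asserting ownership of the cells a[lo..hi], and sorted(a, lo, hi) says in addition that the segment is sorted), the point is that after partitioning, the two remaining segments are ∗-separated, so the two recursive calls have disjoint footprints and the concurrency rule lets them be verified, and run, independently:

    /* Sketch only: the two recursive calls touch ∗-separated array
       segments, so CSL's concurrency rule applies to them. */
    static void swap(int *a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    /* Lomuto partition: returns the final index of the pivot. */
    static int partition(int *a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) swap(a, i++, j);
        swap(a, i, hi);
        return i;
    }

    void quicksort(int *a, int lo, int hi) {
        if (lo >= hi) return;
        int m = partition(a, lo, hi);
        /* { array(a, lo, m-1) ∗ a[m] ↦ pivot ∗ array(a, m+1, hi) } */
        quicksort(a, lo, m - 1);      /* these two calls could run   */
        quicksort(a, m + 1, hi);      /* in parallel: disjoint state */
        /* { sorted(a, lo, m-1) ∗ a[m] ↦ pivot ∗ sorted(a, m+1, hi) } */
    }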
Frames, Footprints,
and Local Reasoning
The previous section describes how the
separating conjunction leads to simple
proofs of the individual steps of heap
mutations, and how the frame rule embeds reasoning about small chunks of
memory within larger memories. Here,
I explain their more fundamental role as a basis for scalable reasoning.
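Since the frame rule is appealed to repeatedly in what follows, it may help to recall its shape (this is its standard form, with the usual side condition stated informally; the article's figures give the official version):

        {P} C {Q}
    --------------------    where C does not modify any variable free in R
    {P ∗ R} C {Q ∗ R}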
I illustrate by reasoning about a recursive program for deleting the nodes
in a binary tree. Consider the C program
in (1) of Figure 5. This program satisfies the specification in (2) of the figure, where the tree predicate says that its argument points to a binary tree in memory. The predicate is defined recursively in (3), with a diagram below depicting what is described by the else part of the definition. Note that here we are using a “points-to” predicate root ↦ [l: x, r: y] for describing records with l and r fields.
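The program itself is not reproduced from the figure, but a C sketch consistent with the description runs as follows (field and variable names are chosen to match the discussion; the figure's exact code may differ), with the specification (2) and the recursive definition (3) recorded as comments:

    #include <stdlib.h>

    struct node { struct node *l; struct node *r; };

    /* (2)  { tree(root) }  deletetree(root)  { emp }
       (3)  tree(r)  is  emp                                          if r = 0
                         ∃x,y. r ↦ [l: x, r: y] ∗ tree(x) ∗ tree(y)   otherwise */
    void deletetree(struct node *root) {
        if (root != 0) {
            struct node *left  = root->l;
            struct node *right = root->r;
            deletetree(left);    /* delete the left subtree  */
            deletetree(right);   /* delete the right subtree */
            free(root);          /* delete the root node     */
        }
    }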
The use of emp in the if branch of
the definition means that tree(r) is true
of a heaplet that contains all and only
the cells in the tree; there are no ad-
ditional cells. Thus, the specification
of deletetree(r) does not mention
nodes not in the tree. This is analogous to what we did with the small axioms for basic statements in Figure 3, and is a typical pattern in SL reasoning: “small specifications” are used which mention only the cells touched by the program component (its footprint).

b There are variable conditions in some presentations of SL that can technically be eliminated by using a version of ∗ that separates variables as well as heap.34 This article glosses over this issue.
The critical part of the proof of the
program is presented in (4), where the precondition at the beginning is obtained by unwinding the recursive definition using the if condition root ≠ 0.
The proof steps then follow the intuitive
description of the algorithm: the first
recursive call deletes the left subtree,
the second call deletes the right sub-
tree, and the final statement deletes the
root node. In the pictured reasoning,
the overall specification of the proce-
dure is applied as an induction hypoth-
esis at each call site, together with the
Frame Rule for showing that the parts
not touched by recursive calls are left
unchanged. For instance, the asser-
tions for the second recursive call are
an instance of the Frame Rule with the
triple {tree(right)} deletetree(right)
{emp} as the premise.
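To spell that instance out (variable names as in the sketch above): at the second call the heap satisfies root ↦ [l: left, r: right] ∗ tree(right), the first call having already reduced tree(left) to emp. Taking root ↦ [l: left, r: right] as the frame, the Frame Rule gives

    { tree(right) ∗ root ↦ [l: left, r: right] }
        deletetree(right)
    { emp ∗ root ↦ [l: left, r: right] }

and since emp is the unit of ∗, the postcondition is just root ↦ [l: left, r: right], which is exactly the heaplet that the final free(root) consumes to leave emp.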
The simplicity of this proof comes
about because of the principle of local
reasoning. The frame rule allows in-
place reasoning for larger-scale opera-
tions (entire procedures) than individual
heap mutations. And it allows the speci-
fication to concentrate on the footprint
of a procedure instead of the global state.
Put contrapositively, the deletetree
procedure could not be verified without
the frame rule, unless we were to compli-
cate the initial specification by including
some representation of frame axioms
(saying what does not change) to enable
the proofs at the recursive call sites.
This reasoning uses a tree predicate
suitable for reasoning about memory safety; it mentions that we have a
tree, but not what data it holds. For
functional correctness reasoning, it
is typical to use inductive predicates
that connect memory structures to
mathematical entities. In place of tree(root) we could have a predicate tree(τ, root) that says root points to an area of memory representing the mathematical binary tree τ, where a mathematical tree is either empty or an atom or a pair of trees. We could then specify a procedure for copying a tree using a postcondition of the form

    tree(τ, root) ∗ tree(τ, newroot)

that says we have two structures in memory representing the same mathematical tree. An assertion like this would tell
us that we could mutate one of the trees
without affecting the other (at which
point they would cease to represent the
same tree).
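For instance, a copying procedure might be sketched as follows, reusing struct node from the earlier sketch; copytree is a name chosen here for illustration rather than taken from the article's figures, the assumed specification appears as a comment, and any data fields needed for the atom case are elided:

    #include <stdlib.h>

    /* Assumed spec, with tau the mathematical tree represented by root:
       { tree(tau, root) }  newroot = copytree(root)  { tree(tau, root) ∗ tree(tau, newroot) } */
    struct node *copytree(struct node *root) {
        if (root == 0)
            return 0;                        /* empty tree: nothing to copy     */
        struct node *t = malloc(sizeof *t);  /* allocation failure ignored here */
        t->l = copytree(root->l);
        t->r = copytree(root->r);
        return t;
    }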
For data structures without much
sharing, such as variations on lists and
trees, reasoning in SL is reminiscent
of reasoning about purely functional
programs: you unroll an inductive definition, then mutate, then roll it back
up. Inductive definitions using ∗ and mutation go well together. The first SL
proof to address complex sharing was
done by Yang in his Ph.D. thesis, where
he provided a verification of the classic
Schorr-Waite graph-marking algorithm.
The algorithm works by reversing links
during search, and then restoring them
later: A space-saving representation of
the stack of a recursive algorithm. Part
of the main invariant in Yang’s proof has the shape

    listMarked ∗ (listRestored –∗ spanningTree)
capturing the idea that if you replace
the list of marked nodes by a restored
list, then you get a spanning tree. Yang’s
proof reflected the intuition that the algorithm works by a series of local surgeries that mutate small parts of the
structure: The proof decomposed into
verifications of the surgeries, and ways
of combining them.
The idiomatic use of –∗ in assertions of the form A ∗ (B –∗ C) to describe generalized update was elevated to a general principle in work of Hobor and Villard.25
They give proofs of a number of programs with significant sharing, including graphs, dags, and overlaid structures (for example, a list overlaying a tree), culminating in the copying algorithm in Cheney’s garbage collector.
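A minimal instance of the pattern may help fix ideas (the notation [x] := w for mutating the cell that x points to is assumed here for illustration). If the current heap satisfies

    x ↦ v ∗ (x ↦ w –∗ C)

then the small axiom for mutation turns the left conjunct x ↦ v into x ↦ w, the frame rule carries the conjunct x ↦ w –∗ C across unchanged, and x ↦ w ∗ (x ↦ w –∗ C) entails C. The –∗ conjunct thus records “whatever else is needed once the updated piece is supplied,” which is what makes A ∗ (B –∗ C) a description of generalized update.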
Many papers on SL have avoided –∗, often on the grounds that it complicates automation and is only needed for programs with significant sharing. However, –∗ has recently been making something of a comeback. For example, it is used routinely as a basic tool in the Iris higher-order logic.29
Concurrency, Ownership,
and Separation
The concurrency rule in Figure 3 says:
To prove a parallel composition we give
each process a separate piece of state,
and separately combine the postcon-