Upfront Activities and Architecture. Agile advocates building only what is needed now, using refactoring to defer decisions. Refactoring must therefore be cheap and fast, and must limit rework to source code. Our principal weapon in meeting non-functional requirements is system (not just software) architecture, including redundancy and the separation of critical from non-critical elements. Such things can be prohibitively expensive to refactor late in the day. We need just enough upfront architecture work to argue satisfaction of key properties. We also do a "What If?" exercise to ensure the proposed architecture can accommodate anticipated changes.
The MULTOS CA project had some
extraordinary security requirements,
which were met by a carefully considered combination of physical, operational, and computer-based mechanisms. The software design was much
simplified as a result of this whole-system view. The physical measures
included the provision of a bank vault
and enclosing Faraday cage—hardly
items that we could have ignored and
then “refactored in” later.
User Stories and Non-Functional
Requirements. For security and safety,
we must ensure our specification covers all possible inputs and states. Agile uses stories to document requirements, but these sample behavior,
with no completeness guarantee. The
gaps between stories may contain vulnerabilities, bugs, unexpected termination, and undefined behavior. Meyer files user stories under “Bad and
Ugly,” and we agree.
For critical systems, we prefer a
(semi-)formal specification that offers some hope of completeness. The
C130J used Parnas tables to specify
critical functions. They seemed to
work well—they were simple enough
to be understood by system engineers,
yet sufficiently formal to be implemented and analyzed for correctness.
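To illustrate the flavor of a Parnas-style condition table (the function, conditions, and values below are invented for illustration, not taken from the C130J), each row pairs a condition with a result, and a runtime check enforces the property that makes such tables attractive: the rows must cover every input exactly once.

```python
# Sketch of a Parnas-style condition table for a hypothetical
# "mode select" function. Each row pairs a condition with a result.
# A completeness/disjointness check ensures every input falls in
# exactly one row -- the property user stories cannot guarantee.

def mode(altitude_ft: float, gear_down: bool) -> str:
    """Illustrative only -- not a real C130J function."""
    table = [
        (altitude_ft < 500 and gear_down,      "LAND"),
        (altitude_ft < 500 and not gear_down,  "LOW"),
        (altitude_ft >= 500,                   "CRUISE"),
    ]
    hits = [result for cond, result in table if cond]
    # The rows must partition the input space: exactly one condition holds.
    assert len(hits) == 1, "table is incomplete or overlapping"
    return hits[0]
```

In a real development the completeness and disjointness of the rows would be established by static analysis of the table itself, not by a runtime assertion; the assertion here merely makes the obligation visible.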
Sprint Pipeline. Agile usually requires a single active “Sprint,” delivered immediately to the customer, so
only two builds are ever of interest:
• Build N: in operation with the customer; used to report defects.
• Build N+1: the current development sprint.
This assumes the customer is always ready to accept a new build.
Background and Sources
Many consider Agile a recent invention,1 but its roots are much older.
Many of XP’s core practices were well
established long ago—their combination and rigorous practice was novel.
A survey9 notes that both incremental and iterative styles of engineering
were used in the 1950s. Redmill’s work
on evolutionary delivery12 predicted
many of the problems faced by Agile
projects. Boehm2 provides some useful insight, while the development of
MULTOS CA6 compared Correctness-by-Construction with XP, concluding that the two were not such strange bedfellows after all.
Lockheed Martin developed the
Mission Computers for the C130J by
combining semi-formal specification,
strong static verification, iterative development, and a strongly Lean mindset.11 Use of Agile has also been reported,5 while SINTEF have reported success with SafeScrum. A recent and plain-speaking evaluation of Agile comes from Meyer,10 although he does not specifically deal with high-integrity issues.
Agile Assumptions and Issues
How do Agile’s practices and assumptions match real high-integrity projects? Here are some of the most obvious clashes. For each issue, we start
with a brief recap of the practice in
question, then go on to describe the issue or perceived clash, followed by our
ideas and experiences in overcoming
it. Where possible, we close each section with an example of our experience
from the C130J, MULTOS, or iFACTS projects.
Dependence on “Test”
Agile calls for continuous integration, with a regression test suite, and a test-first development style, with each function associated with specific tests. Meyer calls these practices "brilliant" in his summary analysis,10 but Agile assumes that dynamic test is the principal (possibly the only) verification activity, deciding when refactoring is complete, or when the product is good enough to ship.
The safety-critical community hit the limits of testing long ago. Ultra-reliability cannot be claimed from "lots of testing." Security is even more demanding: vulnerabilities such as Heartbleed defy an arbitrarily large amount of testing and use. In high-integrity development, we use diverse forms of verification, including checklist-driven reviews, automated static verification, traceability analysis, and structural coverage analysis.
There is no barrier between these verification techniques and Agile, especially with an automated integration pipeline. We try to use verification techniques that complement, rather than repeat, each other. Where possible, we advocate sound static analyses (tools that find all the bugs of a given class, not just some of them), since this gives greater assurance and reduces pre-test defect density. With careful consideration of the assumptions that underpin the static analysis, we can reduce or entirely remove later testing activities.
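As a toy illustration of what "sound" means here (the class and function names below are ours, not from any real tool), the sketch over-approximates every possible value with an interval, so any genuine division by zero is flagged, at the cost of possible false alarms, and a clean report is a proof of absence for that class of defect.

```python
# Minimal sketch of a *sound* static check: interval arithmetic that
# over-approximates all possible runs, so a real division-by-zero can
# raise a warning but can never be silently missed.

class Interval:
    def __init__(self, lo: int, hi: int):
        self.lo, self.hi = lo, hi

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other: "Interval") -> "Interval":
        # Subtracting the other bound's extremes keeps the result sound.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def may_be_zero(self) -> bool:
        return self.lo <= 0 <= self.hi

def check_division(numerator: Interval, divisor: Interval) -> list:
    """Return warnings; an empty list means the division is provably safe."""
    if divisor.may_be_zero():
        return ["possible division by zero"]
    return []

# x in [1, 10], y in [0, 5]: y - x lies in [-10, 4], which contains 0,
# so dividing by it is flagged; dividing by x ([1, 10]) is proved safe.
x, y = Interval(1, 10), Interval(0, 5)
warnings = check_division(x, y - x)
safe = check_division(y, x)
```

Note the asymmetry that motivates the "complement, not repeat" point above: the empty result is a guarantee that no test campaign could deliver, while the warning is only a prompt for review or targeted testing.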
The NATS iFACTS system4 augments
the software tools available to air-traffic controllers in the U.K. It supplies
electronic flight-strip management,
trajectory prediction, and medium-term conflict detection for the U.K.’s
en-route airspace, giving controllers
substantially improved ability to plan
ahead and predict potential loss-of-separation in a sector. The developers
precede commit, build, and testing
activities with static analysis using the
SPARK toolset. Overnight, the integration server rebuilds an entire proof of
the software, populating a persistent
cache, accessible to all developers the
next morning. Working on an isolated
change, the developers can reproduce the proof of the entire system in
about 15 minutes on their desktop machines, or in a matter of seconds for a
change to a single module. While Agile projects might have a "don't break the tests" mantra, on iFACTS it's "don't break the proof (or the tests)."
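The persistent proof cache can be sketched as a memo table keyed by a hash of each module's source, so an isolated change re-verifies only what actually changed. The prover stand-in, module names, and sources below are invented for illustration and bear no relation to the real SPARK toolchain.

```python
# Sketch of a content-keyed proof cache: overnight, every module is
# proved and the result stored under a hash of its source; the next
# morning, an isolated change re-runs the prover only for the module
# whose text differs. "verify" is a toy stand-in for a real prover.

import hashlib

cache = {}        # hash of module source -> proof result
prover_runs = 0   # count how often the (expensive) "prover" runs

def verify(source: str) -> bool:
    global prover_runs
    prover_runs += 1
    return "unsafe" not in source   # toy stand-in for a real proof

def verify_cached(source: str) -> bool:
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in cache:            # miss: run the prover, remember result
        cache[key] = verify(source)
    return cache[key]

modules = {"nav": "nav source", "strips": "strips source",
           "conflict": "conflict source v1"}

# Overnight build: all three modules are proved (three prover runs).
overnight = {name: verify_cached(src) for name, src in modules.items()}

# Next morning: one module changed, so only it is re-proved (one run).
modules["conflict"] = "conflict source v2"
morning = {name: verify_cached(src) for name, src in modules.items()}
```

The design choice worth noting is that the cache key is the content hash, not the module name or timestamp, so reverting a change is as cheap as a cache hit.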