in industry. A senior director from
IBM at Hursley came up to me after
the lecture and invited me to put
my commitment where my mouth
was, and do something in collaboration with IBM. That made me gulp
a bit, because I had the impression
that IBM had produced some pretty
complicated software, and this really would be a challenge. There was
a good chance that we would fall flat
on our faces. But you can’t turn down
an opportunity like that. So some
colleagues and I set to work, and
actually produced some very useful
analyses for them using the Z notation of Jean-Raymond Abrial to help
them with an ongoing project for
restructuring and rewriting parts of
their popular Customer Information
Control System, CICS. That work
eventually led to a Queen’s Award for
Technological Achievement.
The Transputer was another example of a very practical application of your theoretical ideas.
The INMOS Transputer was an embodiment of the ideas that I described
earlier, of building microprocessors
that could communicate with each
other along wires that would stretch
between their terminals. The founder
had the vision that the CSP ideas were
ripe for industrial exploitation, and he
made that the basis of the language for
programming Transputers, which was
called occam.
When they came to develop the
second version of their Transputer
that had a floating-point unit attached, my colleagues Bill Roscoe
and Jeff Barrett at Oxford actually
used formal models of communicating processes, and techniques of program verification, to check that their
designs for the implementation of
the IEEE floating-point specification
were in fact correct. The company
estimated it enabled them to deliver
the hardware one year earlier than
would otherwise have happened.
They applied for and won a Queen’s
Award for Technological Achievement, in conjunction with Oxford
University Computing Laboratory. It still stands
out as one of the first applications of
formal methods to hardware design.
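The CSP style of communication that underpinned the Transputer and occam can be sketched in modern terms with threads passing values over channels. The following is our illustrative sketch, not occam itself: the names (`producer`, `doubler`) are invented for the example, and Python's `queue.Queue` with `maxsize=1` only approximates occam's unbuffered rendezvous channels.

```python
import queue
import threading

# A CSP-style pipeline sketched with threads and blocking queues
# standing in for occam channels (illustrative only; a Queue with
# maxsize=1 approximates, but is not identical to, an unbuffered
# occam channel).

def producer(out_ch: queue.Queue) -> None:
    """Send the numbers 0..4 down the channel, then a sentinel."""
    for i in range(5):
        out_ch.put(i)      # channel output, like occam's  ch ! i
    out_ch.put(None)       # sentinel: end of stream

def doubler(in_ch: queue.Queue, out_ch: queue.Queue) -> None:
    """Receive values, double them, and pass them on."""
    while True:
        v = in_ch.get()    # channel input, like occam's  ch ? v
        if v is None:
            out_ch.put(None)
            return
        out_ch.put(2 * v)

def run_pipeline() -> list:
    """Wire producer -> doubler -> main over two channels."""
    a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
    t1 = threading.Thread(target=producer, args=(a,))
    t2 = threading.Thread(target=doubler, args=(a, b))
    t1.start()
    t2.start()
    results = []
    while (v := b.get()) is not None:
        results.append(v)
    t1.join()
    t2.join()
    return results
```

The key CSP idea survives the translation: the two workers share no variables at all and interact only by sending values along the channels that connect them.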
What projects are you working on at Microsoft?
One of them is the pursuit of my lifetime goal of verification of programs.
I was very interested in the ideas of assertions, which had been put forward
by Bob Floyd, and even earlier: already in
1947 the idea of an assertion was described by Alan Turing in a lecture he
gave to the London Mathematical Society. This idea of assertions lies at the
very basis of proving programs correct,
and at the very basis of my ideas for defining the semantics of programming
languages. I already knew when I entered academic life way back in 1968
that these ideas would be unlikely
to find commercial exploitation, really, throughout my academic career.
I could look forward to 30 years of research uninterrupted by anybody who
actually wanted to apply its results.
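As a minimal illustration of what an assertion is (ours, not from the interview), the sketch below annotates a small function with a runtime-checked precondition, loop invariant, and postcondition in Python:

```python
# Floyd/Turing-style assertions as runtime checks: the precondition,
# loop invariant, and postcondition together document (and test) a
# claim of correctness about the function.

def integer_sqrt(n: int) -> int:
    """Return the largest r with r*r <= n, for n >= 0."""
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        # loop invariant: r*r <= n holds on every iteration
        assert r * r <= n
        r += 1
    # postcondition: r is the integer square root of n
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```

In full program verification these assertions would be discharged by proof rather than checked at runtime, but the runtime form already captures the idea: each assertion states what must be true at that point in the program.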
I thought that when I retired,
it would be very interesting to see
whether the positive side of my prediction would also come true, that these
ideas would begin to be applied. And
indeed, even since I’ve joined Microsoft in 1999, I’ve seen quite a bit of expansion in their use of assertions and
other techniques for improving confidence in the reliability of programs.
There are program analysis tools now
that stop far short of actually proving
correctness of programs, but they’re
very good at detecting certain kinds of
errors. Some quite dangerous errors,
which make the software vulnerable
to intrusions and virus attacks, have
been detected and removed as a result
of the use of formal techniques of program analysis.
The idea of verifying computer
programs is still an idea for the future, although there are beginnings
of using scientific technology to make
the programs more reliable. The full
functional verification of a computer
program against formally specified requirements is still something we have
to look forward to in the future. But
the progress that we’ve made in recent years has really been quite spectacular.
My other project is concurrency.
I’ve set myself the challenge of understanding and formalizing methods for
programming concurrent programs
where the programs actually share the
store of the same computer, rather
than being executed on distinct computers, as they are in the communicating sequential processes architecture.
Again, the motivation for studying this
different form of concurrency is the
advance of hardware technology: it
now appears that the only way in which
processors can get faster is for them to
include more processing units on the
same chip, and share the same store.
I’d always felt that parallel programs
that actually shared main memory, and
could interleave their actions at a very
fine level of granularity—just a single
memory access—were far too difficult
for me. I could see no real prospect
of working out a theory that would
help people to write correct programs
to exploit this capability. I thought it
would be interesting to try again, and
see whether the experience in formalization that has been built up over the
last 20 or 30 years could be applied effectively to this extremely complicated
form of programming.
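The difficulty described here, namely interleaving at the granularity of a single memory access, can be made concrete with a small sketch (ours, not from the interview). An innocent-looking `counter += 1` is really a load, an add, and a store, and two threads interleaving those steps can lose updates unless the sequence is serialised by a lock:

```python
import threading

# Sketch of shared-store concurrency: several threads increment one
# shared counter. Without the lock, `counter += 1` (load, add, store)
# may interleave with another thread's update and lose increments;
# with the lock, the three steps run as one atomic unit.

def count(n_threads: int = 4, n_incr: int = 10_000,
          locked: bool = True) -> int:
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n_incr):
            if locked:
                with lock:          # serialise load-add-store
                    counter += 1
            else:
                counter += 1        # racy: updates may be lost

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

With `locked=True` the result is always `n_threads * n_incr`; with `locked=False` the result is nondeterministic, which is exactly what makes reasoning about such programs, and building a theory of them, so hard.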
What is on the horizon for computer science?
I expect the future to be as wonderful as
the past has been. There’s still an enormous amount of interesting work to
do. As far as the fundamental science
is concerned, we still certainly do not
know how to prove programs correct.
We need a lot of steady progress in this
area, which one can foresee, and a lot
of breakthroughs where people suddenly find there’s a simple way to do
something that everybody hitherto has
thought to be far too difficult.
Edited by Len Shustek, chair, Computer History Museum, Mountain View, CA.