plying it to a mission-critical problem
and getting a lot of people up to speed
just as fast as possible—that was really
something.
BINDER: What did these documents contain, and what were they intended to convey?
GRIESKAMP: They’re actually similar to the RFCs (requests for comments) used to describe Internet protocol standards, and they include descriptions
of the data messages sent by the protocol over the wire. They also contain
descriptions of the protocol behaviors
that should surface whenever data is
sent—that is, how some internal data
states ought to be updated and the sequence in which that is expected to occur. Toward that end, these documents
follow a pretty strict template, which is
to say they have a very regular structure.
BINDER: How did your testing approach compare with the techniques typically used to verify specifications?
GRIESKAMP: When it comes to testing one of these documents, you end up testing each normative statement it contains. That means making sure each testable normative statement conforms to what the existing Microsoft implementation of that protocol actually does. So if the document says the server should do X, but you find the actual server implementation does Y, there’s obviously a problem.
In our case, for the most part, that
would mean we’ve got a problem in
the document, since the implementation—right or wrong—has already
been out in the field for some time.
That’s completely different from the
approach typically taken, where you
would test the software against the
spec before deploying it.
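To make that concrete, here is a minimal Python sketch of what checking a single testable normative statement against a deployed implementation might look like. The statement, the message fields, and the server stub are hypothetical stand-ins invented for illustration, not the team’s actual harness.

    # Hypothetical sketch only: checking one testable normative statement
    # against a deployed implementation. Message fields, status codes, and
    # the server stub are invented for illustration.

    def server_response(request):
        # Stand-in for the deployed server; a real harness would send the
        # request over the wire and capture the actual reply.
        if request.get("handle") is None:
            return "STATUS_INVALID_PARAMETER"
        return "STATUS_SUCCESS"

    def check_statement():
        # Hypothetical normative statement: "If the handle field is not
        # set, the server MUST return STATUS_INVALID_PARAMETER."
        observed = server_response({"handle": None})
        expected = "STATUS_INVALID_PARAMETER"
        if observed != expected:
            # A mismatch here most often signals a documentation bug,
            # since the implementation has long been in the field.
            print("Mismatch: expected", expected, "observed", observed)
        else:
            print("Statement conforms to the implementation.")

    check_statement()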
BINDER: Generally speaking, a protocol refers to a data-formatting standard and some rules regarding how the messages following those formats ought to be sequenced, but I think the protocols we’re talking about here go a little beyond that. In that context, can you explain more about the protocols involved?
GRIESKAMP: We’re talking about network communication protocols that apply to traffic sent over network connections. Beyond the data packets themselves, those protocols include many rules governing the interactions between client and server—for example, how the server should respond whenever the client sends the wrong message.
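A minimal Python sketch of the kind of interaction rule Grieskamp describes, with invented state and message names, might look like this:

    # Hypothetical sketch only: one interaction rule of the kind described
    # above, where a message that is invalid in the current protocol state
    # is rejected rather than processed. State and message names are
    # illustrative.

    EXPECTED_NEXT = {
        "START": {"NEGOTIATE"},           # the client must negotiate first
        "NEGOTIATED": {"SESSION_SETUP"},  # then set up a session
        "SESSION": {"READ", "WRITE", "CLOSE"},
    }

    def server_reaction(state, message):
        # The rule: respond with an error to any out-of-place message.
        if message in EXPECTED_NEXT.get(state, set()):
            return "OK"
        return "ERROR_UNEXPECTED_MESSAGE"

    print(server_reaction("START", "READ"))       # ERROR_UNEXPECTED_MESSAGE
    print(server_reaction("START", "NEGOTIATE"))  # OK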
Because of the project’s unique constraints, the protocol documentation team needed to find a testing methodology well suited to their problem. Early efforts focused on collecting data from real interactions between systems and then filtering that information to compare the behaviors of systems under test with those described in the protocol documentation. The problem with this approach was that it was a bit like boiling the ocean. Astronomical amounts of data had to be collected and sifted through to obtain enough information to thoroughly cover all the possible protocol states and behaviors described in the documentation—bearing in mind that this arduous process would then have to be repeated for more than 250 protocols altogether.
Eventually the team, in consultation
with the U.S. Technical Committee responsible for overseeing their efforts,
began to consider model-based testing. In contrast to traditional forms of
testing, model-based testing involves
generating automated tests from an
accurate model of the system under