test. In this case, the system under test
would not be an entire software system
but rather just the protocols described
in the documentation, meaning the
team could focus on modeling the protocols’ state and behavior and then target the resulting tests at just those levels of the stack that were of interest for testing purposes.
A team at Microsoft Research had
been experimenting with model-based
testing since 2002 and had applied it
successfully, albeit on a much smaller
scale, to a variety of testing situations—
including the testing of protocols for
Microsoft’s Web Services implementation. In the course of those initial
efforts, the Microsoft Research team
had already managed to tackle some
of the thorniest concerns, such as for
the handling of nondeterminism. They
also had managed to create a testing
tool, Spec Explorer, which would prove
to be invaluable to the Winterop team.
BINDER: Please say a little about how you
came to settle on model-based testing
as an appropriate testing methodology.
GRIESKAMP: In looking at the problem from the outset, it was clear it was
going to be something huge that required lots of time and resources. Our
challenge was to find a smart technology that would help us achieve quality
results while also letting us optimize
our use of resources. A number of
people, including some of the folks on
the Technical Committee, suggested
model-based testing as a promising
technology we should consider. All of
that took place before either Nico or I
joined the team.
The team then looked around to
find some experts in model-based test-
ing, and it turned out we already had
a few in Microsoft Research. That led
to some discussions about a few test
cases in which model-based testing
had been employed and the potential the technology might hold for this
particular project. One of those test
cases had to do with the SMB (Server
Message Block) file-sharing protocol.
The results were impressive enough to
make people think that perhaps we really should move forward with model-based testing. That’s when some of us with model-based testing experience ended up being brought over from Microsoft Research to help with the validation effort.
Microsoft encountered challenges
because of its choice to adopt model-based testing for the project. On the
one hand, the technology and methodology Microsoft Research had developed seemed to fit perfectly with
the problem of testing protocol documents. On the other hand, it was an
immature technology that presented
a steep learning curve. Nonetheless,
with the support of the Technical Committee, the team decided to move forward with a plan to quickly develop the
technology from Microsoft Research
into something suitable for a production-testing environment.
Not surprisingly, this did not prove
easy. In addition to the ordinary setbacks that might be expected to crop
up with any software engineering project on an extremely tight deadline, the
Microsoft protocol documentation
team faced the challenge of training
hundreds of test developers in China
and India on the basics of a new, unfamiliar testing methodology.
Even after they had a cadre of well-trained testers in place, many hurdles
still remained. While the tool-engineering team faced the pressure of stabilizing and essentially productizing the Spec Explorer software at
breakneck speed, the testing team had
to start slogging through hundreds
of documents, extracting normative
statements, building requirements
specifications, and constructing models to generate automated test suites.
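To give a rough sense of what constructing such models involves, here is a minimal sketch, in Python rather than the .NET-based Spec Explorer tooling the team actually used. It treats a toy protocol session as a set of guarded actions and explores the enabled action sequences up to a bounded depth to produce candidate test sequences; the protocol behavior, names, and exploration strategy are illustrative assumptions, not the project’s actual models or APIs.

```python
# Toy illustration of model-based test generation (not Spec Explorer itself).
# A model is a state plus guarded actions; exploring the enabled actions up to
# a bounded depth yields sequences that could drive an automated test suite.

class SessionModel:
    """Tiny model of a session that must be connected before requests are sent."""

    def __init__(self, connected=False, pending=0):
        self.connected = connected   # is the session open?
        self.pending = pending       # requests sent but not yet completed

    # Each action returns the successor state, or None if its guard fails.
    def connect(self):
        return None if self.connected else SessionModel(True, 0)

    def send_request(self):
        if not self.connected or self.pending >= 2:
            return None
        return SessionModel(True, self.pending + 1)

    def disconnect(self):
        if not self.connected or self.pending > 0:
            return None
        return SessionModel(False, 0)


def explore(max_depth=4):
    """Enumerate action sequences reachable from the initial state (bounded DFS)."""
    actions = ["connect", "send_request", "disconnect"]
    sequences = []

    def dfs(state, trace, depth):
        if depth == max_depth:
            sequences.append(trace)
            return
        for name in actions:
            successor = getattr(state, name)()
            if successor is not None:
                dfs(successor, trace + [name], depth + 1)

    dfs(SessionModel(), [], 0)
    return sequences


if __name__ == "__main__":
    for seq in explore():
        print(" -> ".join(seq))
```

In the real effort, the models encoded the normative statements extracted from the documentation, and the exploration was far more selective than this brute-force enumeration; the sketch is only meant to show how a behavioral model can mechanically yield test sequences.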
Although Spec Explorer provides a way
to automate tests, there still were several important steps in the process that
required human judgment. These ar-