TC: Since you say this tooling for
code reviews is something everybody
at Microsoft now uses, can you give us
a brief description of the features it offers and how you think those compare
with what is available to most people
outside of Microsoft?
JC: Well, we’re talking now about
things we did with our tool [called CodeFlow] a few years ago, and tooling has
a way of converging out in the world at
large over that much time. So, some of
the changes we made back then might
now seem fairly obvious to people who
are using other code-review tools that
have since come to work in much the
same way.
The brief summary is that we made
a number of changes to fine-tune the underlying subsystem. We also trained
the tool to be super-precise in terms
of tracking changes as people move
through numerous software iterations.
That is, as you move from one revision
to the next, you can imagine that your
code changes end up moving around
as some code gets deleted, some new
lines are added, and chunks of code
are shuffled around. That can throw
your comment tracking severely out
of sync with what you had once intended. Overcoming that took work,
but we now know from feedback that
it’s greatly appreciated and thus well
worth the effort.
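To make the tracking problem concrete, here is a minimal sketch of the idea; this is not CodeFlow's actual algorithm, and the function name and sample data are invented for illustration. It uses Python's standard difflib to diff two revisions and move a comment's line anchor along with the block of code it was attached to:

    import difflib

    def track_comment_line(old_lines, new_lines, comment_line):
        # Map a 0-based line index in the old revision to the new
        # revision. Returns the new index, or None if the commented
        # line was deleted or rewritten. CodeFlow's real tracking is
        # more sophisticated, but the principle is the same.
        matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if i1 <= comment_line < i2:
                if tag == "equal":
                    # The surrounding block survived intact; shift
                    # the anchor by the block's displacement.
                    return j1 + (comment_line - i1)
                return None  # 'replace'/'delete': anchor line is gone
        return None

    old = ["def add(a, b):", "    return a + b", "", "print(add(1, 2))"]
    new = ["import sys", ""] + old  # two lines inserted at the top

    # A comment left on "return a + b" (index 1) follows it to index 3.
    print(track_comment_line(old, new, 1))  # -> 3

In practice a tracker also needs a fallback, such as fuzzy-matching the commented text, for when the exact block no longer exists.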
Another thing we focused on was
performance. For that reason, even
today CodeFlow remains a tool that
works client-side, meaning you can
download your change first and then [...]

TC: [...] valuable, did they also let you know what else they wanted?
CB: What people wanted for the
most part was the ability to do their
own tracking, along with a way to look
at how they were doing in comparison to other teams. We came up with
metrics that align with some of the
targets teams at Microsoft have for
what they want to achieve at different
points in the software development
process. For example, they would
want to know if they were on track for
getting a commit into master within
a month. Or they would want to see if
they were well on their way to achieving 80% test coverage.
Similarly, for code review some
teams had targets, while others did not
since they didn’t have a way to measure that. So, they might decide that
at least two people should sign off on
every code review and that each review
would have to be completed within a
24-hour period. Until we started collecting the data around code reviews,
analyzing it, and then making it more
generally available, teams had no way
of measuring that. Yet they wanted to
be able to do that since they were already measuring other parts of their
development process. As a consequence, people started coming to tell
us what metrics they would find useful.
Then we would just add those to metrics we were already collecting. It turns
out that much of our effort was actually
driven by what the development teams
themselves were telling us they wanted
to be able to measure.
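As an illustration of the kind of measurement being described, here is a small sketch. The two-sign-off and 24-hour targets come from the example above; the Review record shape, the function name, and the sample data are assumptions made up for the sketch:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Review:
        created: datetime
        completed: datetime
        signoffs: int

    def target_compliance(reviews,
                          min_signoffs=2,
                          max_duration=timedelta(hours=24)):
        # Fraction of reviews meeting both targets mentioned above:
        # at least two sign-offs, completed within 24 hours.
        if not reviews:
            return 0.0
        met = sum(1 for r in reviews
                  if r.signoffs >= min_signoffs
                  and r.completed - r.created <= max_duration)
        return met / len(reviews)

    reviews = [
        Review(datetime(2018, 5, 1, 9), datetime(2018, 5, 1, 17), signoffs=2),
        Review(datetime(2018, 5, 2, 9), datetime(2018, 5, 4, 9), signoffs=3),
    ]
    print(f"{target_compliance(reviews):.0%}")  # -> 50%: second review took 48 hours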
JACEK CZERWONKA
One of the most interesting things to surface from instrumenting CodeFlow was just how much time people were actively spending in the review tool.