For code, these features include problematic idioms, the types of false positives encountered, the distance of a dialect from a language standard, and the way the build works. For developers, variations appear in raw ability, knowledge, the amount they care about bugs, false positives, and the types of both. A given company won't deviate in all these features but, given the number of features to choose from, often includes at least one weird oddity. Weird is not good. Tools want expected. Expected you can tune a tool to handle; surprise interacts badly with tuning assumptions.

Second, in the lab the user's values, knowledge, and incentives are those of the tool builder, since the user and the builder are the same person. Deployment leads to severe fission; users often have little understanding of the tool and little interest in helping develop it (for reasons ranging from simple skepticism to perverse reward incentives) and typically label any error message they find confusing as false. A tool that works well under these constraints looks very different from one tool builders design for themselves.

Such champions make sales as easily as their antithesis blocks them. However, since their main requirements tend to be technical (the tool must work), the reader likely sees how to make them happy, so we rarely discuss them here.
tool after buying it. The trial is a pre-sale demonstration that attempts to show that the tool works well on a potential customer's code. We generally ship a salesperson and an engineer to the customer's site. The engineer configures the tool, runs it over a given code base, and presents the results soon after. Initially, the checking run would happen