back sessions between employees and supervisors simply to force managers and their direct reports to engage in regular feedback sessions and to develop feedback skills. Though measurement was touted as the purpose of the program, in reality it was much less significant than the process.
At the core of systems development, there are some truly basic attributes that we need to measure. These attributes were summed up by Larry Putnam, Sr. and Ware Myers⁴ as:
˲ Size: usually of the delivered system in some useful metric. While few people actually care how "big" a system is, size is actually a proxy for the knowledge content, which is itself a proxy for the customer-realized value of the system.
˲ Productivity: how effective the organization is in turning resources of time and effort/cost into valuable software products.
˲ Time: how much calendar time is required to build the system/deliver the value.
˲ Effort/Cost: the amount of work and money required to deliver the value.
˲ Reliability/Quality: the relative amount of functioning (read "correct" knowledge) system versus non-functioning ("incorrect" knowledge) system.
As simple as they are, there are challenges to measurement even for these basic attributes.
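These five attributes fit in a small record, and the derived notions follow from them. A minimal sketch; the units (stories, person-months, calendar months) and the defect-ratio form of reliability are illustrative assumptions, not taken from the text:

```python
from dataclasses import dataclass

@dataclass
class ProjectMeasures:
    # Units are illustrative assumptions: size in stories,
    # effort in person-months, time in calendar months.
    size: float     # delivered size (proxy for knowledge content)
    effort: float   # work/cost expended
    time: float     # calendar time required
    defects: int    # non-functioning ("incorrect" knowledge) units found
    checked: int    # total units examined

    def productivity(self) -> float:
        """Size delivered per unit of effort."""
        return self.size / self.effort

    def reliability(self) -> float:
        """Fraction of the system that is functioning ("correct" knowledge)."""
        return 1 - self.defects / self.checked

p = ProjectMeasures(size=120, effort=40, time=8, defects=6, checked=300)
print(round(p.productivity(), 2))  # 3.0
print(round(p.reliability(), 2))   # 0.98
```

The point of the record is only that the first four attributes are direct measurements, while productivity and reliability are ratios derived from them.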
Size: Measuring Knowledge
At the core of all software size metrics is this uncomfortable truth: what we really want to measure is the knowledge content of the delivered software, but there is no way to do that. There is no empirical way of quantifying knowledge. There is no unit for knowledge. There is not even a consistent definition of what knowledge is. However, there are proxies for the knowledge content that are quite measurable. They are always related to the substrate or medium on which knowledge is deposited, and we inevitably end up measuring the physical properties of this substrate.

All else being equal, a system that is twice as "big" as another system will contain more knowledge: approximately twice as much, in fact. This size might be counted using many different units: requirements, stories, use cases, and so forth. Each of these units has itself some average knowledge content or "size," coupled with some variability or uncertainty in that knowledge content. The size of the unit times the number of units indicates the size of the system. The uncertainty in the size of the unit times the uncertainty in the count of the units indicates the uncertainty in the size of the system.
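The last two sentences are simple arithmetic. A minimal sketch, assuming uncertainties are expressed as multiplicative factors (a factor of 1.25 means the true value is believed to lie between the estimate divided by 1.25 and the estimate times 1.25); the story counts and factors below are hypothetical:

```python
def system_size(unit_size: float, n_units: float) -> float:
    # Point estimate: average knowledge "size" per unit times the unit count.
    return unit_size * n_units

def size_uncertainty(unit_size_unc: float, n_units_unc: float) -> float:
    # The article's rule: uncertainty in the size of the unit times
    # uncertainty in the count of the units gives the uncertainty in
    # the size of the system. Treating both as multiplicative factors
    # is an assumption of this sketch.
    return unit_size_unc * n_units_unc

# Hypothetical example: about 200 user stories averaging 3 size units each;
# per-story size known within a factor of 1.25, story count within 1.4.
estimate = system_size(3, 200)                    # 600
factor = size_uncertainty(1.25, 1.4)              # 1.75
low, high = estimate / factor, estimate * factor  # roughly 343 to 1050
print(estimate, round(factor, 2))
```

Note how quickly the range widens: two modest per-unit uncertainties compound into a roughly threefold spread in the system-size estimate.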
Productivity is a loaded term: it sounds like a manufacturing units-per-hour metric, but in software it really references the team's or organization's cooperative ability to acquire knowledge, to learn. It is this factor, more than any other, that determines how effectively we can build systems. We can usually see
when a project team is effective or inef-