their view or to make the project more
challenging. Coman and Ronen4 estimated that the workforce time wasted on these phenomena exceeds 30%. What is needed is a change in attitude. Managers and programmers alike, in and out of IT, are responsible for challenging functionality and features they view as over-required, over-specified, or overdesigned. These issues should be raised multiple times
during the delivery cycle. For example, during the kick-off meeting, over-requirements should be identified
and eliminated from the scope of the
system to be developed. In team meetings concerning risk management,
control gates, and design reviews,
over-requirement, over-specification,
and overdesign should likewise be
identified and eliminated.
25/25 practice. The 25/25 rule says management should attempt to discontinue work on approximately 25% of the projects in the
pipeline. In the remaining projects,
unneeded or over-required features
(approximately 25%) should be removed. All software solutions in the
pipeline should be examined on a
quarterly basis by top management
through the focusing matrix. The business situation might have changed, and value creation might have decreased significantly. Likewise, for some software solutions the remaining delivery
costs end up being much higher than
expected. In such cases, where projects lose their value-creation potential, top management must stop project delivery, disregarding the “sunk
costs” already invested in them. In
some organizations, many projects
can be eliminated this way, freeing up
to 25% of the IT division’s budget or
capacity.
Similarly, the team must scrutinize the most complicated and costly software
solutions remaining in the pipeline to
detect over-requirements, over-specifications, and overdesigns and eliminate them. This practice removes unnecessary functionality and features,
reducing the cost of delivering these
solutions by up to 25%.
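To illustrate the arithmetic of such a quarterly review, here is a minimal sketch in Python. It assumes each pipeline project carries current estimates of its remaining value creation and remaining delivery cost (hypothetical inputs); the real review relies on the focusing matrix and managerial judgment, so this only shows the bookkeeping of flagging the weakest quarter of the pipeline.

```python
# A minimal sketch (not from the article) of the quarterly 25/25 review's
# arithmetic. Each project is a (name, remaining_value, remaining_cost)
# tuple with hypothetical, up-to-date estimates.
def quarterly_stop_candidates(projects, share=0.25):
    """Return names of roughly the weakest `share` of the pipeline, ranked
    by remaining value creation per unit of remaining delivery cost.
    These are candidates for top management to stop, regardless of sunk costs."""
    ranked = sorted(projects, key=lambda p: p[1] / p[2])  # lowest ratio first
    cutoff = max(1, round(len(projects) * share))
    return [name for name, _, _ in ranked[:cutoff]]

pipeline = [("CRM rewrite", 2.0, 4.0), ("Billing portal", 6.0, 1.5),
            ("Data lake", 3.0, 3.0), ("Mobile app", 5.0, 2.0)]
print(quarterly_stop_candidates(pipeline))  # ['CRM rewrite']
```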
Split large and risky solutions
among releases. In organizations
where software is launched in periodic (such as quarterly) releases, we recommend refraining from developing and implementing large software solutions in a single release, especially when delivery involves significant business and technical uncertainty. Splitting the software solution, if possible, across two or three consecutive releases ensures the requesting division gains a better understanding of its needs and solution requirements; technical risk is also reduced. This practice yields a better fit with business needs while reducing the hazard of rework due to changes in requirements and the probability of eventual neglect of the software solution.
Performance measurement. Systematic measurement of performance is a powerful, proven means to enhance performance, including avoidance of waste. However, to be effective, performance measures must be properly linked to the goal of value creation.
Performance measures are not supposed to be “perfect” or “scientific” or to cover all extreme cases. Defining perfect measures is generally difficult or impossible. The main purpose of performance measures is to enable improvement over time. Measures that are not perfect yet make sense and are measured consistently over time are good enough. Though other performance measures can be added, as long as they are in line with the organization’s overall business goal, we suggest a basic set of seven periodic performance measures covering most operational aspects of IT:
Throughput of IT division. T = total estimated value creation of software solutions delivered during the measurement period;
Productivity of IT division. Prod = the amount of CR-equivalent units developed during the measurement period; a possible definition is the total of {number of large software solutions multiplied by 9 + number of medium-size software solutions multiplied by 3 + number of change requests} delivered during the measurement period;
Operating expenses. OE = TCO expenses for IT during the measurement period;
Work in process. WIP1 = number of software solutions open in development at the measurement instance; WIP2 = average number of released activities per developer in the IT division at the measurement instance;
Lead time. LT = average time span from requirements introduction until delivery for all large software solutions during the measurement period;
Quality. Q1 = number of critical defects detected during the first six months following delivery for all software solutions; Q2 = average scope stabilityf of all software solutions delivered during the measurement period; and
Due-date performance. DDP1 = percentage of software solutions delivered on time during the measurement period; DDP2 = percentage of development activities delivered on time during the measurement period.
These measures are solid performance indicators and useful in creating an effective incentives system for the head and managers of the IT division.
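To make the bookkeeping concrete, the following sketch shows how such a measurement period might be summarized. The record layout, field names, and the 0–1 scale for scope stability are assumptions for illustration only, not definitions from the text; DDP2 is omitted because it would require per-activity records.

```python
# A minimal sketch, assuming hypothetical per-period delivery records.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Delivery:
    size: str               # "large", "medium", or "cr" (change request)
    value: float            # estimated value creation of the solution
    lead_time_days: float   # requirements introduction until delivery
    on_time: bool
    critical_defects_6m: int
    scope_stability: float  # assumed here as a 0..1 stability score

def period_measures(deliveries, tco_expenses, open_solutions,
                    released_activities, developers):
    large = sum(d.size == "large" for d in deliveries)
    medium = sum(d.size == "medium" for d in deliveries)
    crs = sum(d.size == "cr" for d in deliveries)
    large_lead_times = [d.lead_time_days for d in deliveries if d.size == "large"]
    return {
        "T":    sum(d.value for d in deliveries),      # throughput
        "Prod": large * 9 + medium * 3 + crs,          # CR-equivalent units
        "OE":   tco_expenses,                          # operating expenses
        "WIP1": open_solutions,
        "WIP2": released_activities / developers,
        "LT":   mean(large_lead_times) if large_lead_times else 0.0,
        "Q1":   sum(d.critical_defects_6m for d in deliveries),
        "Q2":   mean(d.scope_stability for d in deliveries),
        "DDP1": 100.0 * mean(d.on_time for d in deliveries),
    }
```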
Short lead times. Over-requirements and unnecessary changes in scope that stem from long lead times can be prevented by substantially shortening project lead times through practices we discuss later.
Net value creation for scope-change requests. To prevent the introduction of unnecessary scope-change requests, the approval criterion is the existence of positive net value creation, as discussed earlier.
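As a sketch of that gate, assuming net value creation is estimated simply as the value added by the change minus its delivery cost (both hypothetical inputs supplied by the reviewers):

```python
# Approve a scope-change request only if it creates positive net value.
def approve_scope_change(estimated_value_added: float,
                         estimated_change_cost: float) -> bool:
    return estimated_value_added - estimated_change_cost > 0

approve_scope_change(120_000, 80_000)  # True: net value creation is positive
approve_scope_change(40_000, 55_000)   # False: reject the request
```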
These seven remedies help avoid waste within an IT division. To understand their potential for productivity enhancement, consider the following example IT division whose professionals experience approximately 50% ineffective time (Figure 4).
f Scope stability reflects the extent of changes
introduced into the scope definition of the
software solution.
Figure 4. Effective vs. ineffective time (before improvement): effective time 50%, ineffective time 50%.