tion subject to linear constraints (Figure 1). Dantzig's simplex method is an algorithm from the 1940s that solves linear programs using greedy local search on the vertices of the solution set boundary, and variants of it remain in wide use to this day.

The enduring appeal of the simplex method stems from its consistently superb performance in practice. Its running time typically scales modestly with the input size, and it routinely solves linear programs with millions of decision variables and constraints. This robust empirical performance suggested the simplex method might well solve every linear program in a polynomial amount of time.
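The greedy local search described above can be sketched in a few dozen lines. The toy tableau implementation below uses Dantzig's classic pivot rule (enter the variable with the most negative reduced cost) and assumes a problem of the form maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the origin is a feasible starting vertex; the example LP is my own illustration, not one from the article, and real solvers add anti-cycling safeguards and a phase-one procedure that this sketch omits.

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0 so the
    origin is a feasible starting vertex. Dantzig's pivot rule; no
    anti-cycling safeguard (a real solver would add one)."""
    m, n = len(A), len(c)
    # Tableau with slack variables: [A | I | b]; objective row last.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))
    while True:
        obj = T[-1]
        # Dantzig's rule: most negative reduced cost enters the basis.
        col = min(range(n + m), key=lambda j: obj[j])
        if obj[col] >= -1e-9:
            break  # no improving direction: optimal vertex reached
        # Ratio test picks the leaving row so we stay feasible.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("linear program is unbounded")
        _, row = min(ratios)
        # Pivot: move to the adjacent vertex of the feasible polytope.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]


# Example: maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x, y >= 0.
x, value = simplex(c=[1.0, 1.0],
                   A=[[1.0, 2.0], [3.0, 1.0]],
                   b=[4.0, 6.0])
print(x, value)  # optimum at vertex (1.6, 1.2) with objective 2.8
```

Each pivot walks to an adjacent, objective-improving vertex; on typical inputs the number of pivots is small, which is exactly the empirical behavior the article describes.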
In 1972, Klee and Minty showed by example that there are contrived linear programs that force the simplex method to run in time exponential in the number of decision variables (for all of the common "pivot rules" for choosing the next vertex). This illustrates the first potential pitfall of worst-case analysis: overly pessimistic performance predictions that cannot be taken at face value. The running time of the simplex method is polynomial for all practical purposes, despite the exponential prediction of worst-case analysis.
To add insult to injury, the first worst-case polynomial-time algorithm for linear programming, the ellipsoid method, is not competitive with the simplex method in practice.c

c Interior-point methods, developed five years later, lead to algorithms that both run in worst-case polynomial time and are competitive with the simplex method in practice.