research highlights

DOI: 10.1145/3306210

Technical Perspective
Isolating a Matching When Your Coins Go Missing

By Nisheeth K. Vishnoi

To view the accompanying paper, visit doi.acm.org/10.1145/3306208
MATCHINGS IN BIPARTITE graphs play a
foundational role in a variety of mathematical, scientific, and engineering
disciplines—from Frobenius’ work on
the determinants of matrices, Gale and
Shapley’s influential paper on stable
marriages and college admissions,
Tolstoi and Kantorovich’s work on
the optimal transportation problem,
to the online world where advertisements are being matched to users billions of times a day. As a consequence,
algorithms for finding matchings in
bipartite graphs also have a century-old history and the pursuit of efficient
algorithms for this problem has led to
major achievements in computer science and optimization.
In the 1980s, with the growing availability of parallelizable computer architectures, the study of whether one can
parallelize algorithms for fundamental
problems gained significant momentum. An efficient parallel algorithm would distribute the work across several processors in a manner that keeps the longest chain of dependent tasks small, ideally logarithmic in the size of the problem. Several basic problems, such as multiplying
matrices, solving a system of equations
and computing shortest paths in graphs
already had such parallel algorithms.
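The idea of small parallel depth can be made concrete with a minimal sketch (my own illustration; the function name and sizes are arbitrary): summing n numbers by combining them pairwise in rounds does the same total work as a sequential loop, but the longest chain of dependent additions, which is what limits a parallel machine, is only logarithmic in n.

```python
import math

def tree_sum(values):
    """Sum a list by pairwise combination in rounds.

    Each round halves the number of values; the additions within a
    round are independent and could run on separate processors, so the
    number of rounds (the 'depth') bounds the parallel running time.
    """
    rounds = 0
    while len(values) > 1:
        # All pairs in this round can be added simultaneously.
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, depth = tree_sum(list(range(1024)))
print(total, depth, math.ceil(math.log2(1024)))  # depth == log2(n) == 10
```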
For the bipartite matching problem, however, it turned out that all algorithms developed until then were inherently sequential in nature and, as such, were not amenable to parallelization. In 1985, Karp, Upfal, and Wigderson^3 presented an efficient parallel algorithm for the problem. However, there was a caveat: their algorithm was randomized, that is, it needed access to independent coin tosses. This result was soon followed by a more efficient algorithm by Mulmuley, Vazirani, and Vazirani in 1987,^4 who, using an old algebraic characterization of matchings due to Tutte, reduced the problem of computing matchings in graphs to computing determinants of matrices; the latter problem is known to have an efficient parallel algorithm. However, the MVV algorithm, while very different from that of Karp et al., also made crucial use of randomness in its reduction to computing determinants.
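The flavor of this reduction can be conveyed by a small self-contained sketch (my own illustration, not the authors' code; the helper names and the toy graph are made up). Each edge weight is placed in the exponent of 2 in the corresponding matrix entry; every perfect matching then contributes plus or minus 2 raised to its total weight to the determinant, so if the minimum-weight perfect matching is unique, its term cannot be cancelled and the determinant is nonzero.

```python
from itertools import permutations

def det(M):
    # Exact integer determinant via the Leibniz formula; n! terms, so
    # this is only for the tiny demonstration sizes used here.
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def mvv_matrix(n, weights):
    # weights maps an edge (i, j) of the bipartite graph to its weight;
    # non-edges are absent.  Entry 2**w(i, j) for an edge, 0 otherwise:
    # the bipartite Edmonds matrix with the indeterminate set to 2.
    return [[2 ** weights[(i, j)] if (i, j) in weights else 0
             for j in range(n)] for i in range(n)]

# A 3x3 example: if the weights isolate a unique minimum-weight perfect
# matching of weight W, then det != 0 and the largest power of 2
# dividing det is exactly 2**W (here W = 5).
weights = {(0, 0): 2, (0, 1): 5, (1, 1): 1, (1, 2): 4, (2, 0): 3, (2, 2): 2}
d = det(mvv_matrix(3, weights))
print(d, d & -d)  # d & -d extracts the lowest set bit, i.e., 2**W
```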
This was not the first instance of a problem in which randomness seemed to help; checking whether a number is prime or not was already a notorious example. In 1977, Solovay and Strassen^5 discovered a randomized algorithm for primality testing, but no efficient algorithm that did not use randomness (called a deterministic algorithm) was discovered until 25 years later.
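For the curious reader, here is a compact sketch of the Solovay-Strassen test (my own rendering, not code from any of the cited papers). It exploits the fact that for an odd prime n, the Jacobi symbol (a/n) is congruent to a^((n-1)/2) modulo n for every a, while for an odd composite n at least half of the candidate values of a violate this, so each random round catches a composite with probability at least 1/2.

```python
import random

def jacobi(a, n):
    # Jacobi symbol (a / n) for odd n > 0, via quadratic reciprocity.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    # Each round catches a composite n with probability >= 1/2, so the
    # error probability after `rounds` rounds is at most 2**-rounds.
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n)
        if j == 0 or j % n != pow(a, (n - 1) // 2, n):
            return False
    return True

print(solovay_strassen(561))        # False: a Carmichael number is caught
print(solovay_strassen(2**61 - 1))  # True: a Mersenne prime
```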
In 1982, Valiant^6 showed that a natural routing problem on a network had an efficient randomized algorithm, yet every deterministic algorithm for the problem was necessarily inefficient. Research on the
power of randomness culminated in a surprising result by Kabanets and Impagliazzo in 2003:^2 removing randomness from certain efficient algorithms would show the nonexistence of efficient algorithms for other problems (that is, circuit lower bounds), a question that is a whisker away from the P versus NP question.
Thus, understanding the power
of randomness in computation very
quickly evolved from being a curiosity
to being of profound interest in theoretical computer science.
Whether there exists a deterministic algorithm for primality testing
or a deterministic parallel algorithm
for bipartite matching remained two
outstanding questions at the frontiers
of our understanding of the role of
randomness in computation. The former was solved in 2002 in a remarkable paper by Agrawal, Kayal and Saxena.^1 And the latter has been (nearly) resolved recently in the following paper by Fenner, Gurjar and Thierauf.
The authors show that the randomized parallel algorithm of MVV can be converted into a deterministic parallel algorithm. At the heart of the MVV approach was an extremely powerful use
of randomization: the Isolation Lemma. This lemma asserts there is a randomized algorithm to assign weights to the edges of a bipartite graph such that the minimum-weight perfect matching in the graph is unique: just assign to each edge a weight independently and uniformly at random from a set of integers whose size is twice the number of edges in the graph. Amazingly, this randomized algorithm does not get to see the graph.
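A minimal experiment conveys what the lemma promises (my own sketch; the function names and the toy graph are not from the paper): draw each edge weight uniformly from {1, ..., 2m}, where m is the number of edges, and check how often the minimum-weight perfect matching turns out to be unique. The lemma guarantees probability at least 1/2, whatever the graph.

```python
import random
from itertools import permutations

def perfect_matchings(n, edges):
    # Enumerate all perfect matchings of a bipartite graph whose two
    # sides are both {0, ..., n-1}; edges is a set of pairs (i, j).
    for perm in permutations(range(n)):
        matching = [(i, perm[i]) for i in range(n)]
        if all(e in edges for e in matching):
            yield matching

def isolation_frequency(n, edges, trials=2000):
    # Draw each edge weight uniformly from {1, ..., 2m} and record how
    # often the minimum-weight perfect matching is unique.  The
    # Isolation Lemma guarantees this happens with probability >= 1/2.
    m = len(edges)
    isolated = 0
    for _ in range(trials):
        w = {e: random.randint(1, 2 * m) for e in edges}
        totals = sorted(sum(w[e] for e in pm)
                        for pm in perfect_matchings(n, edges))
        if len(totals) == 1 or totals[0] < totals[1]:
            isolated += 1
    return isolated / trials

# K_{3,3} has 6 perfect matchings; random weights isolate a unique
# minimum-weight one far more often than the 1/2 the lemma promises.
edges = {(i, j) for i in range(3) for j in range(3)}
print(isolation_frequency(3, edges))
```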
Derandomizing the Isolation Lemma is tantamount to asking the following question: could we also find such a weight assignment when we can, in addition, no longer toss coins?

The main result in this paper is the construction of a list of weight assignments via a quasi-polynomial, parallel, and deterministic algorithm (which also does not look at the graph) that has the property that for any bipartite graph of a given size, at least one of the weight assignments isolates the minimum-weight perfect matching in it. The end result is a gem that comes very close to solving an important open problem, makes an elegant connection between graph theory and derandomization, has been used to make progress on a few other important questions and, as a bonus, the proof is from "The Book."
References
1. Agrawal, M., Kayal, N. and Saxena, N. PRIMES is in P. Ann. Math. 160, 2 (2004), 781–793.
2. Kabanets, V. and Impagliazzo, R. Derandomizing polynomial identity tests means proving circuit lower bounds. In Proceedings of the 35th Annual ACM Symp. Theory of Computing (June 9–11, 2003, San Diego, CA, USA), 355–364.
3. Karp, R.M., Upfal, E. and Wigderson, A. Constructing a perfect matching is in random NC. In Proceedings of the 17th Annual ACM Symp. Theory of Computing (May 6–8, 1985, Providence, RI, USA), 22–32.
4. Mulmuley, K., Vazirani, U.V. and Vazirani, V.V. Matching is as easy as matrix inversion. Combinatorica 7, 1 (1987), 105–113.
5. Solovay, R. and Strassen, V. A fast Monte-Carlo test for primality. SIAM J. Comput. 6, 1 (1977), 84–85.
6. Valiant, L.G. A scheme for fast parallel communication. SIAM J. Comput. 11, 2 (1982), 350–361.
Nisheeth K. Vishnoi is a professor of computer science at Yale University, New Haven, CT, USA.
Copyright held by author.