In spite of its strong asymptotic behavior, the large
exponent on the log factor makes this algorithm slow in
practice. The Spielman–Srivastava sparsification algorithm offered no improvement here: effective resistances do give the ideal probabilities with which to sample edges for a sparsifier, but computing them requires solving yet more Laplacian linear systems.
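To make the sampling scheme concrete, here is a minimal Python sketch of sparsification by effective resistances in the Spielman–Srivastava style. A dense pseudo-inverse stands in for the fast solvers discussed below, so it is only practical on small graphs; the oversampling constant 9 and the function name are illustrative choices, not values from the original analysis.

```python
import numpy as np

def sparsify_by_effective_resistance(n, edges, weights, eps, seed=0):
    """Sketch: sample edges with probability proportional to w_e * R_eff(e).

    edges: list of (u, v) pairs; weights: edge weights (conductances).
    The dense O(n^3) pseudo-inverse stands in for a fast Laplacian solver.
    """
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lplus = np.linalg.pinv(L)
    # Effective resistance of (u, v) is (e_u - e_v)^T L^+ (e_u - e_v).
    R = np.array([Lplus[u, u] + Lplus[v, v] - 2 * Lplus[u, v] for u, v in edges])
    p = weights * R / (weights * R).sum()         # sampling distribution
    q = int(np.ceil(9 * n * np.log(n) / eps**2))  # illustrative sample count
    counts = rng.multinomial(q, p)                # q draws with replacement
    # A sampled edge keeps weight w_e * (copies drawn) / (q * p_e).
    return [(edges[i], weights[i] * c / (q * p[i]))
            for i, c in enumerate(counts) if c > 0]
```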
Koutis et al. [24] removed this circular dependency by using low-stretch trees to compute less aggressive sampling probabilities, which are strictly greater than those suggested by effective resistances. These can be computed more quickly, and together with some other elegant ideas and fast data structures, they suffice to yield a Laplacian linear system solver that runs in time O(m log n (log log n)² log(1/ε)).
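For intuition about the tree-based probabilities, the sketch below scores each edge by its stretch with respect to a rooted spanning tree: the edge weight times the resistance of the tree path between its endpoints. By Rayleigh monotonicity the tree-path resistance is at least the true effective resistance, so these scores dominate the leverage scores, which is exactly the oversampling described above. The parent-array representation, the assumption that vertex 0 is the root, and the parents-before-children ordering are conveniences of this sketch; the real algorithm also needs a low-stretch tree so that the total score stays small.

```python
import numpy as np

def stretch_scores(n, parent, up_res, edges, weights):
    """Score each edge by w_e * (resistance of its tree path).

    parent[v]: parent of v in a rooted spanning tree (vertex 0 is the root);
    up_res[v]: resistance of the tree edge from v to parent[v];
    vertices are assumed ordered so each parent precedes its children.
    """
    dist = np.zeros(n)                 # resistance from v up to the root
    depth = np.zeros(n, dtype=int)
    for v in range(1, n):
        dist[v] = dist[parent[v]] + up_res[v]
        depth[v] = depth[parent[v]] + 1

    def lca(u, v):                     # walk both endpoints up to their meeting point
        while depth[u] > depth[v]: u = parent[u]
        while depth[v] > depth[u]: v = parent[v]
        while u != v: u, v = parent[u], parent[v]
        return u

    scores = []
    for (u, v), w in zip(edges, weights):
        path_res = dist[u] + dist[v] - 2 * dist[lca(u, v)]
        scores.append(w * path_res)    # >= w * R_eff(u, v) by Rayleigh monotonicity
    return np.array(scores)
```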
4.2. Fast sparsification algorithms
The algorithms from Section 3 certify the existence of good sparsifiers but run quite slowly. Those techniques have since been significantly refined, and there are now three major ways to produce sparsifiers quickly.
First, the bottleneck in sampling via effective resistances
is approximating the effective resistances themselves.
The Laplacian system solver of Koutis, Miller, and Peng described above can be used to calculate those resistances, which are then used to sample the graph. The best analysis is given by Koutis et al. [23], who give an O(m log n log log n log(1/ε))-time algorithm for generating ((1 + ε), O(log³ n/ε²))-spectral sparsifiers.
Second, the decomposition-and-sampling algorithm of Spielman and Teng [34] can be sped up by improving the local clusterings used to create a decomposition. In the local clustering problem, one is given a vertex and a target cluster size, and one tries to find a low-conductance cluster near that vertex of at most the target size, in time proportional to the target size. Faster algorithms for local clustering have been developed by Andersen et al. [5] and by Andersen and Peres [6].
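As a concrete reference point, the sketch below implements a push-style approximate personalized PageRank from a seed vertex, followed by a sweep cut over the resulting vector, which is the pattern at the heart of the Andersen et al. approach. The teleport probability alpha, the tolerance tol, and the dictionary-based graph representation are illustrative choices rather than the paper's exact formulation.

```python
from collections import defaultdict, deque

def approximate_ppr(adj, seed, alpha=0.15, tol=1e-4):
    """Push-style approximate personalized PageRank from a seed vertex.

    adj: dict mapping each vertex to a list of neighbors (undirected).
    The total work is bounded in terms of 1/(alpha * tol), not graph size.
    """
    p = defaultdict(float)               # settled PageRank mass
    r = defaultdict(float, {seed: 1.0})  # residual mass still to push
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        if r[u] < tol * len(adj[u]):
            continue
        # Settle an alpha fraction; split the rest between u and its neighbors.
        p[u] += alpha * r[u]
        share = (1 - alpha) * r[u] / (2 * len(adj[u]))
        r[u] = (1 - alpha) * r[u] / 2
        queue.append(u)                  # re-examine u later
        for v in adj[u]:
            r[v] += share
            queue.append(v)
    return p

def sweep_cut(adj, p):
    """Return the lowest-conductance prefix of vertices ordered by p(u)/deg(u)."""
    order = sorted(p, key=lambda u: p[u] / len(adj[u]), reverse=True)
    vol_all = sum(len(adj[u]) for u in adj)
    inside, vol, cut = set(), 0, 0
    best, best_phi = set(), float("inf")
    for u in order:
        inside.add(u)
        vol += len(adj[u])
        cut += sum(-1 if v in inside else 1 for v in adj[u])
        denom = min(vol, vol_all - vol)
        if denom > 0 and cut / denom < best_phi:
            best_phi, best = cut / denom, set(inside)
    return best, best_phi
```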
Third, unions of random spanning trees of a graph G can make good cut sparsifiers: the union of two random spanning trees is (log n)-cut similar to G [20], while the union of O(log² n/ε²) random spanning trees, reweighted proportionally to effective resistance, is (1 + ε)-cut similar to G [18]. Although it remains to be seen whether the union of a small number of random spanning trees can produce a spectral sparsifier, Kapralov and Panigrahy [21] showed that one can build a (1 + ε)-spectral sparsifier of a graph from the union of spanners of O(log⁴ n/ε⁴) random subgraphs of G.
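To experiment with such unions, random spanning trees can be drawn exactly with Wilson's loop-erased random-walk algorithm; the sketch below samples unweighted trees and takes their edge union, omitting the effective-resistance reweighting used in the (1 + ε) result. The function names are ours.

```python
import random

def wilson_tree(adj, rng):
    """Sample a uniformly random spanning tree via Wilson's algorithm.

    adj: dict mapping each vertex to a list of neighbors (connected graph).
    Returns the tree as a set of frozenset({u, v}) edges.
    """
    vertices = list(adj)
    root = vertices[0]
    in_tree = {root}
    nxt = {}                       # successor pointers; overwriting erases loops
    for start in vertices:
        u = start
        while u not in in_tree:    # random walk until the current tree is hit
            nxt[u] = rng.choice(adj[u])
            u = nxt[u]
        u = start
        while u not in in_tree:    # retrace the loop-erased path and attach it
            in_tree.add(u)
            u = nxt[u]
    return {frozenset((u, nxt[u])) for u in vertices if u != root}

def union_of_random_trees(adj, k, seed=0):
    """Edge union of k independent random spanning trees."""
    edges = set()
    for i in range(k):
        edges |= wilson_tree(adj, random.Random(seed + i))
    return edges
```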
4.3. Network algorithms
Fast algorithms for spectral sparsification and Laplacian
systems provide a set of powerful tools for network analysis.
In particular, they lead to nearly-linear time algorithms for
the following basic graph problems:
Approximate Fiedler Vectors [33]: Input: G = (V, E, w) and ε > 0. Output: an ε-approximation of the second-smallest eigenvalue λ₂(L_G) of L_G (also known as the Fiedler value), together with a unit vector v orthogonal to the all-1s vector such that vᵀ L_G v ≤ (1 + ε)λ₂(L_G).
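As a baseline for comparison on modest graphs (not the nearly-linear-time method of [33]), the Fiedler pair can be computed with an off-the-shelf sparse eigensolver; the helper below and its name are ours.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def fiedler_pair(n, edges, weights):
    """Baseline: lambda_2(L_G) and its eigenvector via a generic eigensolver."""
    u = np.array([e[0] for e in edges]); v = np.array([e[1] for e in edges])
    w = np.asarray(weights, dtype=float)
    A = sp.csr_matrix((np.r_[w, w], (np.r_[u, v], np.r_[v, u])), shape=(n, n))
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # weighted Laplacian
    # Two smallest (algebraic) eigenpairs; the first is (0, constant vector).
    lams, vecs = eigsh(L, k=2, which="SA")
    return lams[1], vecs[:, 1]
```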
Electrical Flows [13, 32, 33]: Input: G = (V, E, w), where the weights are resistances, s, t ∈ V, and ε > 0. Output: an ε-approximation of the electrical flow on every edge when 1 unit of flow is injected into s and extracted from t.
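Given a Laplacian solver, the electrical flow itself is a small amount of linear algebra: solve for vertex potentials, then apply Ohm's law edge by edge. In the sketch below a dense pseudo-inverse stands in for the fast solvers, and for simplicity the input weights are taken as conductances (reciprocals of the resistances).

```python
import numpy as np

def electrical_flow(n, edges, weights, s, t):
    """Unit s-t electrical flow; weights are conductances (1/resistance)."""
    w = np.asarray(weights, dtype=float)
    L = np.zeros((n, n))
    for (u, v), wi in zip(edges, w):
        L[u, u] += wi; L[v, v] += wi
        L[u, v] -= wi; L[v, u] -= wi
    b = np.zeros(n)
    b[s], b[t] = 1.0, -1.0        # inject one unit at s, extract it at t
    x = np.linalg.pinv(L) @ b     # vertex potentials (dense solver stand-in)
    # Ohm's law: the flow on edge (u, v) is w_uv times the potential drop.
    return {(u, v): wi * (x[u] - x[v]) for (u, v), wi in zip(edges, w)}
```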
Effective Resistance Approximation [32]: Input: G = (V, E, w), where the weights are resistances, and ε > 0. Output: a data structure for computing an ε-approximation of the effective resistance between any pair of vertices in G. Data structure: O(n log n/ε²) space, O(log n/ε²) query time, and O(m log² n (log log n)²/ε²) preprocessing time.
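A rough sketch of such a data structure, following the Johnson–Lindenstrauss approach of Spielman and Srivastava: preprocessing builds a k × n table Z ≈ Q W^{1/2} B L⁺ with k = O(log n/ε²) random-sign rows, after which each query is a squared distance between two length-k columns. The constant 24, the dense pseudo-inverse (in place of k fast Laplacian solves), and the use of conductances as input weights are conveniences of this sketch.

```python
import numpy as np

def resistance_oracle(n, edges, weights, eps, seed=0):
    """JL-style effective-resistance oracle; weights are conductances."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    m = len(edges)
    B = np.zeros((m, n))                          # signed edge-vertex incidence
    for i, (u, v) in enumerate(edges):
        B[i, u], B[i, v] = 1.0, -1.0
    L = B.T @ (w[:, None] * B)                    # weighted Laplacian
    k = int(np.ceil(24 * np.log(n) / eps**2))     # illustrative constant
    Q = rng.choice([-1.0, 1.0], size=(k, m)) / np.sqrt(k)
    Z = (Q * np.sqrt(w)) @ B @ np.linalg.pinv(L)  # dense stand-in for k solves

    def query(u, v):
        """Approximate u-v effective resistance in O(k) time."""
        return float(np.sum((Z[:, u] - Z[:, v]) ** 2))
    return query
```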
Learning from labeled data on a graph [35]: Input: a strongly connected (aperiodic) directed graph G = (V, E), a labeling function y that assigns a label from the label set Y = {1, −1} to each vertex of a subset S ⊂ V and 0 to the vertices in V − S, and a parameter µ. Output: the function f : V → R that minimizes

W(f) + µ|| f − y ||²,

where

W(f) = (1/2) Σ_(u,v)∈E π(u) p(u, v) ( f(u)/√π(u) − f(v)/√π(v) )²,

and π is the stationary distribution of the random walk on the graph with transition probability function p.
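A dense sketch of this optimization, using the standard identity W(f) = fᵀ(I − Θ)f with Θ = (Π^{1/2} P Π^{−1/2} + Π^{−1/2} Pᵀ Π^{1/2})/2, so that the minimizer solves the linear system ((1 + µ)I − Θ)f = µy. The power-iteration cap, tolerance, and dense representation are conveniences of the sketch, not part of the cited algorithm.

```python
import numpy as np

def label_vertices(P, y, mu):
    """Sketch: minimize W(f) + mu * ||f - y||^2 on a directed graph.

    P: n x n row-stochastic transition matrix (strongly connected, aperiodic);
    y: labels in {1, -1} on S and 0 elsewhere; mu: tradeoff parameter.
    Uses W(f) = f^T (I - Theta) f, so the minimizer solves
    ((1 + mu) I - Theta) f = mu * y.
    """
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(10000):                # power iteration for stationary pi
        new = pi @ P
        if np.abs(new - pi).sum() < 1e-12:
            break
        pi = new
    d = np.sqrt(pi / pi.sum())
    M = d[:, None] * P / d[None, :]       # Pi^{1/2} P Pi^{-1/2}
    Theta = (M + M.T) / 2
    return np.linalg.solve((1 + mu) * np.eye(n) - Theta,
                           mu * np.asarray(y, dtype=float))
```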
Cover Time of Random Walk [16]: Input: G = (V, E). Output: a constant-factor approximation of the cover time of the random walk on G.
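For a feel for the quantity being approximated, the naive Monte Carlo estimate below runs walks until every vertex has been visited and maximizes the empirical mean over starting vertices. Unlike the algorithm of [16], its running time grows with the cover time itself, which is the point of the faster method.

```python
import random

def empirical_cover_time(adj, trials=50, seed=0):
    """Naive estimate: max over starts of the mean time to visit every vertex."""
    rng = random.Random(seed)
    vertices = list(adj)
    best = 0.0
    for s in vertices:
        total = 0
        for _ in range(trials):
            u, seen, steps = s, {s}, 0
            while len(seen) < len(vertices):   # walk until all vertices are seen
                u = rng.choice(adj[u])
                seen.add(u)
                steps += 1
            total += steps
        best = max(best, total / trials)
    return best
```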
The algorithm for Electrical Flows led to a breakthrough on the following fundamental combinatorial optimization problem, for which the best previously known algorithms ran in time Õ(m n^{1/2}/ε), Õ(m n^{2/3} log(1/ε)), and Õ(m^{3/2} log(1/ε)).
Maximum Flows and Minimum Cuts [13]: Input: G = (V, E, w), where the weights w are capacities, s, t ∈ V, and ε > 0. Output: an ε-approximation of the s-t maximum flow and minimum cut. Algorithm: Õ(m n^{1/3} · poly(1/ε)) time.
Spectral graph sparsification has also played a role in understanding other network phenomena. For example, Chierichetti et al. [12] discovered a connection between rumor spreading in a network and the spectral sparsification procedure of Spielman and Teng [34], and applied this connection to bound the speed of rumor spreading in social networks.
5. Open question
The most important open question about spectral sparsification is whether one can design a nearly-linear time algorithm that computes (σ, d)-spectral sparsifiers for any constants σ and d. The algorithms based on Theorem 7 run in polynomial time, but are slow. All of the nearly-linear time algorithms of which we are aware produce sparsifiers with d logarithmic in n.
6. Conclusion
Spectral sparsification has proved to be a remarkably useful tool in algorithm design, linear algebra, combinatorial