where $e_u$ denotes the elementary unit vector in direction $u$.
Since the edges in $\tilde G$ are a subset of the edges in $G$, its
Laplacian may also be expressed as a (differently weighted)
sum of the same outer products:

$L_{\tilde G} = \sum_{(u,v) \in E} s(u, v)\, w_{u,v}\, (e_u - e_v)(e_u - e_v)^T.$
Suppose the non-zero eigenvalue-and-eigenvector pairs of
$L_G$ are $(\lambda_1, u_1), \ldots, (\lambda_{n-1}, u_{n-1})$. Then we can write

$L_G = \sum_{i=1}^{n-1} \lambda_i\, u_i u_i^T.$
Let $L_G^{+}$ be the Moore-Penrose pseudoinverse of $L_G$, that is,

$L_G^{+} = \sum_{i=1}^{n-1} \frac{1}{\lambda_i}\, u_i u_i^T.$
Then $\Pi = L_G^{+/2} L_G L_G^{+/2}$ is the projection matrix onto
the span of $\{u_i\}$.
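As a sanity check on these definitions, here is a short numerical sketch in Python with NumPy (the four-node weighted graph is an illustrative choice made here, not an example from the article) that builds $L_G$ from the outer products above, forms $L_G^{+}$ and $\Pi$, and confirms that $\Pi$ is the projection onto the span of $\{u_i\}$:

```python
import numpy as np

# Illustrative 4-node weighted graph (chosen here, not taken from the article).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0), (0, 2, 0.5)]
n = 4

# L_G = sum over edges of w_{u,v} (e_u - e_v)(e_u - e_v)^T
L = np.zeros((n, n))
for u, v, w in edges:
    e = np.zeros(n)
    e[u], e[v] = 1.0, -1.0
    L += w * np.outer(e, e)

L_pinv = np.linalg.pinv(L)                    # Moore-Penrose pseudoinverse L_G^+
lam, U = np.linalg.eigh(L_pinv)
L_pinv_half = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T   # L_G^{+/2}

Pi = L_pinv_half @ L @ L_pinv_half            # Pi = L_G^{+/2} L_G L_G^{+/2}

# For a connected graph, span{u_i} is the orthogonal complement of the
# all-ones vector, so Pi should be the projection I - (1/n) 1 1^T.
assert np.allclose(Pi @ Pi, Pi)               # idempotent, hence a projection
assert np.allclose(Pi, np.eye(n) - np.ones((n, n)) / n)
```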
Then

$s(u, v) = \frac{\text{number of times } (u, v) \text{ is sampled}}{q\, p(u, v)},$

and $\mathbb{E}[s(u, v)] = 1$ for all $(u, v) \in E$, whence $\mathbb{E}[L_{\tilde G}] = L_G$. We now write:

$L_G^{+/2} L_{\tilde G} L_G^{+/2} = \frac{1}{q} \sum_{i=1}^{q} Y_i Y_i^T,$
where the $Y_i$ are i.i.d. random vectors sampled from the
distribution

$Y = \sqrt{\frac{w_{u,v}}{p(u, v)}}\; L_G^{+/2}(e_u - e_v) \quad \text{with probability } p(u, v).$
Notice that $\mathbb{E}[Y Y^T] = \Pi$, so the expectation of each term is correct. To analyze the sum, we apply the following concentration theorem for sums of independent rank-one matrices due to Rudelson.30
Theorem 6 (Rudelson). Let $p$ be a probability distribution
over $\Omega \subset \mathbb{R}^d$ such that $\sup_{y \in \Omega} \|y\|_2 \le M$ and $\|\mathbb{E}[y y^T]\| \le 1$. Let
$y_1, \ldots, y_q$ be independent samples drawn from $p$ with replacement. Then,

$\mathbb{E}\left\| \frac{1}{q} \sum_{i=1}^{q} y_i y_i^T - \mathbb{E}[y y^T] \right\| \le \min\!\left( C M \sqrt{\frac{\log q}{q}},\; 1 \right),$

where $C$ is an absolute constant.
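As an illustration (not part of the original argument; the small example graph and the uniform choice of sampling probabilities are assumptions made here, and any strictly positive probabilities give $\mathbb{E}[Y Y^T] = \Pi$), the following sketch draws i.i.d. copies of $Y$ and measures how the spectral norm of $\frac{1}{q}\sum_i Y_i Y_i^T - \Pi$ shrinks as $q$ grows, in line with Theorem 6:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same illustrative 4-node graph as before.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0), (0, 2, 0.5)]
n, m = 4, len(edges)

L = np.zeros((n, n))
for u, v, w in edges:
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    L += w * np.outer(e, e)

lam, U = np.linalg.eigh(L)
inv_sqrt = np.array([1.0 / np.sqrt(l) if l > 1e-9 else 0.0 for l in lam])
L_pinv_half = U @ np.diag(inv_sqrt) @ U.T          # L_G^{+/2}
Pi = L_pinv_half @ L @ L_pinv_half                 # projection onto span{u_i}

# Uniform sampling probabilities (an arbitrary valid choice for this check).
p = np.full(m, 1.0 / m)
Y_edge = []
for u, v, w in edges:
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    Y_edge.append(np.sqrt(w) * (L_pinv_half @ e))  # sqrt(w_{u,v}) L_G^{+/2}(e_u - e_v)

for q in (100, 10_000):
    S = np.zeros((n, n))
    for i in rng.choice(m, size=q, p=p):
        y = Y_edge[i] / np.sqrt(p[i])              # Y = sqrt(w/p) L_G^{+/2}(e_u - e_v)
        S += np.outer(y, y)
    err = np.linalg.norm(S / q - Pi, ord=2)        # spectral norm of the deviation
    print(f"q = {q:6d}   deviation = {err:.3f}")
```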
To optimize the application of Rudelson's theorem, we
choose the $\{p(u, v)\}$ so that all possible values of $Y$ have the
same norm, that is,

$p(u, v) = \gamma\, w_{u,v} \left\| L_G^{+/2}(e_u - e_v) \right\|^2 = \gamma\, w_{u,v}\, R_{\mathrm{eff}}(u, v)$

for some fixed $\gamma$, for all $(u, v) \in E$. An elementary calculation
reveals that the value of $\gamma$ that causes the sum of the probabilities to be one is $\gamma = 1/(n-1)$, since $\sum_{(u,v) \in E} w_{u,v} R_{\mathrm{eff}}(u, v) = n - 1$ for a connected graph. Thus if we sample according to
probabilities

$p(u, v) = \frac{w_{u,v}\, R_{\mathrm{eff}}(u, v)}{n - 1},$

then we can take $M = \sqrt{n - 1}$ in Theorem 6. This tells us
that $q = O(n \log n / \epsilon^2)$ samples are sufficient to obtain a
$(1 + \epsilon)$-spectral approximation with high probability, from
which Theorem 5 follows.
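Putting the pieces together, here is a compact illustrative sketch of the sampling scheme. It uses dense linear algebra throughout, so it is far slower than the nearly-linear-time algorithm behind Theorem 5, which works with approximate effective resistances; the constant in the choice of $q$ below is a placeholder, not a value from the article.

```python
import numpy as np

def sparsify_by_effective_resistance(n, edges, eps, rng=None):
    """Illustrative sketch of the sampling scheme of Section 3.3.

    edges: list of (u, v, w) for a connected graph on vertices 0..n-1.
    Returns {(u, v): new_weight} for the sampled sparsifier.
    """
    rng = rng or np.random.default_rng(0)
    m = len(edges)

    # Build L_G and its pseudoinverse (dense, for clarity only).
    L = np.zeros((n, n))
    for u, v, w in edges:
        e = np.zeros(n); e[u], e[v] = 1.0, -1.0
        L += w * np.outer(e, e)
    L_pinv = np.linalg.pinv(L)

    # Effective resistances: R_eff(u,v) = (e_u - e_v)^T L_G^+ (e_u - e_v).
    r_eff = np.array([L_pinv[u, u] + L_pinv[v, v] - 2.0 * L_pinv[u, v]
                      for u, v, _ in edges])
    w_arr = np.array([w for _, _, w in edges])

    # Sampling probabilities p(u,v) = w_{u,v} R_eff(u,v) / (n - 1); they sum to one.
    p = w_arr * r_eff / (n - 1)
    p = p / p.sum()                                  # guard against round-off

    # q = O(n log n / eps^2) samples; the constant 9 is an arbitrary placeholder.
    q = int(np.ceil(9 * n * np.log(n) / eps ** 2))
    counts = np.bincount(rng.choice(m, size=q, p=p), minlength=m)

    # Each draw of edge (u,v) contributes w_{u,v} / (q p(u,v)) to its weight,
    # so s(u,v) = counts / (q p(u,v)) and E[s(u,v)] = 1.
    new_w = counts * w_arr / (q * p)
    return {(edges[i][0], edges[i][1]): new_w[i] for i in range(m) if counts[i] > 0}
```

A quick check of the output is to compare $x^T L_{\tilde G}\, x$ against $x^T L_G\, x$ for a few random vectors $x$ and confirm that the ratio stays within $[1 - \epsilon, 1 + \epsilon]$.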
3.4. Twice-Ramanujan sparsifiers
In a nutshell, Spielman and Srivastava first reduced the sparsification of $G = (V, E, w)$ to the following algebraic problem:
compute scalars $\{s_{u,v} \ge 0 \mid (u, v) \in E\}$ such that $\tilde E = \{e \mid s_{u,v} > 0\}$
has small cardinality, and

$(1 - \epsilon)\, L_G \preccurlyeq \sum_{(u,v) \in E} s_{u,v}\, w_{u,v}\, (e_u - e_v)(e_u - e_v)^T \preccurlyeq (1 + \epsilon)\, L_G.$
They then applied sampling, based on effective resistances,
to generate the $\{s_{u,v}\}$ and $\tilde E$.
Theorem 7 (Batson-Spielman-Srivastava). For every $d > 1$,
every undirected, weighted $n$-node graph $G = (V, E, w)$ has a
$\left( \frac{\sqrt{d} + 1}{\sqrt{d} - 1},\, d \right)$-spectral sparsifier. In particular, $G$ has a $(1 + 2\epsilon,\, 4/\epsilon^2)$-spectral sparsifier, for every
$0 < \epsilon < 1$.
At the heart of their construction is the following purely
linear algebraic theorem, which may be shown to imply
Theorem 7 by an argument similar to that in Section 3.3.
Theorem 8. Suppose $d > 1$ and $v_1, \ldots, v_m \in \mathbb{R}^n$ satisfy

$\sum_{i=1}^{m} v_i v_i^T = I_n,$

where $I_n$ is the $n \times n$ identity matrix. Then, there
exist scalars $s_i \ge 0$ with $|\{i : s_i \ne 0\}| \le dn$ such that

$I_n \preccurlyeq \sum_{i=1}^{m} s_i\, v_i v_i^T \preccurlyeq \left( \frac{\sqrt{d} + 1}{\sqrt{d} - 1} \right)^2 I_n.$
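To make the connection to graphs concrete, the following illustrative sketch (the example graph and the scalars are chosen here, not taken from the article) forms the vectors $v_e = \sqrt{w_e}\, L_G^{+/2}(e_u - e_v)$, checks that their outer products sum to the identity on the range of $L_G$, and verifies the identity $\sum_e s_e v_e v_e^T = L_G^{+/2} L_{\tilde G} L_G^{+/2}$, which is what turns the conclusion of Theorem 8, applied in the $(n-1)$-dimensional range of $L_G$, into the sparsifier of Theorem 7 with at most $dn$ edges.

```python
import numpy as np

# Illustrative 4-node graph (chosen here, not from the article).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0), (0, 2, 0.5)]
n = 4

L = np.zeros((n, n))
for u, v, w in edges:
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    L += w * np.outer(e, e)

lam, U = np.linalg.eigh(L)
inv_sqrt = np.array([1.0 / np.sqrt(l) if l > 1e-9 else 0.0 for l in lam])
L_pinv_half = U @ np.diag(inv_sqrt) @ U.T

# v_e = sqrt(w_e) L_G^{+/2}(e_u - e_v): the vectors fed to Theorem 8.
V = []
for u, v, w in edges:
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    V.append(np.sqrt(w) * (L_pinv_half @ e))

# Their outer products sum to the projection onto range(L_G), i.e. the identity
# of the (n-1)-dimensional space in which Theorem 8 is applied.
assert np.allclose(sum(np.outer(x, x) for x in V),
                   np.eye(n) - np.ones((n, n)) / n)

# Correspondence: for ANY scalars s_e,
#   L_G^{+/2} ( sum_e s_e w_e (e_u - e_v)(e_u - e_v)^T ) L_G^{+/2} = sum_e s_e v_e v_e^T,
# so the two-sided bound of Theorem 8 translates into a two-sided bound on
# x^T L~ x relative to x^T L_G x, with at most dn edges surviving.
s = np.array([1.5, 0.0, 2.0, 1.0, 0.0])          # arbitrary illustrative scalars
L_tilde = np.zeros((n, n))
for s_e, (u, v, w) in zip(s, edges):
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    L_tilde += s_e * w * np.outer(e, e)
lhs = L_pinv_half @ L_tilde @ L_pinv_half
rhs = sum(s_e * np.outer(x, x) for s_e, x in zip(s, V))
assert np.allclose(lhs, rhs)
print("reduction identities verified")
```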