but to a sequence of networks via successive readjustment
of edges. In these algorithms, some of the resulting systems are significantly harder than “typical” instances, capturing—in some sense—the hardness of the optimization
problems themselves.
Current SDD solvers are empirically fast for some engineering applications, but they cannot efficiently solve most SDD linear systems. Beyond these practical limitations, the fact that existing SDD solvers lack guarantees on arbitrary instances also limits their impact on the theory of algorithms.
These factors underline the need for “mathematically
certified” solvers that are provably fast for arbitrary SDD
linear systems, independently of their origin, be it—for
instance—social or protein networks. This paper describes
our state-of-the-art solver for SDD linear systems.
1.1. A glimpse at the solver
The class of SDD linear systems arises in particular in the
study of electrical networks, which provide us with a concept crucial to understanding how our algorithm works:
the effective resistance between two points in a network.
In addition, the connection to networks lets us adopt an alternative view of our linear system, as a weighted graph. We give the details in Section 2.
We then move to the algebraic component of our solver.
The approximate solution of linear systems via iterative
methods is a topic not commonly encountered in computer science but thoroughly studied in the context of
numerical linear algebra and scientific computation.
Section 3 explains iterative methods via an analogy with
the computation of the inverse of a real number on a calculator with a broken division key, where only addition and
multiplication operations are available. This leads us to
preconditioning, a term first used by Alan Turing. In the
graph theoretic context, preconditioning provides a measure of similarity between graphs. This measure is used to
formalize design conditions sufficient for the construction
of a fast iterative method.
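To make the broken-division-key analogy concrete, here is a minimal sketch (our illustration, not necessarily the exact iteration used in Section 3) that approximates 1/a using only additions and multiplications:

```python
def approximate_inverse(a, iterations=60):
    """Approximate 1/a on a 'calculator with a broken division key'.

    The update x <- 1 + (1 - a) * x uses only addition and
    multiplication; its fixed point is 1/a and it converges whenever
    |1 - a| < 1, i.e. for 0 < a < 2.  (Illustrative sketch only; the
    iteration in Section 3 may differ in its details.)
    """
    x = 1.0
    for _ in range(iterations):
        x = 1.0 + (1.0 - a) * x
    return x


print(approximate_inverse(0.7))  # prints roughly 1.42857 = 1/0.7
```

Roughly speaking, iterative linear-system solvers apply the same idea with a matrix in place of the number a, and preconditioning plays the role of first rescaling a so that the iteration converges in few steps.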
What distinguishes our solver from classical iterative
methods is its combinatorial component and specifically
the use of graph theoretic algorithms. It was understood
before our work that the key to a fast solver is finding a
subgraph (the preconditioner), which is similar to a given graph but has substantially fewer edges [20]. Our contribution is a conceptually simple procedure for constructing good preconditioners; this is the topic of Section 4.
The base of our preconditioner is a spanning tree of the input graph, in other words, a minimally connected spanning subgraph. Our algorithm needs a special type of spanning tree called a low-stretch tree (LSST), which we describe in Section 4.1. Such a tree can be found using very sophisticated but fast algorithms.
To get a better preconditioner, we perform random sampling: each edge of the input graph is put into the preconditioner with a specified probability. It was known that the effective resistance between the two endpoints of each edge provides a good sampling probability for it [18]. Unfortunately, computing effective resistances seems to require solving an SDD linear system, which is the problem we are trying to solve in the first place.
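Concretely, the sampling rule from the work cited above [18] keeps each edge $e$ with probability proportional to the product of its weight and its effective resistance,
$$p_e \;\propto\; w_e\,R_e,$$
where $w_e$ denotes the weight (conductance) of $e$ and $R_e$ the effective resistance between its endpoints; the symbols $w_e$ and $R_e$ are introduced here only for this summary.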
Our main contributions are two ideas that allow us to
circumvent this “chicken and egg” problem. The first idea
is to use an upper estimate on the effective resistance for
each edge. The second idea is to compute these estimates
on a modified graph, in which the estimates are sufficiently
good. The modification is in fact quite simple: we find an LSST of the graph and increase the weight of all its edges. To compute the upper estimate for the effective resistance of an edge in the modified graph, we use only the edges of the LSST. A key side effect of this modification is that the number of non-tree edges in the preconditioner is much smaller than the number of edges in the original graph. In this way we meet the known design conditions and obtain a faster solver.
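As a rough illustration of this construction, here is a sketch in Python. The function names, the tree-weight scaling factor, the oversampling constant, and the reweighting of kept edges by 1/p are our own illustrative choices rather than the paper's exact parameters; the two essential points are that only tree edges are used to compute the resistance estimate, and that the tree-path resistance upper-bounds the true effective resistance (adding non-tree edges can only lower it).

```python
import random
from collections import deque


def tree_path_resistance(adj, u, v):
    """Resistance (sum of 1/weight) along the unique u-v path of a tree,
    found by breadth-first search over the tree's adjacency lists."""
    parent = {u: (None, 0.0)}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            break
        for y, w in adj[x]:
            if y not in parent:
                parent[y] = (x, 1.0 / w)
                queue.append(y)
    res, x = 0.0, v
    while parent[x][0] is not None:
        res += parent[x][1]
        x = parent[x][0]
    return res


def sample_preconditioner(edges, tree_edges, scale=4.0, oversample=1.0):
    """Sketch of the preconditioner construction described above.

    edges:       list of (u, v, weight) for the whole graph
    tree_edges:  subset of edges forming a spanning tree (ideally an LSST)
    scale, oversample: illustrative parameters (tree-weight boost,
                       sampling constant), not the paper's exact values
    """
    # 1. The scaled-up tree is always part of the preconditioner.
    precond = [(u, v, w * scale) for u, v, w in tree_edges]
    adj = {}
    for u, v, w in tree_edges:
        adj.setdefault(u, []).append((v, w * scale))
        adj.setdefault(v, []).append((u, w * scale))

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    for u, v, w in edges:
        if frozenset((u, v)) in tree_set:
            continue
        # 2. Upper-bound the edge's effective resistance in the modified
        #    graph by the resistance of the tree path joining u and v.
        r_bound = tree_path_resistance(adj, u, v)
        # 3. Keep the edge with probability ~ weight * resistance bound,
        #    reweighting kept edges by 1/p (a standard unbiased choice;
        #    the paper's exact rule may differ).
        p = min(1.0, oversample * w * r_bound)
        if random.random() < p:
            precond.append((u, v, w / p))
    return precond
```

Because the tree weights were increased, most non-tree edges see a small resistance bound and are kept with small probability, which is how the number of non-tree edges in the preconditioner drops well below the number of edges in the original graph.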
2. NETWORKS, SYSTEMS, SOLVERS
Let us consider the problem of finding a voltage setting given
the desired net current flow at each of the vertices. A simple
three-node example of an electric network is depicted in
Figure 3. The inverse of the resistance of a wire, also known as its conductance, is a direct analogue of the edge weight in a graph; for this reason we label each wire by its conductance rather than its resistance. Setting the
voltages of the vertices to some values leads to an electrical flow
through the edges. There are two fundamental principles
governing this voltage setting. (a) Kirchhoff’s law, which
states that with the exception of the vertices where current
is injected/extracted, the net flow at each vertex is zero.
(b) Ohm’s law, which states that the current on an edge
equals the voltage difference between its endpoints times
the conductance of the wire.
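In symbols (with $x_u$ the voltage at vertex $u$, $w_{uv}$ the conductance of the wire between $u$ and $v$, and $b_u$ the current injected at $u$; this notation is introduced here only for convenience), the two laws read
$$i_{uv} = w_{uv}\,(x_u - x_v) \quad \text{(Ohm)}, \qquad \sum_{v:\,uv \in E} i_{uv} = b_u \ \text{ for every vertex } u \quad \text{(Kirchhoff)},$$
with $b_u = 0$ at every vertex where no current is injected or extracted.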
As an example, consider the network given in Figure 3, where we set the voltages at the three vertices to x1, x2, and x3, respectively. By Ohm's law, the current flows
along edges 1 → 2 and 1 → 3 are 1 · (x1 − x2) and 2 · (x1 − x3),
respectively. Therefore the amount of current we will need to
inject into vertex 1 to maintain these voltages is:
1 · (x1 − x2) + 2 · (x1 − x3) = 3x1 − x2 − 2x3
Identities for the required current entering/leaving vertices
2 and 3 can also be derived similarly. Therefore, if we want
one unit of current to enter at vertex 1 and leave at vertex
3, the voltages will need to satisfy the following system of
linear equations:
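(Writing c for the conductance of the edge between vertices 2 and 3, whose value is shown only in Figure 3 and not stated in the text above, and taking c = 0 if those two vertices are not directly connected:)

3x1 − x2 − 2x3 = 1
−x1 + (1 + c)x2 − cx3 = 0
−2x1 − cx2 + (2 + c)x3 = −1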
Using more compact notation, linear systems assume the
form Ax = b, where x is an n × 1 column vector of unknowns,
Figure 3. A simple network and linear system.
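A minimal sketch of setting up and solving such a system in this notation follows; the conductance 3.0 used below for the edge between vertices 2 and 3 is an illustrative placeholder, since its actual value appears only in Figure 3.

```python
import numpy as np


def coefficient_matrix(n, edges):
    """Build the coefficient matrix of the voltage equations: each edge
    (u, v) with conductance w adds w to the diagonal entries u and v and
    subtracts w from the off-diagonal entries (u, v) and (v, u)."""
    A = np.zeros((n, n))
    for u, v, w in edges:
        A[u, u] += w
        A[v, v] += w
        A[u, v] -= w
        A[v, u] -= w
    return A


# Conductances of the example network (vertices 1, 2, 3 become indices
# 0, 1, 2); 3.0 for the edge between vertices 2 and 3 is a placeholder.
edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 3.0)]
A = coefficient_matrix(3, edges)
b = np.array([1.0, 0.0, -1.0])  # one unit of current in at vertex 1, out at vertex 3

# A is singular (voltages are determined only up to an additive constant),
# so we take a least-squares solution instead of inverting A directly.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```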