Figure 2. Randomly placed agents evolve to form shapes of dimension 2, then 1, and finally 0.
along a path. Any two mobile agents are joined in Gt whenever
their distance is less than 1.
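As a small illustration of this adjacency rule, here is a Python sketch (with made-up one-dimensional positions) that builds the edge set of Gt for a single time step:

# Sketch: build the communication graph G_t for one time step.
# Agents are identified with (hypothetical) positions on the line;
# an edge joins any two agents whose distance is less than 1.

def communication_graph(positions):
    n = len(positions)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if abs(positions[i] - positions[j]) < 1:
                edges.add((i, j))
    return edges

print(communication_graph([0.0, 0.4, 1.2, 3.0]))   # {(0, 1), (1, 2)}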
3.2. Bounds
The s-energy measures the total length of all the edges for s = 1 and counts their number for s = 0; the latter is usually infinite, so it is sensible to ask how big En(s) can be for 0 < s ≤ 1. On the lower bound front, we have En(1) = Ω(1/ρ)^{n/2} and En(s) = s^{1−n}(1/ρ)^{Ω(n)}, for any n large enough, s ≤ s0, and any fixed positive s0 < 1. Of course, the s-energy is useful mostly for its upper bounds5:

(2)

where … . This is essentially optimal. Fix an arbitrarily small ε > 0. A step t is called trivial if |xi(t) − xj(t)| < ε for each (i, j) ∈ Gt. The maximum number Cε of nontrivial steps is bounded by ε^{−s}En(s); hence,

(3)

which is optimal if ε is not too small. For reversible sequences and any 0 < s ≤ 1,

(4)

Convergence in the reversible case is polynomial: if ε < ρ/n, then ‖x(t) − π^T x‖_2 ≤ ε, for t = O(ρ^{−1}n^2 |log ε|). This bound is optimal. In particular, we can specialize it to the case of random walks in undirected graphs and retrieve the usual mixing times.

3.3. Why the s-energy?
Let conv P denote the convex hull of the points formed by the rows of the matrix P. We have the "Russian doll" nesting structure (Figure 3): conv(Pt ··· P1) ⊆ conv(Pt−1 ··· P1) ⊆ ··· ⊆ conv(P1).
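To see the nesting at work, the short Python sketch below (using NumPy; the random row-stochastic matrices are purely illustrative) multiplies a few factors and checks one consequence of the Russian-doll structure, namely that the coordinate-wise range of the rows of the running product can only shrink:

# Sketch: the "Russian doll" nesting for products of row-stochastic matrices.
# Rows of P_t ... P_1 are convex combinations of the rows of P_{t-1} ... P_1,
# so the [min, max] interval of every column of the running product nests.
import numpy as np

rng = np.random.default_rng(0)

def random_stochastic(n):
    m = rng.random((n, n))
    return m / m.sum(axis=1, keepdims=True)   # normalize each row

n = 4
product = random_stochastic(n)                # P_1
for t in range(2, 6):                         # apply P_2, ..., P_5 on the left
    lo, hi = product.min(axis=0), product.max(axis=0)
    product = random_stochastic(n) @ product
    assert np.all(product.min(axis=0) >= lo - 1e-12)   # intervals nest
    assert np.all(product.max(axis=0) <= hi + 1e-12)
    print(f"t={t}: column widths", np.round(product.max(axis=0) - product.min(axis=0), 4))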
The literature on stochastic matrices features a variety of
coefficients of ergodicity to help us measure how quickly the
Russian dolls deflate: eigenvalues, joint spectral radius,
width, diameter, volume, and so on22. This is how the convergence of products of stochastic matrices, which includes
the whole subject of Markov chain mixing, is established. By
seeking progress at each step, however, these methods cannot cope with stalling. The total s-energy gets around this by
factoring time out (as a Fourier transform would with a time
series) and producing a global deflation measure integrated
over the whole time horizon.
The s-energy is controlled by a single parameter s, which we can adjust at will to get the most out of the inequality Cε ≤ ε^{−s}E(s), typically choosing s so that (dE/ds)|s = E ln ε.
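As a toy illustration of that tuning step (the edge lengths and the value of ε below are made up), one can tabulate E(s), scan s over (0, 1] for the value minimizing ε^{−s}E(s), and check that the optimum sits where dE/ds ≈ E ln ε:

# Sketch: tuning s in the bound C_eps <= eps**(-s) * E(s).
# The edge lengths and eps are toy values chosen for illustration;
# E(s) is the sum of the s-th powers of the edge lengths.
import math

lengths = [0.9, 0.5, 0.1, 0.02]   # hypothetical edge lengths
eps = 0.3

def E(s):
    return sum(d ** s for d in lengths)

def dE(s):
    return sum(d ** s * math.log(d) for d in lengths)

# Grid search over s in (0, 1] for the value minimizing eps**(-s) * E(s).
best_s = min((k / 1000 for k in range(1, 1001)),
             key=lambda s: eps ** (-s) * E(s))
print("best s:", best_s, " bound:", eps ** (-best_s) * E(best_s))
# At an interior optimum, dE/ds is close to E * ln(eps).
print("dE/ds:", dE(best_s), " E*ln(eps):", E(best_s) * math.log(eps))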
We sketch the proof of (2), beginning with the case s < 1. The argument relies on an important device: the flow tracker.
Think of it as breadth-first search in a dynamic graph. A little
imagery will help. Pick agent 1 and dip it in water, keeping all
the other agents dry. Whenever an edge of Gt links a dry agent
to a wet one, the dry one gets wet. As soon as all the agents
become wet (if ever), dry them all except agent 1; repeat.
[1] t0 ← 0.
[2] Repeat forever:
    [2.1] Wt0 ← {1}.
    [2.2] For t = t0, t0 + 1, …, ∞:
              Wt+1 ← Wt ∪ { i | ∃ (i, j) ∈ Gt & j ∈ Wt }.
    [2.3] If |W∞| = n then t0 ← min{ t > t0 : |Wt| = n } else stop.
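For concreteness, here is a Python sketch of the flow tracker run over a short, made-up sequence of graphs Gt; the infinite loop of step [2] simply ends when the finite sequence runs out:

# Sketch: the flow tracker over a finite, made-up sequence of graphs G_t.
# Each G_t is a set of undirected edges on agents 0..n-1; agent 0 plays the
# role of agent 1 in the pseudocode and is the only agent never dried.

def flow_tracker(graphs, n):
    """Yield the index of each step after which all n agents are wet."""
    wet = {0}                                    # step [2.1]
    for t, edges in enumerate(graphs):           # step [2.2]
        newly = ({i for (i, j) in edges if j in wet} |
                 {j for (i, j) in edges if i in wet})
        wet |= newly
        if len(wet) == n:                        # step [2.3]: epoch ends
            yield t
            wet = {0}                            # dry all but agent 0; repeat

# Hypothetical 4-agent run; wetness spreads one hop per step.
graphs = [{(0, 1)}, {(1, 2)}, {(2, 3)}, {(0, 1)}, {(1, 2)}, {(2, 3)}]
print(list(flow_tracker(graphs, n=4)))           # [2, 5]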
Let Wt denote the set of wet agents at time t, which always
includes agent 1. The assignments of t0 in step [2.3] divide
the timeline into epochs, time intervals during which either
all agents become wet or, failing that, the flow tracker