Figure 7. The atom c of the SP maps via f to a cell intersecting two atoms.
Example of a chaotic diffusive influence system with n = 4
and d = 1: the first two agents stay on opposite sides of the
origin; at any time, the one farther from the origin, agent
i ∈ {1, 2}, moves to their midpoint, that is, xi ← (x1 + x2)/2.
Agent 3 is fixed at x3 = 1. Agent 4 moves midway toward agent 1
if the latter is moving and midway toward agent 3 if agent 2
is the one moving. To see why the system has positive topological entropy (the usual sign of chaos), it suffices to consider the infinite bit string s0s1s2 . . . , where st = 0/1 depends
on whether agent 1 or 2 is moving at time t. If agent 2 is initialized at x2 = 1 and agent 1 anywhere in (−1, 0), the string
matches the binary expansion of 1/(1 − x1); in other words,
predicting the t-th step requires knowing the initial placement of agent 1 with an error exponentially small in t.
5.1. Energy vs. entropy
As in the Ising model, the system mediates a clash
between two opposing “forces”: one, caused by the map’s
discontinuities, is “entropic” and leads to chaos; the other
one, related to the Lyapunov exponents, is energetic and
pulls the system toward an attracting set within which the
dynamics is periodic. The goal is to show that, outside a
vanishingly small “critical” region in parameter space,
entropy always loses. What does this mean? If, unlike in
Figure 7, the iterated image of any ball b never intersected
the SP hyperplanes, it would merrily bounce around until
eventually periodicity kicked in. In the figure, f3(b) does
not oblige and splits into two smaller bodies. Both will
bounce around until possibly splitting again and so on.
If this branching process gets out of control, chaos will
ensue. To squelch this entropic process and induce periodicity, we have the stochasticity of the matrices on our
side: it causes the ball b to shrink and dissipate energy.
Unlike the classical Ising model, however, the system has
a single phase outside the critical region.
Entropy against energy: which one will win? For entropy
to lose out, the ball b must avoid splitting up too frequently.
This can be expressed by an (infinite) system of linear
inequalities. Feasibility hinges on a type of matrix rigidity
question: in this case, given a certain matrix, how many rows
must be removed before we can express the first column as
a linear combination of the others? Periodicity requires that
this number be high. The matrix in question is extracted
from the system’s stochastic matrices and the SP equations,
hence is highly structured.
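The rigidity question can be stated operationally. Here is a brute-force sketch (illustrative only; the matrices arising in the actual proof are far larger and structured, and admit no such enumeration): find the fewest rows whose removal leaves the first column expressible as a linear combination of the others.

```python
import itertools
import numpy as np

def first_column_rigidity(M):
    # Smallest number of rows to delete so that, in the surviving
    # submatrix, column 0 is a linear combination of the other columns.
    # Column 0 lies in the span of the rest iff appending it to them
    # does not raise the rank.
    m = M.shape[0]
    for k in range(m + 1):
        for kept in itertools.combinations(range(m), m - k):
            A = M[list(kept), :]
            if np.linalg.matrix_rank(A[:, 1:]) == np.linalg.matrix_rank(A):
                return k
    return m

M = np.vstack([np.eye(3), np.ones(3)])
print(first_column_rigidity(M))  # → 2
```

Periodicity requiring this number to be high matches the intuition: a rigid first column forces the ball to keep respecting many linear constraints at once.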
Our sketchy explanation skipped over the main source
of difficulty. The ball b in fact does not shrink in all directions. Take (1, . . . , 1)T: it is a principal right-eigenvector
for the eigenvalue 1, so we would not see much contraction in that direction. Worse, the noncontracting principal eigenspace can have arbitrarily high dimension dt. To
prove that the critical region has measure zero requires
a firm analytical grip on dt. At time t, the dimension dt is
equal to the number of essential communicating classes in
the Markov chain P( f t(x)). To keep track of dt, a number
that can change constantly, we extend the flow tracker to
dynamic directed graphs.
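For a concrete handle on dt, the sketch below (pure Python, illustrative; function and variable names are ours) counts the essential communicating classes of a stochastic matrix: the strongly connected classes of its support graph that no positive entry leads out of.

```python
def essential_class_count(P):
    # Communicating classes of a stochastic matrix P are the strongly
    # connected components of its support graph; a class is essential
    # (closed) if no positive entry leads out of it.
    n = len(P)
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                      # transitive closure
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            classes.append(cls)
            seen |= cls
    def closed(c):
        return all(P[i][j] == 0 for i in c for j in range(n) if j not in c)
    return sum(1 for c in classes if closed(c))
```

For example, a chain in which states 0 and 1 exchange mass while state 2 leaks into them has a single essential class {0, 1}, so dt = 1; in the dynamic setting, P(f t(x)) changes with t and this count must be tracked on the fly.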
6. An algorithmic calculus
As the system evolves, we are in a position to isolate certain
subsystems and treat them recursively. If, during a certain
time interval, the communication network consists of two
dynamic subgraphs A, B with no directed edges from B to A,
then we can break down the system during that period into
B and C, where C consists of A together with the contraction
of B into a single vertex. This recursive treatment (“dynamic
renormalization”) can only be accomplished algorithmically since we do not know the structure of the recursive
systems ahead of time—what with edges coming and going
all the time. (Observe the difference with the Ising model,
where the particles sit on a lattice and the coarse-graining
can be structured prior to action.) We mention another difference, this time with classical algorithms. The decision
tree of influence systems (called the coding tree) is infinite,
so any perturbation has rippling effects that translate into
infinitely many conditions; this explains the need to deal
with infinite series such as the total s-energy or infinite sets
of algebraic conditions as in the matrix rigidity problem
mentioned earlier.
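One renormalization step can be sketched as a plain graph contraction, under the stated assumption that no directed edges run from B to A (names and representation are ours, for illustration):

```python
def renormalize_step(edges, B, label="B*"):
    # Contract the vertex set B of a directed graph into a single
    # super-vertex, leaving A intact; the result is the edge set of
    # the coarse-grained system C. Assumes no edge runs from B back
    # to the rest of the graph (the condition in the text).
    assert not any(u in B and v not in B for u, v in edges)
    contracted = set()
    for u, v in edges:
        u2 = label if u in B else u
        v2 = label if v in B else v
        if u2 != v2:                # drop self-loops on the super-vertex
            contracted.add((u2, v2))
    return contracted

print(renormalize_step({(1, 2), (2, 3), (1, 4), (3, 4)}, B={3, 4}))
# → {(1, 2), (2, 'B*'), (1, 'B*')}
```

The point of the text stands out in code form: which vertices belong to B is only known at run time, edge by edge, so the contraction schedule cannot be fixed in advance as it can on the Ising lattice.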
By scale invariance and convexity, we may confine the
phase space to [0, 1]n. Let Md denote the union of all the SP
hyperplanes shifted by a small d. It is useful to classify the
initial states by how long it takes their orbits to hit Md , if
ever. With f 0 = id and min ∅ = ∞, we define the label l(x) of
x ∈ [0, 1]n as the minimum integer t such that f t (x) ∈ Md. The
point x is said to vanish at time l(x) if its label is finite. The
points that do not vanish before time t form the set St: we
have S0 = [0, 1]n; and, for t > 0, St = {x ∈ St − 1 : f t − 1(x) ∉ Md}.
Each of St’s connected components is specified by a set of
strict linear inequalities in Rn, so St is a union of disjoint
open n-cells, whose number we denote by #St. Each cell of St + 1
lies within a cell of St. The limit set S∞ = ∩ t ≥ 0 St collects the
points that never vanish. We say that the system is nesting
at t if St = St + 1. The minimum such t (or ∞ if there is none) is called the
nesting time ν of the system. Labels cannot be skipped: if k