this constraint. As long as routing costs are stable, nodes
can assume the gradient is consistent and avoid exchanging
unnecessary packets.
2.3. A general mechanism
The examples above described how two very different protocols can both address a design tension by reducing a problem to maintaining data consistency. Both examples place
the same requirements on a data consistency mechanism:
it needs to resolve inconsistencies quickly, send few packets when data is consistent, and require very little state. The
Trickle algorithm, discussed in the next section, meets these
three requirements.
3. Trickle
The Trickle algorithm establishes a density-aware local
broadcast with an underlying consistency model that guides
when a node communicates. When a node’s data does not
agree with its neighbors, it communicates quickly to resolve the inconsistency. When nodes agree, they slow their
communication rate exponentially, such that in a stable
state nodes send at most a few packets per hour. Instead of
flooding a network with packets, the algorithm controls the
send rate so each node hears a small trickle of packets, just
enough to stay consistent. Furthermore, by relying only on
local broadcasts, Trickle handles network repopulation, is
robust to network transience, loss, and disconnection, and
requires very little state (implementations use 4–11 bytes).
While Trickle was originally designed for reprogramming
protocols (where the data is the code of the program being
updated), experience has shown it to be a powerful mechanism that can be applied to a wide range of protocol design
problems. For example, routing protocols can use Trickle to
ensure that nodes in a given neighborhood have consistent,
loop-free routes. When the topology is consistent, nodes
occasionally gossip to check that they still agree, and when
the topology changes they gossip more frequently, until they
reach consistency again.
For the purpose of clearly explaining the reasons behind Trickle's design, all of the experimental results in this section are from simulation, in some cases from very high-level, abstract simulators. In practice, Trickle's simplicity means it works remarkably well in the far more challenging real world. The original Trickle paper,13 as well as Deluge9 and DIP,14 report experimental results from real networks.
3.1. Algorithm
Trickle’s basic mechanism is a randomized, suppressive
broadcast. A Trickle has a time interval of length τ and a redundancy constant k. At the beginning of an interval, a node sets a timer t in the range [τ/2, τ]. When this timer fires,
the node decides whether to broadcast a packet containing metadata for detecting inconsistencies. This decision
is based on what packets the node heard in the interval before t. A Trickle maintains a counter c, which it initializes to
0 at the beginning of each interval. Every time a node hears
a Trickle broadcast that is consistent with its own state, it
increments c. When it reaches time t, the Trickle broadcasts
if c < k. Randomizing t spreads transmission load over a single-hop neighborhood, as nodes take turns being the first node to decide whether to transmit. Figure 3 summarizes Trickle's parameters.

Figure 3: Trickle's parameters.
   τ      Communication interval length
   t      Timer value in range [τ/2, τ]
   c      Communication counter
   k      Redundancy constant
   τ_l    Smallest τ
   τ_h    Largest τ
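As a concrete illustration, the per-interval logic can be sketched in Python as follows. This is a sketch under the description above, not code from any Trickle implementation: the class and method names are invented for exposition, and the interval doubling up to τ_h and reset to τ_l reflect the exponential slowdown and quick recovery described at the start of this section.

    import random

    class TrickleTimer:
        """Illustrative sketch of Trickle's per-interval logic (not a reference implementation)."""

        def __init__(self, tau_l, tau_h, k):
            self.tau_l = tau_l          # smallest interval length
            self.tau_h = tau_h          # largest interval length
            self.k = k                  # redundancy constant
            self.tau = tau_l            # current interval length
            self._begin_interval()

        def _begin_interval(self):
            self.c = 0                                        # consistent packets heard this interval
            self.t = random.uniform(self.tau / 2, self.tau)   # timer fires in [tau/2, tau]

        def hear_consistent(self):
            self.c += 1                 # hearing k or more of these suppresses our broadcast

        def hear_inconsistent(self):
            # Shrink the interval so the neighborhood resolves the inconsistency quickly
            # (assumed behavior, matching the "communicates quickly" description above).
            if self.tau > self.tau_l:
                self.tau = self.tau_l
                self._begin_interval()

        def timer_fired(self):
            # Broadcast metadata only if fewer than k consistent packets were heard.
            return self.c < self.k

        def interval_ended(self):
            # Slow down exponentially while the neighborhood stays consistent.
            self.tau = min(2 * self.tau, self.tau_h)
            self._begin_interval()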
3.2. Scalability
Transmitting only if c < k makes a Trickle density aware, as
it limits the transmission rate over a region of the network
to a factor of k. In practice, the transmission load a node observes over an interval is O(k · log(d)), where d is the network density. The base of the logarithm depends on the packet loss rate PLR: it is 1/PLR.
This logarithmic behavior represents the probability that
a single node misses a number of transmissions. For example, with a 10% loss rate, there is a 10% chance that a node will
miss a single packet. If a node misses a packet, it will transmit, resulting in two transmissions. There is correspondingly
a 1% chance a node will miss two packets from other nodes,
leading to three transmissions. In the extreme case of a 100%
loss rate, each node is by itself: transmissions scale linearly.
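Turning this argument into numbers, the short Python snippet below is a back-of-the-envelope estimate (not a formula from the Trickle papers): it assumes transmissions continue until fewer than one of the d nodes is expected to have missed all of them, giving roughly k + log_{1/PLR}(d) transmissions per interval.

    import math

    def estimated_transmissions(d, plr, k=1):
        # Rough estimate following the argument above: after k packets plus about
        # log_{1/PLR}(d) more, fewer than one node is expected to have missed every
        # transmission, so the remaining nodes stay suppressed.
        if plr <= 0.0:
            return float(k)             # no loss: k packets suppress everyone
        if plr >= 1.0:
            return float(d)             # total loss: every node transmits
        return k + math.log(d) / math.log(1.0 / plr)

    for plr in (0.1, 0.2, 0.4):
        print(plr, [round(estimated_transmissions(d, plr), 1) for d in (2, 4, 8, 16, 32)])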
Figure 4 shows this scaling. The number of transmissions
scales logarithmically with density, and the slope of the line (the base of the logarithm) depends on the loss rate. These results
come from a Trickle-specific algorithmic simulator we implemented to explore the algorithm’s behavior under controlled conditions. Consisting of little more than an event
queue, this simulator allows configuration of all of Trickle’s
parameters, run duration, and the boot time of nodes. It
models a uniform packet loss rate (same for all links) across
a single-hop network. Its output is a packet send count.
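The sketch below is a simplified Python stand-in for such a simulator, not the original implementation: it runs one Trickle interval over a single-hop network with a uniform, independent packet loss rate and reports the average number of transmissions.

    import random

    def simulate_interval(num_nodes, k, loss_rate, trials=1000):
        # One Trickle interval in a single-hop network: nodes fire in random timer
        # order and transmit only if they have heard fewer than k consistent packets;
        # each reception is lost independently with probability loss_rate.
        total = 0
        for _ in range(trials):
            order = list(range(num_nodes))
            random.shuffle(order)               # random timer order
            heard = [0] * num_nodes             # consistent packets heard per node
            sent = 0
            for sender in order:
                if heard[sender] < k:
                    sent += 1
                    for receiver in range(num_nodes):
                        if receiver != sender and random.random() > loss_rate:
                            heard[receiver] += 1
            total += sent
        return total / trials

    # Transmissions grow roughly logarithmically in density, with a steeper slope
    # at higher loss rates (compare with Figure 4).
    for loss in (0.0, 0.2, 0.4, 0.6):
        print(loss, [round(simulate_interval(n, k=1, loss_rate=loss), 1)
                     for n in (1, 2, 4, 8, 16, 32)])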
Figure 4: Trickle's transmissions per interval scale logarithmically with density. The base of the logarithm is a function of the packet loss rate (the percentages). [Plot: transmissions/interval (0–12) versus number of nodes (1–32), one curve per loss rate: 0%, 20%, 40%, 60%.]