3.3. Synchronization
The scaling shown in Figure 4 assumes that all nodes are
synchronized, such that the intervals during which they are
awake and listening to their radios line up perfectly. Inevitably, this kind of time synchronization imposes a communication, and therefore energy, overhead. While some networks
can provide time synchronization to Trickle, others cannot.
Therefore, Trickle is designed to work in both the presence and
absence of synchronization.
Trickle chooses t in the range [τ/2, τ] rather than (0, τ] because the latter causes the transmission load in unsynchronized networks to scale as O(√d). This undesirable scaling occurs due to the short-listen problem, where some subset of motes gossip soon after the beginning of their interval. They listen for only a short time, before anyone else has a chance to speak up. This is not a problem if all of the intervals are synchronized, since the first gossip will quiet everyone else. However, if nodes are not synchronized, a node may start its interval just after another node's broadcast, resulting in missed messages and increased transmission load.
Unlike loss, where the extra O(log(d)) transmissions keep the worst-case node that missed several packets up to date, the additional transmissions due to the short-listen problem are completely wasteful. Choosing t in the range [τ/2, τ] removes this problem: it defines a "listen-only" period in the first half of an interval. A listening period improves scalability by enforcing a simple constraint: if sending a message guarantees a silent period of some time T that is independent of density, then the send rate is bounded above, independent of the density. When a mote transmits, it suppresses all other nodes for at least the length of the listening period. Figure 5 shows how a listen period of τ/2 bounds the total sends in a lossless single-hop network to 2k. With loss, transmissions scale as O(2k · log(d)) per interval, returning scalability to the O(log(d)) goal.
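The effect of the listen-only period can be illustrated with a toy event-driven simulation of a single lossless, single-hop cell with unsynchronized intervals. This is our own sketch, not the paper's simulator; all parameter values and names are illustrative:

```python
import random

def sends_per_tau(n, k=1, tau=1.0, listen_only=True, intervals=100, seed=2):
    """Toy simulation of one lossless, single-hop, unsynchronized Trickle cell.

    Each node's intervals start at a random phase. Per interval, a node
    draws t from [tau/2, tau] when listen_only is True, else from (0, tau],
    and transmits at t only if it has heard fewer than k transmissions
    since its interval began. Returns average sends per tau of time.
    """
    rng = random.Random(seed)
    phase = [rng.random() * tau for _ in range(n)]
    events = []  # (candidate transmit time, node, interval index)
    lo = tau / 2 if listen_only else 0.0
    for m in range(intervals):
        for i in range(n):
            events.append((phase[i] + m * tau + rng.uniform(lo, tau), i, m))
    events.sort()
    sent = []  # times of actual transmissions; everyone hears everyone
    for when, i, m in events:
        start = phase[i] + m * tau  # this node's current interval start
        heard = sum(1 for s in sent if start <= s < when)
        if heard < k:  # counter below k: the node transmits
            sent.append(when)
    return len(sent) / (intervals * tau)
```

With `listen_only=True`, any transmission silences every node whose interval contains it, and later intervals cannot fire for at least τ/2, so the rate stays below 2k regardless of density; with `listen_only=False`, freshly started intervals can fire almost immediately, and the rate grows with density.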
3.4. Controlling τ
A large τ (gossiping interval) leads to a low communication
overhead, but propagates information slowly. Conversely,
Figure 5: Without a listen-only period, Trickle's transmissions scale with the square root of the density when intervals are not synchronized. With a listen-only period of duration τ/2, the transmissions per interval asymptotically approach 2k. The black line shows how Trickle scales when intervals are synchronized. These results are from lossless networks.
Figure 6: Trickle pseudocode.

  Event                       Action
  τ expires                   Double τ, up to τ_h. Reset c, pick a new t*
  t expires                   If c < k, transmit
  Receive consistent data     Increment c
  Receive inconsistent data   Set τ to τ_l. Reset c, pick a new t*

  *t is picked from the range [τ/2, τ]
a small τ imposes a higher communication overhead, but
propagates data more quickly. These two goals, rapid propagation and low overhead, are fundamentally at odds: the former requires communication to be frequent, while the latter
requires it to be infrequent.
By dynamically scaling τ, Trickle can quickly make data consistent at a very small cost. τ has a lower bound, τ_l, and an upper bound, τ_h. When τ expires without a node receiving a new update, τ doubles, up to a maximum of τ_h. When a node detects a data inconsistency (e.g., a newer version number in dissemination, a gradient constraint violation in collection), it resets τ to τ_l.
Essentially, when there is nothing new to say, motes gossip infrequently: τ is set to τ_h. However, as soon as a mote hears something new, it gossips more frequently, so those who have not heard the new data receive it quickly. The chatter then dies down, as τ grows from τ_l to τ_h.
By adjusting τ in this way, Trickle can get the best of both worlds: rapid consistency and low overhead when the network is consistent. The cost per inconsistency (shrinking τ) is approximately log(τ_h/τ_l) additional sends. For a τ_l of 1 s and a τ_h of 1 h, this is a cost of 11 packets to obtain a 3000-fold decrease in the time it takes to detect an inconsistency (or, from the other perspective, a 3000-fold decrease in maintenance overhead). The simple Trickle policy, "every once in a while, transmit unless you have heard a few other transmissions," can both inexpensively maintain a consistent network and quickly inform nodes when there is an inconsistency.
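The arithmetic behind these figures is easy to check directly; the 1 s and 1 h bounds below are the ones quoted above:

```python
import math

tau_l = 1.0     # seconds: the 1 s lower bound from the text
tau_h = 3600.0  # seconds: the 1 h upper bound from the text

# Recovering from an inconsistency costs roughly one transmission per
# doubling as tau climbs back from tau_l to tau_h:
extra_sends = math.log2(tau_h / tau_l)  # ~11.8, i.e., about 11 packets

# Detection latency (and, symmetrically, steady-state maintenance
# overhead) differ by the ratio of the two bounds:
ratio = tau_h / tau_l  # 3600, the "3000-fold" figure after rounding
```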
Figure 6 shows pseudocode for the complete Trickle algorithm.
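As a concrete illustration, the state machine of Figure 6 can be sketched as a small class. This is our sketch of the rules, not any implementation's API; the class and method names are ours:

```python
import random

class TrickleTimer:
    """Sketch of the Trickle state machine from Figure 6.

    tau doubles on each quiet interval up to tau_h and resets to tau_l
    on inconsistency; t is drawn from [tau/2, tau]; a transmission at t
    is suppressed if k or more consistent messages were heard this
    interval.
    """

    def __init__(self, tau_l, tau_h, k, rng=None):
        self.tau_l, self.tau_h, self.k = tau_l, tau_h, k
        self.rng = rng or random.Random()
        self.tau = tau_l
        self._new_interval()

    def _new_interval(self):
        # Reset c, pick a new t from [tau/2, tau].
        self.c = 0
        self.t = self.rng.uniform(self.tau / 2, self.tau)

    def hear_consistent(self):
        # "Receive consistent data": increment the counter.
        self.c += 1

    def hear_inconsistent(self):
        # "Receive inconsistent data": set tau to tau_l, start over.
        self.tau = self.tau_l
        self._new_interval()

    def t_expired(self):
        # "t expires": transmit only if fewer than k messages were heard.
        return self.c < self.k

    def interval_expired(self):
        # "tau expires": double tau, up to tau_h, and begin a new interval.
        self.tau = min(2 * self.tau, self.tau_h)
        self._new_interval()
```

A driver would call `t_expired()` when the node's timer fires at t, and `interval_expired()` at the end of each interval; the suppression and backoff behavior falls out of the counter and the capped doubling.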
3.5. Case study: Maté
Maté is a lightweight bytecode interpreter for wireless sensornets.11 Programs are tiny sequences of optimized bytecodes. The Maté runtime uses Trickle to install new programs in a network, by making all nodes consistent with the most recent version of a script.
Maté uses Trickle to periodically broadcast version summaries. In all experiments, code routines fit in a single packet (30 bytes). The runtime registers routines with a Trickle propagation service, which then maintains all of the necessary timers and broadcasts, notifying the runtime when it installs new code. Maté uses a very simple consistency resolution mechanism: it broadcasts the missing routines three times, 1, 3, and 7 s after hearing there is an inconsistency.
Figure 7 shows simulation results of Maté’s behavior during
a reprogramming event. These results come from the TOSSIM
simulator,12 which simulates entire sensornet applications and