was no possibility of them spotting a
balloon. These two factors played a
key role in the success of the MIT approach, as illustrated by the fact that
the depth of the invite tree reached
15 people, and approximately one
in three tweets spreading information
about the team originated outside the
U.S. Distributing the reward money
more broadly motivated a much larger
number of people (more than 5,000)
to join the team, including some from
outside of the U.S. who could be rewarded for simply knowing someone
who could find a balloon. This strategy
combined the incentive of personal
gain with the power of social networks
to connect people locating each balloon with the MIT team.
The MIT team received more than
200 submissions of balloon sightings,
of which 30 to 40 turned out to be accurate. Given the considerable noise in the
submission data, including deliberate attempts at misdirection, the team
had to develop a strategy to accurately
identify the correct sightings. It did
not have time to build a sophisticated
machine-learning system to automate
the process, nor did it have access to
a trusted human network to verify
balloon sightings. Instead, most of
its strategies relied on using human
reasoning to analyze the information
submitted with the balloon sightings
and eliminate submissions with inconsistencies.
The first strategy was to observe the
patterns of submissions about a certain
balloon site. Since the balloons were
all located in public spaces, each one
tended to elicit multiple submissions.
Multiple submissions at a specific location increased the probability of a report being accurate. However, those deliberately falsifying balloon sightings
also submitted multiple sightings for
each false location. To filter out these
submissions, the team observed differing patterns in how balloon locations
were reported (see Figure 3). Multiple
submissions about a real balloon location tended to differ a little from one
another, reflecting natural variation
in representing a certain location: address, crossroads, nearby landmarks.
Malicious submissions tended to have
identical representations for a single
location, making them suspicious.
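This duplicate-detection heuristic can be sketched in a few lines; the field names, grouping rule, and thresholds below are illustrative assumptions, not the MIT team's actual implementation:

```python
def flag_suspicious(submissions):
    """Group balloon sightings by reported coordinates and judge each
    site by how its reports are phrased.

    `submissions` is a list of (location_text, lat, lon) tuples
    (hypothetical format). Real sightings tend to arrive as several
    differently worded reports of roughly the same spot (address,
    crossroads, landmark); fabricated ones tend to repeat one
    identical description verbatim.
    """
    by_site = {}
    for text, lat, lon in submissions:
        # Bucket reports by rounded coordinates (~1 km grid).
        key = (round(lat, 2), round(lon, 2))
        by_site.setdefault(key, []).append(text)

    verdicts = {}
    for site, texts in by_site.items():
        distinct = len(set(texts))
        if len(texts) >= 3 and distinct == 1:
            # Many word-for-word identical reports: suspicious.
            verdicts[site] = "suspicious"
        elif len(texts) >= 2:
            # Multiple independently phrased reports: likely real.
            verdicts[site] = "likely real"
        else:
            verdicts[site] = "unverified"
    return verdicts
```

The natural variation in how people describe the same public place is what the heuristic exploits: exact textual agreement across "independent" witnesses is itself the red flag.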
Another simple strategy the team
used involved comparing the IP ad-
dress of the submission with where
a balloon was reported found; for ex-
ample, one submission reporting a
balloon in Florida came from an IP ad-
dress in the Los Angeles area. A simple
IP trace and common sense filtered
out such false submissions.
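Assuming the submitter's IP address has already been geolocated to approximate coordinates (e.g., via a commercial IP-geolocation database), the consistency check reduces to a distance comparison. The 300 km cutoff below is an illustrative threshold, not the team's actual rule:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def plausible_sighting(ip_lat, ip_lon, balloon_lat, balloon_lon,
                       max_km=300):
    """Flag a report as implausible when the submitter's IP-derived
    location is far from the balloon they claim to have just seen
    (e.g., a Florida sighting submitted from a Los Angeles IP).
    """
    return haversine_km(ip_lat, ip_lon,
                        balloon_lat, balloon_lon) <= max_km
```

A check like this is deliberately coarse: IP geolocation is only city-accurate at best, so it serves to eliminate gross contradictions rather than to confirm sightings.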
Many submissions included pictures, some contrived to confirm misleading submissions. Most altered pictures involved shots of a balloon from
a distance and lacked the DARPA agent
holding the balloon and the DARPA
banner (an unannounced detail). Figure 4 shows examples of authentic and contrived pictures.

Figure 3. Typical real (top) and false (bottom) locations of balloons, with the bottom map depicting five submissions with identical locations.

Figure 4. Typical real (left) and contrived (center and right) pictures of balloons.