These events offer several lessons. The first is that
digitization drives botification; the use of
technology in a realm of human
activity enables the creation of
software to act in lieu of humans.
A second lesson is that when
they become sufficiently sophisticated, numerous, and embedded
within the human systems in which
they operate, these automated
scripts can significantly
shape those systems. While
the May 2010 event was a widely
observed manifestation of the
influence of socialbots, it is quite
likely that most robotic activity in
the stock market goes completely
unnoticed, and that high-frequency
trading firms—and their financial
robots—exert considerable hidden influence over the pricing and
behavior of the marketplace, as
well as over the humans who gain
and lose value in that marketplace.
Social robots (socialbots)—
software agents that interact on social
networking services (SNSs)—have
been receiving attention in the
press lately. In the past, automated
scripts have been used in email,
chat rooms, and other platforms
for online interaction.
What distinguishes these “social”
bots from their historical predecessors is a focus on creating
substantive relationships among
human users—as opposed to
financial resources—and shaping the aggregate social behavior
and patterns of relationships
between groups of users online.
The gains and losses will be
in the realm of social capital
rather than financial capital,
but the stakes are just as high.
The ethical stakes are similarly
high. While much has been made
of the dark side of social robotics, several positive applications
of this technology are emerging.
Swarms of bots could be used to
heal broken connections between
infighting social groups and bridge
existing social gaps. Socialbots
could be deployed to leverage
peer effects to promote more civic
engagement and participation in
elections [5]. Sufficiently advanced
groups of bots could detect erroneous information being spread
virally in an SNS and work in
concert to inhibit the spread of
that disinformation by countering with well-sourced facts [6].
Moreover, the bots themselves
may significantly advance our
understanding of how relationships form on these platforms,
and of the underlying mechanisms
that drive social behaviors online.
Despite these potential benefits,
it would be naive not to consider
that the technology may also
enable novel malicious uses. The
same bots that can be used to
surgically bring together communities of users can also be used to
shatter those social ties. The same
socialbot algorithms that might
improve the quality and fidelity
of information circulated in social
networks can be used to spread
misinformation. Moreover, the
fact that many of these automated
systems operate as if they were real
humans almost reflexively brings
up the many questions around the
deceptive qualities of the technology. The ethical questions raised
by the use and potential abuse
of socialbots make this type of
research a concern both within
and beyond the academic setting.
This is not a resolved issue.
Against the backdrop of this
significant and continuing debate,
research into the uses and
implications of this technology
continues to progress. One of our
goals in composing this article is
to raise awareness of past, present,
and future socialbot applications
and to enable a broader spectrum
of interested participants and
observers to address these issues
directly and transparently.
A Socialbots Competition
@tinypirate and @AeroFade
In February 2011, the Web Ecology
Project organized a competition
to explore how socialbots could
influence changes in the social
graph of a subnetwork on Twitter.
The competition, Socialbots,
tasked each of three teams with
building software robots that