We do not purport to write the last word on social bot ethics and culpability. Ethics is simply too complex a domain to deal with fully in such a format. Nevertheless, some readily accessible guidance rooted in sound ethical thinking is in order.
For example, with the recent attention to the role of social bots in spreading misinformation in the form of "fake news," other social bots, such as Reuters News Tracer, are being created to ferret out such deceitful activity. The Bot Ethics procedure can help the social media community understand when these deceitful actions are indeed unethical. It further helps expand the community's focus beyond narrow (that is, only deceitfulness) and simplistic (that is, good or bad bot) assessments of social bot activity to attend to the complexities of ethical assessment. In short, the Bot Ethics procedure serves as a starting point and guide for ethics-related discussion among the various participants in a social media community as they evaluate the actions of social bots.
References
1. Aristotle. Nicomachean Ethics of Aristotle. E.P. Dutton, New York, 1911.
2. Ferrara, E. et al. The rise of social bots. Commun. ACM 59, 7 (July 2016), 96–104; DOI: 10.1145/2818717.
3. Gotterbarn, D., Miller, K., and Rogerson, S. Computer society and ACM approve software engineering code of ethics. Computer Society Connection (1999), 84–88.
4. Grisez, G. and Shaw, R. Beyond the New Morality: The Responsibilities of Freedom. University of Notre Dame Press, Notre Dame, IN, 1980.
5. Habermas, J. The Theory of Communicative Action, Volume 1: Reason and the Rationalization of Society. 1985.
6. Kallman, E.A. and Grillo, J.P. Ethical Decision Making and Information Technology. McGraw-Hill, New York, NY, 1996.
7. Mason, R.O., Mason, F.M., and Culnan, M. Ethics of Information Management. Sage Publications.
8. Morstatter, F. et al. A new approach to bot detection: Striking the balance between precision and recall.
9. Rawls, J. The justification of civil disobedience. Arguing about Law (2013), 244–253.
Carolina Alves de Lima Salge (email@example.com) is a doctoral candidate at the University of Georgia.
Nicholas Berente (firstname.lastname@example.org) is an associate professor at the University of Georgia.
Copyright held by authors.
The social bot finds every tweet with the term big data, replaces "big data" with "Batman," and then tweets the message as if it were its own. It obviously substitutes its words for others' words, but the satire makes it difficult to judge its ethics. Because the social bot might insult and embarrass some big-data advocates, the community must go beyond the act (deontology) to consider its consequences (teleology), and ask whether the potentially bad actions (for example, insult and embarrassment) outweigh, or supersede, the good (for example, pleasure through laughter) for the involved parties. Again, is the deception justifiable? Deception in the absence of supersession is likely to be unethical.
Violate Strong Norm?
Social bots that are legal and truthful can still behave unethically by violating strong norms that create more evil than good. Moral evils inflict "limits on human beings and contracts human life."4 Evil restrains instead of emancipating; evil actions reduce opportunities. Let us go back to Tay's racist comments on Twitter. Although neither illegal (First Amendment protections apply) nor deceitful, they violated the strong norm of racial equality. Social media companies like Twitter that temporarily lock or permanently suspend accounts that "directly attack or threaten other people on the basis of race" have established that the moral evil of racism outweighs the moral good of free speech. By applying Bot Ethics to Twitter's norms, we conclude that Tay's actions were unethical. Yet there are cases where social bots may violate strong norms and not act unethically, as with asking inappropriate questions ("What is your salary?"). Such violations do not create moral evils.
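The sequence of questions walked through above can be sketched as a simple decision procedure. This is our own illustrative encoding, not a formal specification of the Bot Ethics procedure; every function and parameter name here is an assumption we introduce for clarity.

```python
# Hypothetical sketch of the Bot Ethics questions discussed in the text:
# legality, then deception (and whether good consequences supersede it),
# then strong-norm violations that create moral evil. Names are ours.

def bot_ethics(is_legal: bool,
               is_deceitful: bool,
               deception_superseded_by_good: bool,
               violates_strong_norm: bool,
               creates_moral_evil: bool) -> bool:
    """Return True if the action is judged unethical by the procedure."""
    if not is_legal:
        return True  # illegal actions fail immediately
    if is_deceitful and not deception_superseded_by_good:
        return True  # deception in the absence of supersession
    if violates_strong_norm and creates_moral_evil:
        return True  # e.g., Tay's racist remarks
    return False     # the procedure raises no objection

# Tay: legal and truthful, but violated a strong norm creating moral evil.
print(bot_ethics(True, False, False, True, True))   # True (unethical)

# Inappropriate question ("What is your salary?"): violates a norm
# but creates no moral evil, so the procedure raises no objection.
print(bot_ethics(True, False, False, True, False))  # False
```

Note how the ordering mirrors the discussion: a norm violation that creates no moral evil passes, while deception passes only when the good it produces supersedes the bad.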
Culpability of Unethical Social Bot Behavior
Should the general social media community blame developers for the unethical behavior of their social bots? In the example of the algorithm that randomly generated a statement that it wanted to kill people, who is responsible for the death threat? The programmer? Who is responsible for Tay's remark about Hitler? Microsoft developers, or those teaching the social bot to generate racist statements? Similarly, who is responsible for the social bot buying the
Aristotle1 said we can only assign culpability if we know that individuals behaved voluntarily and knowingly. Involuntary situations likely do not apply to social bots. Developers who are coerced into doing something unethical without a choice may not be entirely culpable, but in the case of free enterprise there is always a choice. Therefore, culpability rests on the knowledge of the developers. Developers who knowingly create social bots to engage in unethical actions are clearly culpable. They should be punished if the evidence of their wrongdoing is convincing; the penalty must be consistent and proportional to the harm done, and those affected should be compensated.7
But what about situations where developers act unknowingly? On those occasions the community must determine whether the developers are culpably ignorant: did they ignore industry best practices in creating and testing their algorithms? If industry guidelines were not followed and the action was unethical, developers are culpable. However, developers who followed good development practices and incorporated current industry thinking, and yet whose social bot still acted unethically, deserve our pity and pardon, but they are not culpable. They should apologize, correct the problem immediately, learn from the experience, and communicate the occurrence to the development community. For example, Microsoft posted what it learned from Tay in blog form.
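The culpability assessment above is itself a small decision tree, following the Aristotelian criteria the authors cite: voluntariness, knowledge, and (for the unknowing) adherence to industry best practices. The sketch below is our own hypothetical encoding, not the authors' formal method; all names are assumptions.

```python
# Hypothetical sketch of the developer-culpability assessment described
# in the text. The branch order follows the discussion: voluntariness
# first, then knowledge, then culpable ignorance.

def developer_culpability(acted_voluntarily: bool,
                          acted_knowingly: bool,
                          followed_best_practices: bool) -> str:
    if not acted_voluntarily:
        # Coercion may reduce culpability, though the authors note that
        # in free enterprise there is always a choice.
        return "possibly reduced culpability"
    if acted_knowingly:
        # Punishment should be consistent and proportional to the harm,
        # with compensation for those affected.
        return "culpable"
    if not followed_best_practices:
        return "culpably ignorant"  # ignored industry best practices
    return "not culpable"           # deserves pity and pardon, not blame

# A developer who acted unknowingly but followed good practices:
print(developer_culpability(True, False, True))  # prints: not culpable
```

Even the "not culpable" outcome carries obligations in the article's account: apologize, correct immediately, learn, and communicate the occurrence to the development community.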