tives and of new technologies, neither of which is in short supply; spam is therefore likely to plague our society and our systems for the foreseeable future.
It is therefore the duty of the computing community to enact policies
and research programs to keep fighting against the proliferation of current
and new forms of spam. I conclude by suggesting three maxims that may guide future efforts in this endeavor:
1. Design technology with abuse in mind. Evidence suggests that, in the computing world, powerful new technologies are often abused beyond their original scope. Most modern-day technologies, such as the Internet, the Web, email, and social media, were not designed with built-in protection against attacks or spam. However, we cannot perpetuate a naive view of the world that ignores ill-intentioned attackers: new systems and technologies should be designed from their inception with abuse in mind.
2. Don’t forget the arms race. The fight against spam is a constant arms race between attackers and defenders, and as in most adversarial settings, the party with the highest stakes will prevail. Since each new technology brings new abuse, researchers should anticipate the need for countermeasures, so they are not caught unprepared when spammers abuse their newly designed technologies.
3. Blockchain technologies. The ability to carry out massive spam attacks in most systems exists predominantly due to the lack of authentication measures that reliably guarantee the identity of entities and the legitimacy of transactions on the system. The blockchain as a proof-of-work mechanism to authenticate digital personas (including in virtual realities), AIs, and others may prevent several forms of spam and mitigate the scale and impact of others.h
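The proof-of-work idea behind this maxim predates blockchains: in Hashcash-style schemes, a sender must burn CPU time to mint a "stamp" whose hash has many leading zero bits, while a receiver verifies it with a single hash. The sketch below illustrates that asymmetry; the stamp field layout and function names are illustrative choices, not the official Hashcash format.

```python
import hashlib
import time

def mint_stamp(resource: str, bits: int = 20) -> str:
    """Search for a counter such that SHA-1(stamp) has `bits` leading zero bits.

    The stamp layout (version:bits:date:resource::counter) loosely follows
    Hashcash but is illustrative, not the official specification.
    """
    date = time.strftime("%y%m%d")
    counter = 0
    while True:
        stamp = f"1:{bits}:{date}:{resource}::{counter}"
        digest = hashlib.sha1(stamp.encode()).digest()
        # SHA-1 yields 160 bits; the stamp is valid if the top `bits` bits are zero.
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp
        counter += 1

def verify_stamp(stamp: str, bits: int = 20) -> bool:
    """Verification is one hash: cheap per message, but minting is ~2^bits hashes,
    which is what makes bulk spamming uneconomical."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

# Low difficulty (12 bits, ~4,096 hashes on average) so the demo runs instantly.
stamp = mint_stamp("recipient@example.com", bits=12)
assert verify_stamp(stamp, bits=12)
```

Raising `bits` by one doubles the sender's expected work while the receiver's cost stays constant, which is the asymmetry any proof-of-work anti-spam scheme relies on.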
Spam is here to stay: let’s fight it.
The author would like to thank current and former members of the USC Information Sciences Institute’s MINDS research group, as well as of Indiana University’s CNetS group, for invaluable research collaborations and discussions on the topics of this work. The author is grateful to his research sponsors, including the Air Force Office of Scientific Research (AFOSR), award FA9550-17-1-0327, and the Defense Advanced Research Projects Agency (DARPA). Products and services mentioned in this article include: WhatsApp, Facebook Messenger, WeChat, Gmail, Microsoft Outlook, Hotmail, Cisco IronPort, Email Security Appliance (ESA), AOL Instant Messenger, Reddit, Twitter, and Google Duplex.
h It is worth noting that proof-of-work has been proposed to prevent spam email in the past; however, its feasibility remains debated, especially in its original non-blockchain-based form.26
1. Adler, B., Alfaro, L.D. and Pye, I. Detecting Wikipedia vandalism using WikiTrust. Notebook papers of CLEF 1
2. Allem, J. P., Ferrara, E., Uppu, S. P., Cruz, T. B. and Unger,
J.B. E-cigarette surveillance with social media data:
social bots, emerging topics, and trends. JMIR Public
Health and Surveillance 3, 4 (2017).
3. Almeida, T. A., Hidalgo, J. M. G. and Yamakami, A.
Contributions to the study of SMS spam filtering: new
collection and results. In Proceedings of the 11th ACM Symposium on Document Engineering. ACM, 2011.
4. Androutsopoulos, I., Koutsias, J., Chandrinos, K.V. and
Spyropoulos, C.D. An experimental comparison of
naive Bayesian and keyword-based anti-spam filtering
with personal e-mail messages. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2000.
5. Baeza-Yates, R. Bias on the Web. Commun. ACM 61, 6
(June 2018), 54–61.
6. Bessi, A. and Ferrara, E. Social bots distort the 2016
US Presidential election online discussion. First
Monday 21, 11 (2016).
7. Caruana, G. and Li, M. A survey of emerging
approaches to spam filtering. ACM Computing Surveys
44, 2 (2012), 9.
8. Chesney, R. and Citron, D. Deep Fakes: A Looming
Crisis for National Security, Democracy and Privacy.
The Lawfare Blog (2018).
9. Chhabra, S., Aggarwal, A., Benevenuto, F. and
Kumaraguru, P. Phi.sh/$ocial: The phishing landscape
through short URLs. In Proceedings of the 8th Annual
Collaboration, Electronic messaging, Anti-Abuse and
Spam Conference. ACM, 2011, 92–101.
10. Cranor, L. F. and LaMacchia, B. A. Spam! Commun. ACM
41, 8 (Aug. 1998), 74–83.
11. Crawford, M., Khoshgoftaar, T. M., Prusa, J. D., Richter,
A.D. and Najada, H.A. Survey of review spam detection
using machine-learning techniques. J. Big Data 2, 1
12. De Meo, P., Ferrara, E., Fiumara, G. and Provetti, A. On
Facebook, most ties are weak. Commun. ACM 57, 11
(Nov. 2014), 78–84.
13. Drucker, H., Wu, D. and Vapnik, V.N. Support vector
machines for spam categorization. IEEE Trans Neural
Networks 10 (1999).
14. Eykholt, K. et al. Robust physical-world attacks on
deep learning visual classification. In Proceedings of
the IEEE Conference on Computer Vision and Pattern
Recognition, 2018, 1625–1634.
15. Ferrara, E. Manipulation and abuse on social media.
ACM SIGWEB Newsletter Spring (2015), 4.
16. Ferrara, E., Varol, O., Davis, C., Menczer, F. and
Flammini, A. The rise of social bots. Commun. ACM
59, 7 (July 2016), 96–104.
17. Fumera, G., Pillai, I. and Roli, F. Spam filtering based on the analysis of text information embedded into images. J. Machine Learning Research 7 (Dec. 2006).
18. Gao, H., Hu, J., Wilson, C., Li, Z., Chen, Y. and Zhao, B. Y.
Detecting and characterizing social spam campaigns.
In Proceedings of the 10th ACM SIGCOMM Conference
on Internet Measurement. ACM, 2010, 35–47.
19. Ghosh, S. et al. Understanding and combating link
farming in the Twitter social network. In Proceedings
of the 21st International Conference on World Wide
Web. ACM, 2012, 61–70.
20. Goodman, J., Cormack, G.V. and Heckerman, D. Spam
and the ongoing battle for the inbox. Commun. ACM
50, 2 (Feb. 2007), 24–33.
21. Gupta, B.B., Tewari, A., Jain, A.K. and Agrawal, D.P.
Fighting against phishing attacks: state of the art and
future challenges. Neural Computing and Applications
28, 12 (2017), 3629–3654.
22. Hendler, J., Shadbolt, N., Hall, W., Berners-Lee, T. and
Weitzner, D. Web science: An interdisciplinary
approach to understanding the Web. Commun. ACM
51, 7 (July 2008), 60–69.
23. Jagatic, T.N., Johnson, N.A., Jakobsson, M. and Menczer, F. Social phishing. Commun. ACM 50, 10 (Oct. 2007), 94–100.
24. Jindal, N. and Liu, B. Opinion spam and analysis. In
Proceedings of the 2008 International Conference on
Web Search and Data Mining. ACM, 219–230.
25. Kim, H. et al. Deep Video Portraits. arXiv preprint
26. Laurie, B. and Clayton, R. Proof-of-work proves not to work; version 0.2. In Workshop on Economics and Information Security, 2004.
27. Liu, B. Sentiment analysis and opinion mining.
Synthesis Lectures on Human Language Technologies
5, 1 (2012), 1–167.
28. Liu, Y., Gummadi, K.P., Krishnamurthy, B. and Mislove,
A. Analyzing Facebook privacy settings: User
expectations vs. reality. In Proceedings of the 2011
ACM SIGCOMM Conference on Internet Measurement
Conference. ACM, 61–70.
29. Mukherjee, A. et al. Spotting opinion spammers using
behavioral footprints. In Proceedings of the 19th ACM
SIGKDD International Conference on Knowledge
Discovery and Data Mining. ACM, 2013, 632–640.
30. Mukherjee, A., Liu, B. and Glance, N. Spotting fake
reviewer groups in consumer reviews. In Proceedings
of the 21st International Conference on World Wide
Web. ACM, 2012, 191–200.
31. Spirin, N. and Han, J. Survey on Web spam detection: Principles and algorithms. ACM SIGKDD Explorations Newsletter 13, 2 (2012), 50–64.
32. Subrahmanian, V.S. et al. The DARPA Twitter Bot
Challenge. Computer 49, 6 (2016), 38–46.
33. Suwajanakorn, S., Seitz, S.M. and Kemelmacher-Shlizerman, I. Synthesizing Obama: Learning lip sync
from audio. ACM Trans Graphics (2017).
34. Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C.,
and Nießner, M. Face2Face: Real-time face capture
and reenactment of RGB videos. In Proceedings of
Computer Vision and Pattern Recognition. IEEE, 2016.
35. Varol, O., Ferrara, E., Davis, C., Menczer, F. and
Flammini, A. Online human-bot interactions:
Detection, estimation, and characterization. In
Proceedings of International AAAI Conference on
Web and Social Media, 2017.
36. Vosoughi, S., Roy, D. and Aral, S. The spread of true and false news online. Science 359, 6380 (2018).
37. Wu, C.H. Behavior-based spam detection using a
hybrid method of rule-based techniques and neural
networks. Expert Systems with Applications 36, 3
38. Wu, C. T., Cheng, K. T., Zhu, Q., and Wu, Y. L. Using visual
features for anti-spam filtering. In Proceedings of
IEEE International Conference on Image Processing
3. IEEE, 2005, III–509.
39. Xie, S., Wang, G., Lin, S. and Yu, P. S. Review spam
detection via temporal pattern discovery. In
Proceedings of the 18th ACM SIGKDD international
Conference on Knowledge Discovery and Data Mining.
ACM, 2012, 823–831.
40. Yang, Z., Wilson, C., Wang, X., Gao, T., Zhao, B. Y. and Dai,
Y. Uncovering social network Sybils in the wild. ACM
Trans. Knowledge Discovery from Data 8, 1 (2014), 2.
Emilio Ferrara ( email@example.com) is an assistant
research professor and associate director of Applied Data
Science at the University of Southern California
Information Sciences Institute, Marina Del Rey, CA, USA.
Copyright held by author/owner.
Publication rights licensed to ACM.