Some of these users value theoretical considerations of control over practical security. Thus, it may be necessary to allow more control for those users with greater experience or special needs. The starting default for these users would still be the "invisible" security, with non-obvious options requiring explicit acknowledgment of risk, and perhaps a certain level of technical skill, to access. This would be the cyber equivalent of the "No user-serviceable parts inside" warnings found on many electronic devices.
We acknowledge there are risks with invisible security that must be considered. Automated updates could interfere with or break other software, or worse.a We will need mechanisms to verify that the invisible security is enabled and working as it should.
We also recognize there are circumstances where patches must be certified in some way—including having patched systems meet performance and safety standards, such as those present for industrial controls, medical uses, and national security. In these cases, exceptions may need to be made to delay patching and support necessary testing. (This raises the questions of why those critical applications are using commodity software that may be prone to serious errors, and why they are configured in such a way that their safe operation would necessitate such patches.) Generally, these special cases make up a minority of deployed systems, and exempting them from automated patching would not negate the benefits of quickly fixing problems in all the rest. We also note that the serious issue of securing legacy, unsupported, and unlicensed systems will remain a challenge, but it is made neither better nor worse by behind-the-scenes security.
Research will be necessary to determine whether users feel more secure—and whether they actually are more secure—when cyber security defenses are invisible. That requires understanding what is meant by "security" in different contexts and with different types of security controls (such as patching, anti-malware, and anti-phishing). It has been repeatedly noted that adequate security is relative to current environments and threats. For many end users, good security simply means their privacy is protected, even if nothing else is. Thus, security is often seen as both a reality and a feeling. Bruce Schneier, in particular, has criticized "security theater," but he acknowledges that "a bit of well-placed security theater might be exactly what we need to both be and feel more secure."8 Cyber security professionals can learn from examples in other domains of visible versus non-visible security implementations. For instance, visible policing is an approach to security that places uniformed police officers in public to deter crime and reassure citizens. Research shows mixed results, including cases of increased crime and fear of crime after increasingly visible police presence.4
Clearly, those of us involved with equipping the world with advanced computation have some ethical obligations to make that computation safe.1 The security community should strive for default security without explicit user interaction. The challenge is one of balance: How do we continue to provide appropriate autonomy and freedom to computer users while also protecting them? What is an appropriate level of residual risk to allow? We believe these (and related) questions should be considered and discussed, now, to enable development of a new climate for cyber security (and thus, privacy protection, which usually depends on good security) rather than continuing to apply patchwork protections as marketing opportunities. We suggest that part of the solution is to move away from defaults aimed at the users who need the most options and choices, and instead automate security as the new default.
The cyber security community has succeeded in substantially advancing the field of resilient and trustworthy systems. Furthermore, research and development in security usability have made better security available to more users. The continued state of poor security adoption and practice, interacting with basic human nature, requires us to consider the next step of offering automated, behind-the-scenes cyber security as widely as possible. Continued work is necessary to refine the balance of control between human and machine, similar to the conversations around machine learning and artificial intelligence. If anything, those fields will require good cyber security to achieve their full promise. We believe it is time to consider a new approach, as we have outlined in this Viewpoint.

a This topic is explored in a recent work of fiction by a senior security professional.5

References
1. ACM Committee on Professional Ethics. 2018 ACM Code of Ethics and Professional Conduct: Draft 2.
2. Forget, A., Pearman, S., Thomas, J. et al. Do or do not, there is no try: User engagement may not improve security outcomes. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS). USENIX Association, Denver, CO, 2016, 97–111.
3. Frei, S., Duebendorfer, T. and Plattner, B. Firefox (in)security update dynamics exposed. ACM SIGCOMM Comput. Commun. Rev. 39, 1 (Jan. 2009), 16–22.
4. Millie, A. and Herrington, V. Bridging the gap: Understanding reassurance policing. The Howard Journal 44, 1 (Feb. 2005), 41–56.
5. Nachenberg, C. The Florentine Deception. Open Road Media Mystery & Thriller, 2015.
6. Redmiles, E., Malone, A. and Mazurek, M. I think they're trying to tell me something: Advice sources and selection for digital security. In Proceedings of the IEEE Symposium on Security and Privacy, 2016.
7. Sasse, M.A., Smith, M., Herley, C., Lipford, H. and Vaniea, K. Debunking security-usability tradeoff myths. IEEE Security & Privacy 14, 5 (Sept./Oct. 2016), 33–39.
8. Schneier, B. The psychology of security. In Proceedings of the Cryptology in Africa 1st International Conference on Progress in Cryptology (AFRICACRYPT'08), S. Vaudenay, Ed. Springer-Verlag, Berlin, Heidelberg, 2008, 50–79.
9. Wash, R., Rader, E., Vaniea, K. et al. Out of the loop: How automated software updates cause unintended security consequences. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS). USENIX Association, Berkeley, CA, 2014, 89–104.
10. West, R. The psychology of security: Why do good users make bad decisions? Commun. ACM 51, 4 (Apr. 2008), 34–40.

Josiah Dykstra (firstname.lastname@example.org) is a cyber security researcher with the U.S. Department of Defense in Baltimore, MD, USA.

Eugene H. Spafford (email@example.com) is a professor of computer science at Purdue University, West Lafayette, IN, USA.

The views and opinions expressed in this Viewpoint are those of the authors and do not necessarily reflect those of the U.S. government, the U.S. Department of Defense, or Purdue University.

Copyright held by author.