learning models as they exist today that may call for architectural enhancements. Deniz says the presence of adversarial examples is a side-effect of a long-standing trade-off between accuracy and generalization: "From my point of view, the problem is not in the data, but in the current forms of machine learning."

A different approach to combating adversarial examples, one that does not rely on changes to the learned models themselves, is to find ways to determine whether a machine learning model has gone further than its training should allow. A major problem with DNNs as they are constructed today is that they are overly confident in the decisions they make, whether rightly or wrongly. The chief culprit is the "softmax" layer used by most DNN implementations to determine the probability of an image being in any of the categories on which the model was trained.

Nicolas Papernot, a research scientist at Google Brain, explains, "The softmax layer is a great tool for training the model because it creates a nice optimization landscape, but it is not a suitable model for making predictions. A softmax layer does not allow the model to refuse to make a prediction. It is not surprising, then, that once presented with an input that it should not classify, a neural network equipped with a softmax outputs an incorrect prediction."

Originally developed by Papernot while he was a Ph.D. student, together with Patrick McDaniel, professor of information and communications science at Pennsylvania State University, the Deep k-Nearest Neighbors (DkNN) technique performs a layer-by-layer analysis of the decisions made by the machine learning model during classification to construct a "credibility score." Adversarial examples tend to produce intermediate results that are consistent not with a single class, but with several different classes; it is only toward the end of the process that the softmax layer raises the probability of an incorrect result high enough to push the output off-target.

"The DkNN addresses the uncertainty that stems from learning from limited data, which is inevitable," Papernot says. The idea behind using the DkNN to detect adversarial examples is to ensure the model makes a prediction only when it has enough training data to call upon to generate a high-enough credibility score; otherwise, it will say it does not know, and a system relying on that DNN would need either to seek a second opinion or to try to obtain more data.
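The flavor of the approach can be seen in a short sketch. The Python below is only an illustration of the idea, not Papernot and McDaniel's implementation (the published DkNN finds neighbors with locality-sensitive hashing and calibrates its credibility score with conformal prediction); the function names, the choice of k, and the abstention threshold are all assumptions made for the example.

```python
import numpy as np

def softmax(logits):
    # Softmax always returns a full probability distribution over the known
    # classes; it has no way to say "I don't know," which is the
    # overconfidence problem Papernot describes.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def credibility(test_reps, train_reps, train_labels, candidate, k=5):
    # Toy DkNN-style score: for each layer, find the k training points whose
    # representations are closest to the test input's representation at that
    # layer, and count how many carry the candidate label. Adversarial inputs
    # tend to pick up neighbors from several different classes, so agreement
    # stays low even when the softmax output looks confident.
    agree, total = 0, 0
    for layer_rep, layer_train in zip(test_reps, train_reps):
        dists = np.linalg.norm(layer_train - layer_rep, axis=1)
        neighbors = np.argsort(dists)[:k]
        agree += int(np.sum(train_labels[neighbors] == candidate))
        total += k
    return agree / total

def predict_or_abstain(test_reps, train_reps, train_labels, logits, threshold=0.8):
    # Predict only when enough training data backs the answer; otherwise
    # return None so the surrounding system can seek a second opinion or
    # gather more data.
    candidate = int(np.argmax(softmax(logits)))
    score = credibility(test_reps, train_reps, train_labels, candidate)
    return (candidate, score) if score >= threshold else (None, score)
```

In this toy version, an input whose hidden representations sit among training examples of a single class clears the threshold, while one whose neighbors are split across classes is answered with None, the "does not know" outcome described above.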
Having developed attacks on DkNN together with his supervisor David Wagner, a professor of computer science at the University of California, Berkeley, Ph.D. student Chawin Sitawarin says an issue with the current approach is that it tends to suffer from false positives: correct classifications that have unusually low credibility scores. Sitawarin says improvements to the way the score is calculated could increase reliability, and that DkNN-like techniques represent a promising direction for detecting adversarial examples.

As work continues on multiple fronts, it seems likely that defense against these attacks will go hand-in-hand with greater understanding of how and why DNNs learn what they do.
Further Reading

Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., and Madry, A.
Adversarial Examples Are Not Bugs, They Are Features
ArXiv preprint (2019): https://arxiv.org/abs/1905.02175

Wang, H., Wu, X., Yin, P., and Xing, E.P.
High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
ArXiv preprint (2019): https://arxiv.org/abs/1905.13545

Papernot, N., and McDaniel, P.
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
ArXiv preprint (2018): https://arxiv.org/abs/1803.04765

Jacobsen, J.H., Behrmann, J., Carlini, N., Tramèr, F., and Papernot, N.
Exploiting Excessive Invariance Caused by Norm-Bounded Adversarial Robustness
ICLR 2019 Workshop on Safe ML, New Orleans, Louisiana: https://arxiv.org/abs/1903.10484
Chris Edwards is a Surrey, U.K.-based writer who reports on electronics, IT, and synthetic biology.
ACM Member News

LOOKING AT WAYS TO SPEED UP THE INTERNET
Bruce Maggs, Pelham Wilder Professor of Computer Science at Duke University, first became interested in computers in the mid-1970s. His father, a computer hobbyist, had installed a home computer system that had access to PLATO, a multiuser computing platform developed at the University of Illinois in the 1960s.

This early interest eventually led Maggs to earn his undergraduate, master's, and Ph.D. degrees in computer science from the Massachusetts Institute of Technology. After graduating, he spent time working for NEC Research Institute in Princeton, NJ, before joining the faculty of Carnegie Mellon University in 1994. He moved to Duke University in 2010.

In 1998, Maggs helped launch content delivery network Akamai Technologies, and served as its first vice president of research and development. He retains a part-time position at Akamai as vice president of research.

While his research focuses on distributed systems, including content delivery networks, computer networks, and computer and network security, lately Maggs has been concentrating on networking at the speed of light. "I think the Internet is still too slow," Maggs says. He explains that he has been working on reducing latency to the point where a packet can be sent from point A to point B at the speed of light. In the future, Maggs hopes there will be a much broader class of applications (such as games, e-commerce deliveries, and virtual reality) able to take advantage of lower-latency networks.

"We will need the infrastructure and protocols in place so we can enjoy these latency-sensitive applications more than we can today."

—John Delaney