Science | DOI: 10.1145/3365573  Chris Edwards

AI attacks throw light on the nature of deep learning.

High-resolution images of fake "celebrities" generated by a Generative Adversarial Network using the CelebA-HQ training dataset.

AT THE START of the decade, deep learning restored the reputation of artificial intelligence (AI) following years stuck in a technological winter. Within a few years of becoming computationally feasible, systems trained on thousands of labeled examples began to exceed the performance of humans on specific tasks. One was able to decode road signs that had been rendered almost completely unreadable by the bleaching action of the sun. It just as quickly became apparent, however, that the same systems could just as easily be misled.

In 2013, Christian Szegedy and colleagues working at Google Brain found that subtle pixel-level changes, imperceptible to a human, that extended across the image would lead to a bright yellow U.S. school bus being classified by a deep neural network (DNN) as an ostrich. Two years later, Anh Nguyen, then a Ph.D. student at the University of
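The core of the attack Szegedy's team uncovered is that a small, coordinated nudge to every pixel, aligned with the gradient of the classifier's score, can flip the predicted label. A minimal sketch of that gradient-sign idea, using a made-up three-pixel "image" and a tiny two-class linear model rather than a deep network (all weights and values here are hypothetical, and the step size is exaggerated for clarity):

```python
import numpy as np

# Hypothetical tiny linear model: two rows of weights score
# class 0 ("bus") and class 1 ("ostrich") on a 3-pixel input.
W = np.array([[2.0, -1.0, 0.5],    # weights scoring class 0
              [-1.0, 1.5, -0.5]])  # weights scoring class 1
x = np.array([1.0, 0.2, 0.3])      # the clean input

def predict(v):
    return int(np.argmax(W @ v))   # class with the highest score

clean_label = predict(x)           # 0: the model sees a "bus"

# For a linear model, the gradient of (target score - true score)
# with respect to the input is simply the difference of weight rows.
target = 1 - clean_label
grad = W[target] - W[clean_label]

# Nudge every pixel a fixed step in the direction of that gradient's
# sign; real image attacks keep this step imperceptibly small.
eps = 1.0
x_adv = x + eps * np.sign(grad)

print(clean_label, predict(x_adv))  # prints "0 1": the label flips
```

Against a real DNN, the gradient comes from backpropagation instead of a weight-row difference, but the mechanism is the same: many tiny per-pixel changes, each invisible on its own, sum to a large change in the class scores.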