Society | DOI: 10.1145/3371409  Samuel Greengard

Do Deep Damage?

The ability to produce fake videos that appear amazingly real is here. Researchers are now developing ways to detect and prevent them.

IT HAS BEEN said that the camera doesn't lie. However, in the digital age, it is also becoming abundantly clear that it doesn't necessarily depict the truth. Increasingly sophisticated machine learning, combined with inexpensive and easy-to-use video editing software, is allowing more and more people to generate so-called deepfake videos. These clips, which feature fabricated footage of people and things, are a growing concern in politics and beyond.

"It's a technology that is easily weaponized," observes Hany Farid, a professor at the University of California, Berkeley. Not only can deepfakes be used to depict a political candidate or celebrity saying or doing something he or she never said or did, they can depict false news events in an attempt to sway public opinion. And then there are the disturbing issues of blackmail and porn, including revenge porn. A number of deepfake videos have surfaced showing a person's ex-partner nude or engaged in sex acts he or she did not commit. The person creating the deepfake video simply transposes the victim's face onto the body of another person, such as a porn star.

"Today's deep neural nets and AI algorithms are becoming better and better at creating images and video of people that are convincing but not real," says Siwei Lyu, professor of computer science and director of the Computer Vision and Machine Learning Lab (CVML) of the University at Albany, which is part of the State University of New York. As a result, researchers in digital media forensics, computer scientists, and others are now examining ways to better identify fake videos, authenticate content, and build frameworks to help thwart the rapid spread of deepfakes on social media. "It's a problem that isn't going to go away," Lyu says.

Deepfake technology bubbled to general public awareness in early 2018, when former U.S. president Barack Obama spoke out about the growing dangers of false news and videos. "We are entering an era when our enemies can make it look like anyone is saying anything at any point in time," he stated in a video clip. Except it was not Obama actually making the video appearance; it was a deepfake created by comedian Jordan Peele.

[Photo caption: Paul Scharre of the Center for a New American Security views a deepfake video made by BuzzFeed, which changed what was said by former U.S. president Barack Obama to what is spoken by filmmaker Jordan Peele (right on screen).]