fooled by computers. Several years ago
a friend asked me how the self-service
checkout could recognize different
fruit and vegetables. I hypothesized
a classification algorithm, based on
color and shape. But then my friend
pointed out the CCTV display behind
me with a human operator doing the
classification. The boundary between
machine and man is quickly blurring.
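The hypothesized algorithm is easy enough to sketch. Here is a toy nearest-prototype classifier over color and shape; the produce prototypes and feature weights are made up for illustration and come from no real checkout system:

```python
import math

# Hypothetical prototypes: (mean RGB color, aspect ratio).
# These values are illustrative, not from any real produce dataset.
PROTOTYPES = {
    "banana":   ((220, 200, 60), 3.0),
    "tomato":   ((200, 40, 30), 1.0),
    "cucumber": ((60, 140, 50), 4.0),
}

def feature_distance(a, b):
    """Euclidean distance over (R, G, B, aspect-ratio) features."""
    (color_a, ratio_a), (color_b, ratio_b) = a, b
    color = sum((x - y) ** 2 for x, y in zip(color_a, color_b))
    # Weight aspect ratio so it is comparable to 0-255 color channels.
    shape = (50.0 * (ratio_a - ratio_b)) ** 2
    return math.sqrt(color + shape)

def classify(color, aspect_ratio):
    """Assign the item to the nearest prototype by color and shape."""
    item = (color, aspect_ratio)
    return min(PROTOTYPES, key=lambda k: feature_distance(PROTOTYPES[k], item))

print(classify((210, 190, 70), 2.8))  # prints "banana"
```

A real system would, of course, need far more robust features and training data, which is perhaps why a human was doing the job instead.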
Even experts in the field can be mistaken. A Turing Red Flag law will help keep this boundary sharp. Third, humans are often quick to credit computers with more capabilities than they actually possess. The last example illustrates this. As another example, I let some students play with an Aibo robot dog, and they quickly started to ascribe emotions and feelings to the Aibo, neither of which it has. Autonomous systems will be fooling us into thinking they are human long before they are actually capable of acting like humans. Fourth, one of the most dangerous times for any new technology is when it is first being adopted, and society has not yet adjusted to it. It may well be that, as with motor cars today, society decides to repeal any Turing Red Flag laws once AI systems become the norm. But while such systems are rare, we might well choose to act a little more cautiously.
In many U.S. states, as well as in many countries around the world, including Australia, Canada, and Germany, you must be informed if your telephone conversation is about to be recorded. Perhaps
in the future it will be routine to hear,
“You are about to interact with an AI
bot. If you do not wish to do so, please
press 1 and a real person will come on
the line shortly.”
References
1. Colford, P. A leap forward in quarterly earnings
stories. Associated Press blog announcement, 2014;
https://blog.ap.org/announcements/a-leap-forward-in-quarterly-earnings-stories.
2. Turing, A. Computing machinery and intelligence.
MIND: A Quarterly Review of Psychology and
Philosophy 59, 236 (1950), 433–460.
3. Wallace, R., Melton, H., and Schlesinger, R. Spycraft:
The Secret History of the CIA’s Spytechs, from
Communism to al-Qaeda. Dutton Adult, 2008.
4. Weizenbaum, J. Eliza—A computer program for the
study of natural language communication between man
and machine. Commun. ACM 9, 1 (Jan. 1966), 36–45.
Toby Walsh ( toby.walsh@nicta.com.au) is Professor of
Artificial Intelligence at the University of New South
Wales, and Data61, Sydney, Australia. He was recently
elected a Fellow of the Australian Academy of Sciences.
Copyright held by author.
therapeutic tool to help such patients.
Again, some people find it troubling that a robot seal can be mistaken for a real one. Imagine, then, how much more troubling society will find it when such patients mistake AI systems for humans.
Let’s move on to a third example: online poker. This is a multibillion-dollar industry, so it is fair to say that the stakes are high. Most, if not
all, online poker sites already ban
computer bots from playing. Bots have
a number of advantages, certainly over
weaker players. They never tire. They
can compute odds very accurately.
They can track historical play very accurately. Of course, in the current state of the art, they also have weaknesses, such as difficulty reading the psychology of their opponents. Nevertheless,
in the interest of fairness, I suspect
most human poker players would prefer to know if any of their opponents
was not human. A similar argument
could be made for other online computer games. You might want to know
if you’re being “killed” easily because
your opponent is a computer bot with
lightning-fast reflexes.
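The odds advantage is easy to see concretely. As a toy illustration (not the method of any real poker bot), a bot can estimate drawing odds by simple Monte Carlo simulation; here, the chance that a flush draw completes on the river in Texas hold'em:

```python
import random

random.seed(0)  # deterministic for illustration

# After the turn, a flush draw has 9 "outs" among the 46 unseen cards,
# so the exact chance of hitting on the river is 9/46 (about 0.196).
OUTS, UNSEEN, TRIALS = 9, 46, 200_000

# Draw one unseen card per trial; count how often it is an out.
hits = sum(random.randrange(UNSEEN) < OUTS for _ in range(TRIALS))
estimate = hits / TRIALS

print(f"simulated: {estimate:.3f}  exact: {9 / 46:.3f}")
```

A few hundred thousand trials pin the estimate down to within a fraction of a percent, instantly and tirelessly, which no human opponent can match mid-hand.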
I conclude with a fourth example:
computer-generated text. Associated
Press now generates most of its U.S.
corporate earnings reports using a
computer program developed by Automated Insights.1 A narrow interpretation might rule such computer-generated text outside the scope of a Turing
Red Flag law. Text-generation algorithms are typically not autonomous.
Indeed, they are typically not interac-
tive. However, if we consider a longer
time scale, then such algorithms are
interacting in some way with the real
world, and they may well be mistaken
for human-generated text. Personally,
I would prefer to know whether I was
reading text written by human or com-
puter—it is likely to impact my emo-
tional engagement with the text. But I
fully accept that we are now in a grey
area. You might be happy for automatically generated tables of stock prices and weather maps to go unidentified as computer generated, but perhaps you do want match reports to be identified as such? What if the commentary on the TV coverage of the World Cup Final comes not from Messi, one of the best footballers ever, but from a computer that just happens to sound like him? And
should you be informed whether the beautiful piano music being played on the radio was composed by Chopin or by a computer in the style of Chopin? These
examples illustrate that we still have
some way to go working out where to
draw the line with any Turing Red Flag
law. But, I would argue, there is a line to
be drawn somewhere here.
There are several arguments that
can be raised against a Turing Red Flag
law. One argument is that it’s way too
early to be worrying about this problem
now. Indeed, by flagging this problem
today, we’re just adding to the hype
around AI systems breaking bad. There
are several reasons why I discount this
argument. First, autonomous vehicles
are likely only a few years away. In June
2011, Nevada’s Governor signed into
law AB 511, the first legislation anywhere in the world that explicitly permits autonomous vehicles. As I mentioned earlier, I find it surprising that
the bill says nothing about the need
for autonomous vehicles to identify
themselves. In Germany, autonomous vehicles are currently prohibited under the 1968 Vienna Convention on Road Traffic, to which Germany and 72 other countries are party. However, the
German transport minister formed a
committee in February 2015 to draw up
the legal framework that would make
autonomous vehicles permissible on
German roads. This committee has
been asked to present a draft of the key
points in such a framework before the
Frankfurt car fair in September 2015.
We may therefore already be running
late to ensure autonomous vehicles
identify themselves on German roads.
Second, many of us have already been