As any lover of Shakespeare knows, there are many dangers awaiting us when we try to disguise our identity. What happens if the AI impersonates someone we trust? Perhaps they will be able to trick us into doing their bidding. What if we suppose they have human-level capabilities but they can only act at a sub-human level? Accidents might quickly follow. What happens if we develop a social attachment to the AI? Or, worse still, what if we fall in love with them? There is a minefield of problems awaiting us here.

This is not the first time in history that a technology has come along that might disrupt and endanger our lives. Concerned about the impact of motor vehicles on public safety, the U.K. parliament passed the Locomotive Act in 1865. This required a person to walk in front of any motorized vehicle with a red flag to signal the oncoming danger. Of course, public safety was not the only motivation for this law, as the railways profited from restricting motor vehicles in this way. Indeed, the law clearly restricted the use of motor vehicles to a greater extent than safety alone required, and this was a bad thing. Nevertheless, the sentiment was a good one: until society had adjusted to the arrival of a new technology, the public had a right to be forewarned of potential dangers.

Interestingly, this red flag law was withdrawn three decades later, in 1896, when the speed limit was raised to 14 mph (approximately 23 km/h). Coincidentally, the first speeding offense, as well as the first British motoring fatality (the unlucky pedestrian Bridget Driscoll), occurred in that same year. Road accidents escalated quickly from then on. By 1926, the first year for which records are available, there were 134,000 cases of serious injury, yet there were only 1,715,421 vehicles on the roads of Great Britain. That is one serious injury each year for every 13 vehicles on the road. And a century later, thousands still die on our roads every year.

Inspired by such historical precedents, I propose that a law be enacted to prevent AI systems from being mistaken for humans. In recognition of Alan Turing's seminal contributions to this area, I am calling this the Turing Red Flag law.

Turing Red Flag law: An autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of any interaction with another agent.

Let me be clear. This is not the law itself but a summary of its intent. Any law will have to be much longer and much more precise in its scope. Legal experts as well as technologists will be needed to draft such a law. The actual wording will need to be carefully crafted, and the terms properly defined. It will, for instance, require a precise definition of autonomous system. For now, we will consider any system that has some sort of freedom to act independently. Think, for instance, of a self-driving car. Though such a car does not choose its end destination, it nevertheless decides independently on the actual way to reach that given end destination. I would also expect that, as is often the case in such matters, the exact definitions will be left to the last moment to leave bargaining room to get any law into force.

There are two parts to this proposed law. The first part of the law states that an autonomous system should not be designed to act in a way that makes it likely to be mistaken for having a human in the loop. Of course, it is not impossible to think of some situations where it might be beneficial for an autonomous system to be mistaken for something other than an autonomous system. An AI system pretending to be human might, for example, create more engaging interactive fiction. More controversially, robots pretending to be human might make better caregivers and companions for the elderly. However, there are many more reasons we don't want computers to be intentionally or unintentionally fooling us. Hollywood provides lots of examples of the dangers awaiting us here. Such a law would, of course, cause problems in running any sort of Turing Test. However, I expect that the current discussion about replacements for the Turing Test will eventually move from tests for AI based on deception to tests that quantify explicit skills and intelligence. Some related legislation has already been enacted for guns. In particular, former California Governor Schwarzenegger signed legislation in September 2004 that prohibits the public display of toy guns in California unless they are clear or painted a bright color to differentiate them from real firearms. The purpose of this law is to prevent police officers from mistaking toy guns for real ones.

The second part of the law states that autonomous systems need to identify themselves at the start of any interaction with another agent. Note that this other agent might even be another AI. This is intentional. If you send your AI bot out to negotiate the purchase of a new car, you want the bot also to know whether it is dealing with a dealer bot or a person. You wouldn't want the dealer bot to be able to pretend to be a human just because it was interacting with your bot. The second part of the law is designed to reduce the chance that autonomous systems are accidentally mistaken for what they are not.

Consider four up-and-coming areas where this law might have bite. First,