Holocaust within hours of operation.5 And who can forget the flash crash of 2010, in which high-speed trading algorithms destabilized the market, precipitating a 10% drop in the Dow Jones within minutes?4

As increasing numbers of adaptive systems enter the physical world, courts will have to reexamine the role foreseeability will play as a fundamental arbiter of proximate causation and fairness.1 That is a big change, but the alternative is to entertain the prospect of victims without perpetrators. It is one thing to laugh uneasily at two Facebook chatbots that unexpectedly invent a new language.a It is another to mourn the loss of a family to carbon monoxide poisoning while refusing to hold anyone accountable in civil court.

We lawyers and judges have our work cut out for us. We may wind up having to jettison a longstanding and ubiquitous means of limiting liability. But what role might there be for system designers? I certainly would not recommend stamping out adaptation or emergence as a research goal or system feature. Indeed, machines are increasingly useful precisely because they solve problems, spot patterns, or achieve goals in novel ways no human imagined. Nevertheless, I would offer a few thoughts for your consideration.

First, it seems to me worthwhile to invest in tools that attempt to anticipate robot behavior and mitigate harm.b The University of Michigan has constructed a faux city to test driverless cars. Short of this, virtual environments can be used to study robot interactions with complex inputs. I am, of course, mindful of the literature suggesting that the behavior of software cannot be fully anticipated as a matter of mathematics. But the more we can do to understand autonomous systems before deploying them in the wild, the better.
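To make the idea concrete, here is a minimal sketch, in Python, of what such "curveball" testing in a virtual environment might look like. The toy controller, the safety property, and the scenario generator are all hypothetical illustrations rather than any real testbed; the point is only that cheap, randomized simulation can surface emergent misbehavior before deployment.

```python
import random

def toy_controller(distance_to_obstacle, speed):
    """Hypothetical controller: brake when an obstacle is within a crude
    stopping distance -- with a subtle, deliberate bug at low speeds."""
    if speed > 5.0 and distance_to_obstacle < speed * 2.0:
        return "brake"
    return "cruise"  # bug: low-speed collisions were assumed impossible

def random_scenario(rng):
    """Throw a curveball: random obstacle distances and speeds, including
    edge cases the designer may never have imagined."""
    return {
        "distance_to_obstacle": rng.uniform(0.0, 100.0),  # meters
        "speed": rng.uniform(0.0, 40.0),                  # meters/second
    }

def violates_safety(scenario, action):
    """Safety property: the system must brake whenever the obstacle is
    closer than one second of travel."""
    return scenario["distance_to_obstacle"] < scenario["speed"] and action != "brake"

rng = random.Random(0)  # seeded so test runs are reproducible
failures = [s for s in (random_scenario(rng) for _ in range(10_000))
            if violates_safety(s, toy_controller(**s))]
print(f"{len(failures)} of 10,000 random scenarios violated the safety property")
```

Run enough random scenarios and the low-speed bug surfaces in seconds, long before a physical deployment would reveal it.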
Second, it is critical that researchers be permitted and even encouraged to test deployed systems without fear of reprisal. Corporations and regulators can and should support research that throws curveballs to autonomous technology to see how it reacts. Perhaps the closest analogy is bug bounties in the security context; at a minimum, terms of service agreements should clarify that safety-critical research is welcome and will not be met with litigation.
Finally, the present wave of intelligence was preceded by an equally consequential wave of connectivity. The ongoing connection firms now maintain to intelligence products, while problematic in some ways, also offers an opportunity for better monitoring.6 One day, perhaps, mechanical angels will sense an unexpected opportunity but check with a human before rushing in.
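That last pattern will be familiar to software designers as a human-in-the-loop gate. The following is a minimal sketch, in Python, of what such a check might look like; the act_with_oversight function, its confidence threshold, and the example action are illustrative assumptions, not a description of any deployed system.

```python
def act_with_oversight(action, confidence, threshold=0.95, ask_human=input):
    """Hypothetical human-in-the-loop gate: act autonomously only when
    confidence is high; otherwise defer to a human operator first.
    `ask_human` is injectable so the prompt can be stubbed out in tests."""
    if confidence >= threshold:
        return f"executing: {action}"
    answer = ask_human(
        f"Proposed action {action!r} (confidence {confidence:.2f}). Proceed? [y/N] ")
    if answer.strip().lower() == "y":
        return f"executing (human-approved): {action}"
    return f"withheld: {action}"

# An unexpected opportunity with middling confidence is checked, not seized.
print(act_with_oversight("reroute through unmapped alley", confidence=0.62,
                         ask_human=lambda prompt: "n"))  # stubbed human declines
```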
None of these interventions represents a panacea. The good news is that we have time. The first generation of mainstream robotics, including fully autonomous vehicles, does not present a genuinely difficult puzzle for law, in this law professor's view. The next well may. In the interim, I hope the law and technology community will be hard at work grappling with the legal uncertainty that technical uncertainty understandably begets.
a Tim Collins and Mark Prigg, “Facebook shuts down controversial chatbot experiment after AIs develop their own language to talk to each other,” Daily Mail (Jul. 31, 2017); http://dailym.ai/2vnk47J. See also “Did Facebook Shut Down an AI Experiment Because Chatbots Developed Their Own Language?” Snopes.com (Aug. 1, 2017) (concluding that while Facebook did not necessarily expect the behavior, it did not shut down the experiment because of it).
b For a prescient discussion, see Jeffrey Mogul, “Emergent (Mis)behavior vs. Complex Software Systems,” ACM SIGOPS Operating Systems Review 40, 4 (Oct. 2006).

1. Calo, R. Robotics and the lessons of cyberlaw. California Law Review 103, 513 (2015).
2. Foster v. Preston Mill Co., 268 P.2d 645 (Wash. 1954).
3. Hill, K. Who do we blame when a robot threatens to kill people? Splinter.com (Feb. 15, 2015); http://bit.ly/2FFKszl
4. Hope, B. and Ackerman, A. ‘Flash crash’ overhaul is snarled in red tape. Wall Street Journal (May 5, 2015).
5. Price, R. Microsoft is deleting its AI chatbot’s incredibly racist tweets. Business Insider (Mar. 24, 2016); http://read.bi/1ZwcFYZ
6. Walker Smith, B. Proximity-driven liability. Georgetown Law Journal 102, 1777 (2014).
7. Vladeck, D.C. Machines without principals. Washington Law Review 89, 117 (2014).
Ryan Calo (email@example.com) is the Lane Powell and D. Wayne Gittinger Associate Professor of Law at the University of Washington in Seattle, WA, USA.
Copyright held by author.