letters to the editor
Political Correctness, Here, Too
I sympathize with Bob Toxen's position, as outlined in his letter to the editor "Get ACM (and Communications) Out of Politics" (May 2018). Meanwhile, as if to lend additional support to Toxen's critique, Moshe Y. Vardi wrote in his Vardi's Insights column "How We Lost the Women in Computing" (also May 2018) that women were being pushed
out of computing. I found this claim too harsh and unjust; to me, "pushing out" implies intention. And Vardi's argument strikes me as more political, or politically correct, than scientific. Of course we need more women in computing, and yes, one can be biased without recognizing one's own bias. Still, my personal
experience, on hiring committees at the
University of Michigan and at Microsoft,
is that computer scientists try very hard
to bring women on board.
There are interesting parallels between
U.S. and Soviet political correctness. In the
late 1960s, I was the chair of the mathematics department at the Sverdlovsk Institute for National Economy, a Soviet university, responsible for the entrance exams
in mathematics. The rector of the university pressed for increasing the percentage
of accepted students from the working
classes, as opposed to the intelligentsia.
In principle I liked the idea. My parents
were laborers. The question was how to
achieve the goal. I suggested a division
of labor: We, the mathematicians, would
grade the exams on merit, as usual, and
the administration would accept whomever it deemed appropriate. I also suggested that we offer remedial courses for
working-class high-school students to prepare them for the rigors of university-level
mathematics. But the rector would have
none of it. He wanted us to grade on merit
and somehow simultaneously increase
the percentage of working-class students.
The pressure came from above; higher authorities wanted a greater share of working-class students. But even in the USSR, nobody accused us of pushing out the group in question.
The issue of this letter is bigger
than gender equality or the Soviet experience. It is about political correctness. Responding to Toxen, Vardi
wrote: “Communications is definitely
not only about computers and programming.” I still like Toxen’s idea of
taking Communications out of politics.
But if we have to debate a political issue, it should be done constructively,
without exaggerating or imputing intentions that people may not have.
Yuri Gurevich, Ann Arbor, MI, USA
Teach the Law (and the AI) ‘Foreseeability’
Ryan Calo's "Law and Technology" Viewpoint "Is the Law Ready for Driverless Cars?" (May 2018) explored the implications, as Calo said, of "…genuinely unforeseeable categories of harm" in potential liability cases where death or injury is caused by a driverless car. He argued that common law would take care of most other legal issues involving artificial intelligence in driverless cars, apart from foreseeability.
Calo also said the courts have worked out problems like AI before and seemed confident that AI foreseeability will eventually be accommodated. One can agree with this overall judgment but question the time horizon. AI may be quite different from anything the courts have seen or judged before, not least because the technology is designed to someday make its own decisions. After the fact, it may be impossible to ascertain the reasons for, or logic behind, those decisions.
AI is a sort of idiot savant that can be unpredictably, and potentially dangerously, literal. Calo gave an example of a driverless car instructed to maximize efficiency that decided a fully charged battery would be the best way to achieve it. The car kept its engine running in the garage of a house overnight and, in doing so, asphyxiated its human occupants. This is an example of the so-called paper-clip problem, in which an AI is programmed with the sole objective of making paper clips; when it runs out of metal wire, it begins to make them out of anything else it can find.
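In programming terms, the paper-clip scenario is an objective function with no side constraints. A minimal, purely illustrative sketch (all names hypothetical, not drawn from the letter or any cited work):

```python
# Toy illustration: a greedy planner maximizing a single objective will
# consume any resource that raises its score, because nothing in the
# objective marks some resources as off-limits.

def plan(resources, objective):
    """Use every resource whose consumption increases the score."""
    used = []
    score = 0
    for name, amount in resources.items():
        gain = objective(name, amount)
        if gain > 0:  # no notion of cost, harm, or protected resources
            used.append(name)
            score += gain
    return used, score

# Objective: paper clips produced per resource. Nothing distinguishes
# "wire_spool" (intended) from "furniture" or "car_wiring" (not intended),
# so the planner consumes them all.
clips_per_unit = {"wire_spool": 100, "furniture": 20, "car_wiring": 40}
objective = lambda name, amount: clips_per_unit.get(name, 0) * amount

resources = {"wire_spool": 5, "furniture": 3, "car_wiring": 2}
used, score = plan(resources, objective)
# used == ["wire_spool", "furniture", "car_wiring"]
```

The point of the sketch: the unintended behavior comes not from intelligence but from an objective that omits constraints the designers took for granted.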
Recall how the HAL 9000 computer in
Stanley Kubrick’s and Arthur C. Clarke’s
2001: A Space Odyssey let nothing interfere with its mission objective, including, tragically, its human astronauts.
AI software designers are still so new to developing AI that it will be difficult for them to predict what could happen as it is deployed in the real world. Manufacturers and designers using AI compete in an environment where market share and profitability almost always drive product development and release, more than any study of potential outcomes. MIT physics professor Max Tegmark has insightfully explored such "bugs" in the application of current technology.1
As liability cases are litigated, courts
in different jurisdictions, following a
similar set of facts and circumstances,
may produce very different judgments.
If a manufacturer claims its AI software is proprietary, determining what led the software to a particular decision might prove futile.
AI is a field of information technology that
the average person, including owners of
AI-equipped cars and members of a jury,
can barely grasp, much less evaluate. Further study of foreseeability could only
benefit the technology, as well as the law.
1. Tegmark, M. The near future: Breakthroughs, bugs,
laws, weapons, and jobs. Chapter 3 in Life 3.0: Being
Human in the Age of Artificial Intelligence. Alfred A.
Knopf, New York, 2017, 93–110.
Evelyn McDonald, Fernandina Beach, FL, USA
Author's Response:
I appreciate this thoughtful response. The
paper-clip problem has always fascinated
me when offered as evidence of the supposed
existential threat AI poses to humanity. The
problem envisions a system so limited that it
blindly follows a single objective function—
making paper clips—but is simultaneously
so powerful, intelligent, and versatile that
it overcomes the sum of human resistance.
Regardless, I completely agree with
McDonald’s central takeaway that we cannot
know how AI will be deployed in practice in
the years to come.
Ryan Calo, Seattle, WA, USA