letters to the editor
DOI: 10.1145/3013930
More Than a Red Flag for AIs
Toby Walsh’s Viewpoint “Turing’s Red Flag” (July 2016) proposed a legislative remedy for various potential threats posed by software robots that humans might mistake for fellow humans. Such an approach seems doomed to fail. First, unless a “red flag” law were adopted by all countries, we humans would have the same problems identifying and holding accountable violators that we have with cybercrime generally. Second, though Walsh acknowledged it would take a team of experts to devise an effective law, it would likely be impossible to devise one that addressed all possible interactions with non-humans without leading to patently silly regulations, as with the original 19th-century Red Flag Act. How would a law handle algorithm-based securities trading? What if one human is dealing with another human, but
that human has an AI whispering in his or her ear (or implanted in his or her brain) telling him or her what to say or do?
More important, the most significant potential harms from bots or sophisticated AIs generally would not
be mitigated by just knowing when
we are dealing with an AI. The harm
Walsh proposed to address seemed
more aimed at the “creep factor” of
mistaking AIs for humans. We have
been learning to deal with that since
we first encountered a voicemail tree or
political robocall. Apart from suffering
less emotional shock, what advantage
might we gain from knowing we are not
dealing with a fellow human?
Learning to live with AIs will involve
plenty of consequential challenges. Will
they wipe us out? Should an AI that behaves exactly like a human—emotional
responses and all—have the legal rights
of a human? If AIs can do all the work
humans do, but better, how could we
change the economic system to provide
some of the benefits of abundance made
possible by AI-based automation to the
99% whose jobs might be eliminated?
Moreover, what will we humans do
with our time? How will we even justify
our existence to ourselves? These sci-fi
questions are quickly becoming real-life
questions, requiring that we have more than a red flag to address them.
Martin Smith, McLean, VA
Author Responds:
This critique introduces many wider and
orthogonal issues like existential risk and
technological unemployment. Yes, it will
be difficult to devise a law to cover every
situation. But that is true of most laws
and does not mean we should have no
law. However, actions speak loudest, and
the New South Wales parliament has just
recommended such a law in Australia; for
more, see http://tinyurl.com/redflaglaw.
Toby Walsh, Berlin, Germany
Reclaim the Lost Promise of the Semantic Web
I was eager to learn about the latest developments in the Semantic Web through the lens of a “new kind of semantics,” as Abraham Bernstein et al. explored in their Viewpoint “A New Look
at the Semantic Web” (Sept. 2016), but
by the end I had the impression the
entire vision of a Semantic Web was
somehow at risk.
If I understand it correctly, semantics
is a mapping function that leads from
manifest expressions to elements in a
given arbitrary domain. Based on set theory, logicians have developed a framework to set up such a mapping for formal
languages like mathematics, provided
one can fix an interpretation function.
On the other hand, 20th-century logicians
(notably Alfred Tarski) warned of the limits of the framework when applied to
human languages. Now, to the extent it embraces a set-theoretic semantics (as in the W3C’s Web Ontology Language, OWL), the Semantic Web seems to be facing, and suffering from, exactly those limitations.
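To make the point concrete, here is a minimal sketch of the kind of set-theoretic (Tarskian) interpretation at issue, in generic notation chosen for illustration rather than drawn from the Viewpoint or from OWL’s specification: a domain of individuals, an interpretation function mapping names and predicates onto that domain, and a truth condition stated as set membership.

```latex
% Minimal sketch of a Tarskian, set-theoretic interpretation
% (generic notation, for illustration only).
\[
  \mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}}), \qquad
  a^{\mathcal{I}} \in \Delta^{\mathcal{I}}, \qquad
  P^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}
\]
\[
  \mathcal{I} \models P(a, b)
  \quad\Longleftrightarrow\quad
  (a^{\mathcal{I}}, b^{\mathcal{I}}) \in P^{\mathcal{I}}
\]
```

Tarski’s caution, as noted above, is that fixing such an interpretation function for a human language is precisely where the trouble starts.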
Most Web content is expressed as
natural language, and it is not easy for
programmers to bring it into clean logical form; meanwhile, Percy Liang’s article “Learning Executable Semantic
Parsers for Natural Language Understanding” (also Sept. 2016) gave an idea
of the early stage of “semantic parsing,”
or the task of obtaining a formal representation of the meaning of a given
text. It seems the “new semantics” in
Bernstein et al., albeit not formally
characterized, was an attempt to outline a better approach to tapping the
linguistic nature of the Web, which is
indeed remarkable.
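As a toy illustration of that task (my own example, not Liang’s system or any particular parser), a semantic parser maps an utterance to an executable logical form that can then be evaluated against a knowledge base:

```python
# Toy illustration of semantic parsing: mapping an utterance to an
# executable logical form. Real parsers are learned from data; this
# hand-written pattern only shows the input/output contract.
import re
from typing import Dict, Optional

def parse(utterance: str) -> Optional[str]:
    """Return a logical form for one narrow question pattern, else None."""
    m = re.match(r"what is the capital of (\w+)\??$", utterance.strip().lower())
    return f"capital_of('{m.group(1)}')" if m else None

def execute(logical_form: str, kb: Dict[str, str]) -> Optional[str]:
    """Evaluate the logical form against a tiny knowledge base."""
    m = re.match(r"capital_of\('(\w+)'\)$", logical_form)
    return kb.get(m.group(1)) if m else None

if __name__ == "__main__":
    kb = {"italy": "Rome", "france": "Paris"}
    lf = parse("What is the capital of Italy?")
    print(lf, "->", execute(lf, kb))  # capital_of('italy') -> Rome
```

The hard part, of course, is learning such mappings from data at Web scale rather than writing them by hand, which is the gap in question.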
In taking a language-oriented view, however, Bernstein et al. seemed to neglect a key feature of formal semantics—transparency. They seemed comfortable with the relaxation of logic as a conceptual framework for the Semantic Web, a relaxation typical of modern Knowledge Graphs (such as the one Google uses). But one consequence of that relaxation is that part of the data’s semantics ends up embedded in algorithms. Not only practitioners but also ordinary users are aware that the algorithms that operate on Web data are confined to a few monolithic, private platforms that are far from open, transparent, and auditable.
Isn’t keeping meanings in a handful of proprietary algorithms exactly the opposite of what the Semantic Web was meant to be?
Guido Vetere, Rome, Italy
Authors Respond:
As we mentioned in the Viewpoint, the
Semantic Web is not just about texts but also
about myriad data, images, video, and other
Web resources. While a formal logic that
could be both transparent enough for all such resources and usable by Web developers is a noble ambition, current logics are simply not up to the task. The transparency of “some semantics” is the best we can hope for and
would allow all potential developers to build
Web-scale, best-effort applications.
Abraham Bernstein,
Zürich, Switzerland,
James Hendler, Troy, NY, and
Natalya Noy, Mountain View, CA
Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or less, and send to letters@cacm.acm.org.
© 2016 ACM 0001-0782/16/12 $15.00