To Model Complexity in Fiction, Try Fractals
Robin K. Hill raised an interesting
point in her blog post “Fiction as
Model Theory” (Dec. 2016) that fictional characters and worlds need
to follow certain rules—rules that
can be formalized and verified for
consistency. Fiction in general, and
science fiction in particular, has always been of considerable interest
to scholarly researchers. What was
notable in Hill’s post was her suggestion of using formalism in rather
unconventional domains—domains
not traditionally identified with computation-related methods.
I have personally taken a similar
path and, together with my colleagues,
discovered the utility of formalizing
ideas from unconventional domains.
These range from modeling complex
living environments in self-organizing
arrays of motion sensors to identifying unexpected emergent patterns in
the spread of disease in large-scale
human populations or even in cousin marriages.¹ Likewise, I have found that formal specification can prove useful for representing the community-identified cognitive development of scholarly researchers, measured as a function of their citation indices.²
Could a longer work of fiction, say,
a novel or novella, benefit from such
treatment? After all, well-written novels often invent their own internally
consistent landscapes. They also often involve a rather complex interplay of characters, multiple plotlines,
backstories, and conflicts. Scholarly
researchers have even identified social
networks of fictional characters influencing major events in these make-believe worlds. It is indeed the interplay of characters in conflict that
makes for a potential page-turner or,
at least, a novel worth reading.
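To make the idea of mechanically checkable rules concrete, here is a minimal sketch in Python, invented purely for illustration; the characters, the single continuity rule, and every identifier below are hypothetical and come neither from Hill’s post nor from any published method. It only shows how one internal-consistency rule over a cast and a plot might be stated and verified automatically.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass(frozen=True)
    class Character:
        name: str
        dies_in_chapter: Optional[int] = None   # None: the character survives

    @dataclass(frozen=True)
    class Scene:
        chapter: int
        participants: Tuple[str, ...]            # names of characters on stage

    def consistency_violations(cast: List[Character],
                               plot: List[Scene]) -> List[Tuple[int, str]]:
        """Invented rule: no character may appear in a scene set after the
        chapter in which the story kills them off."""
        death = {c.name: c.dies_in_chapter for c in cast}
        return [(scene.chapter, name)
                for scene in plot
                for name in scene.participants
                if death.get(name) is not None and scene.chapter > death[name]]

    # Example: Mercutio dies in chapter 3 but is written into a chapter 5 scene.
    cast = [Character("Romeo"), Character("Mercutio", dies_in_chapter=3)]
    plot = [Scene(1, ("Romeo", "Mercutio")), Scene(5, ("Romeo", "Mercutio"))]
    print(consistency_violations(cast, plot))    # -> [(5, 'Mercutio')]

A real novel would of course demand far richer rules covering timelines, locations, and what each character knows when, but the principle is the same: once such rules are written down formally, a machine can check them.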
Fiction authors have developed their own instruments, ranging from Randy Ingermanson’s so-called “snowflake method” to Shawn Coyne’s “Story Grid.”
Address the Consequences of AI in Advance
The Viewpoints by Alan Bundy “Smart Machines Are Not a Threat to Humanity” and Devdatt Dubhashi and Shalom Lappin “AI Dangers:
Imagined and Real” (both Feb. 2017)
argued against the possibility of a
near-term singularity wherein superintelligent AIs exceed human capabilities and control. Both relied heavily on
the lack of direct relevance of Moore’s
Law, noting raw computing power
does not by itself lead to human-like
intelligence. Bundy also emphasized
the difference between a computer’s
efficiency in working an algorithm to
solve a narrow, well-defined problem
and human-like generalized problem-solving ability. Dubhashi and
Lappin noted incremental progress in
machine learning or better knowledge
of a biological brain’s wiring does not automatically lead to the “unanticipated spurts” of progress that characterize scientific breakthroughs.
These points are valid, but a more
accurate characterization of the situation is that computer science may
well be just one conceptual breakthrough away from being able to build
an artificial general intelligence. The
considerable progress already made
in computing power, sensors, robotics, algorithms, and knowledge about
biological systems will be brought to
bear quickly once the architecture of
“human-like” general intelligence is
articulated. Will that be tomorrow or
in 10 years? No one knows. But unless
there is something about the architecture of human intelligence that is
ultimately inaccessible to science, that
architecture will be discovered. Study
of the consequences is not premature.
Martin Smith, McLean, VA
ACM Code of Ethics vs. Autonomous Weapons
“Can We Trust Autonomous Weapons?” asked Keith Kirkpatrick at the top of his news story (Dec. 2016).
Autonomous weapons already exist
on the battlefield (we call them land
mines and IEDs), and, despite the
1997 Ottawa Mine Ban Treaty, we see
no decrease in their use. Moreover,
the decision as to whether to use
them is unlikely to be left to those
who adhere to the ACM Code of Ethics. The Washington Naval Treaty of
1922 was concluded between nation-states—entities that could be dealt
with in historically recognized ways,
including sanctions, demarches, and
wars. An international treaty between
these same entities regarding autonomous weapons would have no effect
on groups like ISIS, Al-Qaida, Hezbollah, the Taliban, or Boko Haram. Let
us not be naïve … They have access to
the technology, knowledge, and materials to create autonomous weapons,
along with the willingness to use them.
When they do, the civilized nations of
the world will have to decide whether
to respond in kind—defensive systems
with sub-second response times—or
permit their armed forces to be outclassed on the battlefield. I suspect the
decision will seem obvious to them at
the time.
Joseph M. Saur, Virginia Beach, VA
It was rather jarring to read in the
same issue (Dec. 2016) a column
“Making a Positive Impact: Updating
the ACM Code of Ethics” by Bo Brinkman et al. on revamping the Code and
a news article “Can We Trust Autonomous Weapons?” by Keith Kirkpatrick on autonomous weapons. Such
weapons are, of course, enabled
entirely by software that is presumably written by at least some ACM
members. How does the Code’s “Do
no harm” ideal align with building
devices whose sole reason for existing is to inflict harm? It seems that
unless this disconnect is resolved
the Code is aspirational at best and
in reality a generally ignored shelf-filling placeholder.
Jack Ganssle, Reisterstown, MD
DOI:10.1145/3047147