challenge this bald determinism. Let’s
scrutinize those briefly.
Deductive closure includes propositions that are not immediately obvious. But even where the programmers are not sure what exactly will happen, because of obscure compound conditions, the algorithm does not “make a decision.” What happens is an implication of the assertions in force (written into the code, if the programmer bothered to formulate assertions), that is, an implication of the deductive closure. The question whether programmers can be held responsible for such distant eventualities is significant, given that what we view as algorithmic bias does not often seem deliberate. In any case, the deciding agent is certainly not the machine.
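To make the point concrete, here is a toy sketch in Python; the screening rules are invented for illustration and drawn from no real system. Each condition looks innocuous on its own, yet together they quietly exclude applicants with short credit histories, and that exclusion is entailed by the assertions whether or not anyone worked it out in advance.

    # Hypothetical screening rules, written in advance by a programmer.
    def eligible(applicant: dict) -> bool:
        long_tenure = applicant["years_at_address"] >= 5
        thick_file = applicant["credit_accounts"] >= 4
        low_inquiries = applicant["recent_inquiries"] <= 1
        return long_tenure and thick_file and low_inquiries

    # A recent graduate who pays every bill on time:
    applicant = {"years_at_address": 1, "credit_accounts": 2, "recent_inquiries": 1}
    print(eligible(applicant))  # False: an implication of the rules, not a decision made by the machine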
Timing of interactions may result in unanticipated outcomes, as in passive investment through computerized stock trading. But unexpected states do not demonstrate demonic agency. Someone has decided in advance that it makes sense to sell a stock when it loses n% of its value. That’s not what we would call a real-time decision on the spot, because it ignores (1) the real time and (2) the spot. We would correctly call that a decision made earlier and elsewhere by system designers, which played out into unforeseen results.
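The mechanism is easy to exhibit. In a toy sketch with a made-up threshold, the whole “real-time decision” reduces to a comparison whose terms were fixed long before the market opened:

    # Hypothetical stop-loss rule; the threshold was chosen by a designer in advance.
    PURCHASE_PRICE = 100.00
    LOSS_THRESHOLD = 0.08  # "sell when the stock loses 8% of its value"

    def should_sell(current_price: float) -> bool:
        return current_price <= PURCHASE_PRICE * (1 - LOSS_THRESHOLD)

    for price in (97.50, 94.00, 91.75):
        if should_sell(price):
            print(f"sell at {price}")  # the deciding was done when the threshold was written
            break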
The pattern-matching of deep learning precludes the identification of symbolic variables and conditions. With no semantics available, no agent prominent, and no execution through a conditional structure traceable, the computer looks like the proximate decider. But no. If there are training cases, some complex combination of numeric variables has developed from given initial values that were adjusted over time to match a set of inputs with a set of outputs, where those matches were selected by the system designers. In unsupervised learning, regularities of some sort are uncovered, regularities that were already there in the data. Although it may be tempting to say that no one is deciding anything, certainly no computer is making anything that could be called a decision. Someone has planned antecedently to seek those regularities.
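A toy sketch, with invented numbers and a single weight, shows how little is left for the machine to decide: every quantity below, including the rule for adjusting the weight, was fixed by someone beforehand, and the final value of the weight is simply what that setup entails.

    # Toy supervised training: repeatedly adjust one weight w so that w * x
    # comes closer to the outputs the designers selected as correct.
    xs = [1.0, 2.0, 3.0, 4.0]  # inputs chosen in advance
    ys = [2.1, 3.9, 6.2, 8.0]  # outputs declared correct in advance
    w, rate = 0.0, 0.01        # initial value and step size, also chosen in advance

    for _ in range(200):       # the adjustment schedule is fixed beforehand
        for x, y in zip(xs, ys):
            error = w * x - y
            w -= rate * error * x  # nudge w toward the designated matches

    print(round(w, 3))  # about 2.0: a consequence of the setup, not a choice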
Selection, recommender, and classification systems use the criteria implemented in their decision structure. We in the trade all know that, whatever the algorithmic technique, the computer is not deciding. To explain to the public that computers are dumb may baffle and frustrate rather than educate. The malapropisms that grant agency to algorithms confuse not only the determination of responsibility and liability but also the public grasp of Tech overall. People may attempt to “persuade” the computer, or try to fix, enhance, or “tame” the programs, rather than just rejecting their inappropriate deployment. At the extreme,
people feel helpless and fearful when
danger comes from beings like us—
willful, arbitrary, capricious—except
more powerful. Worse yet would be
apathy: Society may ignore the difficulties and become resigned to the
results, as if such programmed assessments were factive.
What would be the correct locution, the correct way to say it, passive toward machine and active toward programmer (or designer or developer or specification writer or whomever)? How should we note that “the deductive closure of home mortgage qualification criteria entails red-lining of certain neighborhoods” other than to say those exact words, which are not compelling? How should we say that “The repeated adjustment of weighting criteria applied to a multi-dimensional function of anonymous variables, closely approximating an unknown function for which some correct outcomes have been identified by past users, associates this individual record with your own discrete declared criteria for a date” without saying “the dating app has chosen this match for you”?
We have no other way of expressing
such outcomes easily. We lack the verbs
for computing that denote reaching
states that look like decisions, and taking actions that look like choices. We
need a substitute for “decides” in “the
algorithm decides that X,” something
to fill in the blank in “the program
_______ X.” Perhaps “the program fulfills X.” Perhaps “the program derives
that X.” Well ... this seems lame. The
trouble really is that we have to avoid
any verb that implies active mental
function. This is new. This is unique
to computing, as far as I can tell. The
Industrial Revolution brought us many
machines that seemed to have human
capacities, but they also had material
descriptions. For mechanical devices,
verbs are available that describe physical functionality without the implication of cognition: “The wheel wobbles.” “The fuel line clogged.” We may
say, jokingly or naively, that “the car
chooses not to start today,” but we are
not forced into it by lack of vocabulary.
For this new technological requirement, the best locution I can come up with is, “the result of the programmed assumptions is that X.” I have not heard anyone seriously appeal to “computer error” as a final explanation for some time; that seems like progress in understanding Tech. If we can forgo that locution, maybe we can forgo “biased algorithms.”
Any other ideas?
Robin K. Hill is adjunct professor in the Department of
Philosophy, and in the Wyoming Institute for Humanities
Research, of the University of Wyoming. She has been a
member of ACM since 1978.
© 2018 ACM 0001-0782/18/8 $15.00