Consumer bias can occur from either party
in a digital interaction. Even more
deliberately, anonymous online hackers purposely “taught” Microsoft’s Tay
chatbot, which was opened to the public
for only a few days in 2016, to respond
with racially objectionable statements.
Effectively, the algorithm or platform
provides users with a new venue within
which to express their biases.
Feedback Loop Bias. Algorithmic systems create a data trail. For example,
the Google Search algorithm responds
to and records a query that becomes
customized input for subsequent
searches. The algorithm learns from
user behavior. Consider predictive policing, where the algorithm relies almost entirely on historical crime data.
Suppose the algorithm sends police
officers into a neighborhood to prevent crime. Not surprisingly, increased
police presence leads to higher crime
detection, thereby raising the statistical crime rate. This can motivate the
dispatch of more police, who make
more arrests, thereby initiating a feedback loop. In another example, Google
Search can learn that ethnically biased
websites are often selected and therefore recommend them more often, thereby propagating the bias. As smart as
algorithms can be, human monitoring
continues to be necessary.
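The predictive-policing loop described above can be sketched in a few lines. This is a minimal illustration with invented numbers, not a model of any real dispatch system: two neighborhoods have the same underlying crime rate, but patrols are allocated in proportion to *detected* crime, so the neighborhood with slightly more historical data attracts ever more patrols and records ever more crime.

```python
def simulate(rounds=10, true_rate=0.1, detections=(105, 100), patrols=100):
    """Allocate patrols in proportion to detected crime; detection is
    proportional to patrol presence times the (identical) true crime rate."""
    detected = list(detections)  # hypothetical historical crime data
    for _ in range(rounds):
        total = sum(detected)
        # dispatch more police where more crime was previously detected
        alloc = [patrols * d / total for d in detected]
        # more presence -> more detection, even though true_rate is equal
        detected = [d + a * true_rate for d, a in zip(detected, alloc)]
    return detected

final = simulate()
# The initial gap in recorded crime (5 incidents) widens every round,
# although both neighborhoods have identical underlying crime rates.
print(final[0] > final[1])  # prints True
```

The point of the sketch is that nothing in the loop corrects for the fact that detection depends on presence; the recorded statistics confirm, and then amplify, the initial allocation.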
Benefits of Platforms
The potential benefits of algorithmic decision-making are less often noticed: algorithms can also be used to decrease social bias. It is well known that the use of variables identifying membership in certain groups can lead to discrimination. This is the conundrum: in certain cases, such variables must intentionally be used to produce less-biased outcomes.
Algorithmic Processing Bias. Bias can
be embedded in the algorithm itself.
One source of such bias is the inclusion
and weighting of particular variables.
Consider the case of a firm’s chief scientist who found that “one solid predictor of strong coding is an affinity for a particular Japanese manga site.”10 If this is embodied in job-candidate-sorting software, then this seemingly innocuous
choice might exclude particular qualified
candidates. Effectively, a desired proxy
trait inadvertently excludes certain
groups that could perform the job.
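A hypothetical sketch of the résumé-sorting example above makes the mechanism concrete: a screening rule gives substantial weight to an incidental proxy trait (here, visiting the manga site) alongside a genuine skill signal, and thereby filters out an equally qualified candidate who simply lacks the proxy trait. All names, fields, and weights are invented.

```python
candidates = [
    {"name": "A", "coding_test": 0.9, "visits_manga_site": True},
    {"name": "B", "coding_test": 0.9, "visits_manga_site": False},
    {"name": "C", "coding_test": 0.5, "visits_manga_site": True},
]

def screen(c, threshold=1.0):
    # the proxy trait carries substantial weight in the overall score
    score = c["coding_test"] + (0.4 if c["visits_manga_site"] else 0.0)
    return score >= threshold

shortlist = [c["name"] for c in candidates if screen(c)]
# A and B have identical test scores, but only A passes the screen:
# the proxy trait, not coding ability, decides the outcome.
print(shortlist)  # prints ['A']
```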
Transfer Context Bias. Transfer context bias occurs when algorithmic output is applied to an inappropriate or unintended context. One example is using
credit scores to make hiring decisions.
Bad credit is equated with inferior future
job performance, despite little evidence
that credit scores are related to work
performance. If the undesirable but irrelevant trait is correlated with ethnicity, then it might lead to biased outcomes.
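The credit-score example can be sketched as follows. This is a hypothetical illustration with invented numbers: a score built for lending is reused to screen job applicants; the score is unrelated to job performance but correlated with group membership, so a reused cutoff produces disparate hiring rates between equally capable groups.

```python
applicants = [
    {"group": "A", "credit": 700, "job_perf": 0.8},
    {"group": "A", "credit": 710, "job_perf": 0.6},
    {"group": "B", "credit": 620, "job_perf": 0.8},
    {"group": "B", "credit": 630, "job_perf": 0.6},
]

def hire_rate(group, cutoff=650):
    """Fraction of a group passing a credit cutoff borrowed from lending."""
    pool = [a for a in applicants if a["group"] == group]
    return sum(a["credit"] >= cutoff for a in pool) / len(pool)

# Job performance is identical across groups, but hiring outcomes are not.
print(hire_rate("A"), hire_rate("B"))  # prints 1.0 0.0
```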
Interpretation Bias. Interpretation
bias arises when users interpret algorithmic outputs according to their internalized biases. For example, a judge
can receive an algorithmically generated recidivism prediction score and decide on the punishment or bail amount
for the defendant. Because individual
judges may be unconsciously biased,
they may use the score as a “scientific”
justification for a biased decision.
Outcome Non-Transparency Bias. Algorithms, particularly artificial intelligence and machine learning, often generate opaque results. The reasons for the results may even be inexplicable to the algorithm’s creators or the software’s owner. For example, when a machine-learning program recommends denial of a loan application, the bank official conveying the decision may not know the exact reasons for the denial. The absence of transparency makes it difficult for the subjects of these decisions to identify discriminatory outcomes or even the reasons for the outcome.
Automation Bias. Automation bias results from the belief that the output is fact, rather than a prediction with a confidence level. For instance, credit decisions are now fully automated and use group aggregates and personal credit histories.13 The algorithm gives certain people lower scores and limits their access to credit. Credit denial means their scores cannot improve. Often, the subjects and the decision-makers are unaware of the algorithm’s assumptions and uncritically accept the decisions. The European Union’s GDPR Article 22 has attempted to provide some protection by limiting automated algorithmic decision processes for decisions with legal or similarly significant effects.
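The distinction between a prediction with a confidence level and a fact can be sketched in code. This is a minimal illustration with invented numbers, not any real credit model: the model emits a probability with a rough uncertainty band, but the automated pipeline collapses it into a hard yes/no that downstream staff then treat as fact.

```python
def model_output(applicant):
    # hypothetical model: returns P(default) and a rough confidence interval
    p = applicant["predicted_default"]
    margin = 0.10  # model uncertainty, routinely discarded downstream
    return p, (max(0.0, p - margin), min(1.0, p + margin))

def automated_decision(applicant, cutoff=0.30):
    p, _interval = model_output(applicant)  # the interval is thrown away
    return "deny" if p > cutoff else "approve"

applicant = {"predicted_default": 0.32}
p, interval = model_output(applicant)
print(automated_decision(applicant))  # prints deny
# Yet the interval (0.22, 0.42) straddles the cutoff: the "denial" is a
# borderline prediction, not a fact. That nuance is what automation bias hides.
print(interval[0] < 0.30 < interval[1])  # prints True
```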
Consumer Bias. The biases that human beings act upon in everyday life are expressed in their online activities. Further, digital platforms can exacerbate or give expression to latent bias in online behavior. Users may consciously or unconsciously discriminate on the basis of a user profile that contains ethnically identifiable characteristics.
Potential biases and where they may be introduced in the algorithmic value chain:
1. Training Data Bias
2. Algorithm Focus Bias
3. Algorithmic Processing Bias
4. Transfer Context Bias
5. Interpretation Bias
6. Outcome Non-Transparency Bias
7. Automation Bias
8. Consumer Bias
9. Feedback Loop Bias (user-modified data fed back into input)
Source: The first six biases were adapted from Danks, D., & London, A. I. (2017). The visualization and remaining materials are by Silva and Kenney.