the law enforcement community make
decisions that are affected by a defendant’s “demeanor,” dress, and other
characteristics that may correlate with
ethnicity—an algorithmic process
does not “see” these characteristics.
This offers the potential for mitigating such bias. For example, Kleinberg
et al. created a machine-learning algorithm that could do a better job than
judges in making bail decisions.[7] The algorithm was optimized to reduce
ethnic disparities among those incarcerated while also reducing the rate
of reoffending. This optimization was
possible because a disproportionately
high number of people in certain racial
groups are incarcerated. The point is
that it is possible to design algorithms
with different social goals. Critics ignore the fact that the data and tools can be
used to decrease inequity and improve
efficiency and effectiveness.
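To make this point concrete, consider a minimal sketch in which the decision threshold is itself treated as a policy choice. This is not a reconstruction of Kleinberg et al.'s method: the synthetic data, the 5% cap on the detention-rate gap, and all variable names below are illustrative assumptions.

```python
# Illustrative sketch only (not Kleinberg et al.'s actual model).
# Given risk scores, search over release thresholds to trade off the
# reoffending rate among released defendants against the gap in
# detention rates across two groups. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                                # synthetic group label
risk = np.clip(rng.normal(0.4 + 0.1 * group, 0.2, n), 0, 1)  # model risk score
reoffends = rng.random(n) < risk                             # synthetic ground truth

def evaluate(threshold):
    detained = risk >= threshold
    reoffense_rate = reoffends[~detained].mean()             # among those released
    gap = abs(detained[group == 0].mean() - detained[group == 1].mean())
    return reoffense_rate, gap

# Choose the threshold with the lowest reoffense rate among released
# defendants, subject to the detention-rate gap staying under a cap
# (5% here); the cap encodes a social goal alongside accuracy.
candidates = [(t, *evaluate(t)) for t in np.linspace(0.1, 0.9, 81)]
feasible = [c for c in candidates if c[2] <= 0.05]
best = min(feasible, key=lambda c: c[1])
print(f"threshold={best[0]:.2f}  reoffense={best[1]:.3f}  gap={best[2]:.3f}")
```

Raising or lowering the disparity cap, or swapping in a different objective, is exactly the kind of design decision the preceding paragraph argues is available to algorithm builders.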
Because algorithms are machines,
they can be redesigned to improve
outcomes. To illustrate, a sales website could be reengineered to, for example, provide greater anonymity and thus reduce opportunities for consumer bias. Because all digital activities leave records, it is also easier to detect biased behavior and thus to reduce it. A government agency, for example, could study online behavioral patterns to identify biased behavior; once such behavior is identified, it can be countered. It would be straightforward, for instance, to assess whether consumers are biased in their evaluations of online vendors and to impose a standardization algorithm that mitigates such bias. Thus, while platforms and algorithms can be used in a discriminatory
manner, they also can be studied to
expose and address bias. Of course, the
will to do so is necessary.
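As one hedged illustration of what such detection and standardization might look like, the sketch below compares ratings across vendor groups and then re-expresses each rating on the reviewer's own scale. The toy data, column names, and the choice of a t-test plus per-reviewer z-scores are assumptions for exposition, not an established platform practice.

```python
# Hedged sketch: one way an analyst might test whether consumer ratings
# differ systematically by vendor group, then normalize each reviewer's
# ratings so group-level gaps are measured on a common scale. The data
# and column names are hypothetical; a real audit would also control
# for price, product category, service quality, and so on.
import pandas as pd
from scipy import stats

reviews = pd.DataFrame({
    "reviewer":     ["r1", "r1", "r2", "r2", "r3", "r3"],
    "vendor_group": ["A",  "B",  "A",  "B",  "A",  "B"],
    "rating":       [5,    3,    4,    3,    5,    4],
})

# 1) Detect: compare rating distributions across vendor groups.
a = reviews.loc[reviews.vendor_group == "A", "rating"]
b = reviews.loc[reviews.vendor_group == "B", "rating"]
t_stat, p_value = stats.ttest_ind(a, b)
print(f"mean A={a.mean():.2f}  mean B={b.mean():.2f}  p={p_value:.3f}")

# 2) Standardize: z-score within each reviewer, so systematic over- or
# under-rating of one group shows up relative to that reviewer's own
# scale rather than in raw stars.
reviews["z"] = (reviews.groupby("reviewer")["rating"]
                .transform(lambda r: (r - r.mean()) / r.std(ddof=0)))
print(reviews.groupby("vendor_group")["z"].mean())
```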
Computer scientists have a unique challenge and opportunity to use their skills to address the serious social problem of bias. We contribute to increased awareness by developing a readily understandable visual model for identifying where bias might emerge in the complex interaction between algorithms and humans. While we focus on ethnic bias, it is possible to extend our model to other types of bias. The model can be particularly useful in policy discussions to explain to policymakers and laypersons where a particular initiative could have an impact and what it would not address.
Interest in mitigating algorithmic bias has increased, but "correcting" the data to increase fairness can be hampered by determining what is "fair." Some have suggested that transparency would provide protection against bias and other socially undesirable outcomes.[2] Leading computing professional organizations such as ACM are aware of the problems and have established principles to guide their members in addressing these issues. For example, in 2017 the ACM Public Policy Council issued a statement of general principles regarding algorithmic transparency and accountability that identified potential bias as a serious issue.[1] Unsurprisingly, firms resist transparency, maintaining that revelation of their data and algorithms could allow other actors to game their systems. In many cases, this response is valid, yet it is also self-serving, as it prevents scrutiny. Moreover, software developers often cannot provide definitive explanations of complex algorithmic outcomes, meaning transparency alone may be unable to provide accountability. Further, a single algorithmic model may contain multiple sources of bias that interact, making any one source harder to trace. However, even in such cases, outcomes can be tested to discover evidence of potential bias.
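A minimal sketch of such outcome testing, assuming the auditor sees only a log of decisions with a group attribute (the data, field names, and the 0.80 flag threshold, the informal "four-fifths rule," are illustrative assumptions):

```python
# Minimal outcome audit over a decision log: compute approval rates by
# group and the disparate-impact ratio between the lowest and highest
# rate. The log and field names are hypothetical; the 0.80 threshold
# is a common screening heuristic, not a legal test.
import pandas as pd

log = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [True] * 60 + [False] * 40 + [True] * 42 + [False] * 58,
})

rates = log.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"impact ratio = {ratio:.2f}" +
      ("  (below 0.80: flag for closer review)" if ratio < 0.8 else ""))
```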
Platforms, algorithms, software, data-driven decision-making, and machine learning are shaping choices,
alternatives, and outcomes. It is vital to understand where and how social ills such as bias can be expressed and reinforced by digital technologies. Algorithmic bias can be addressed and, for this reason, critics who suggest these technologies necessarily will exacerbate bias are too pessimistic. Digital processes create a record that can be examined and analyzed with software tools. In the analog world, ethnic or other kinds of discrimination were difficult and expensive to study and identify. In the digital world, the data captured is often permanent and can be analyzed with existing techniques. Although digital technologies have the potential to reinforce old biases with new tools, they can also help identify and monitor progress in addressing ethnic bias.

References
1. ACM Public Policy Council. Statement on Algorithmic Transparency and Accountability (2017), 1–2.
2. Ananny, M. and Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society 20, 3 (Mar. 2018), 973–989.
3. Barocas, S. et al. Big Data, Data Science, and Civil Rights. arXiv preprint arXiv:1706.03102 (2017).
4. Caliskan, A., Bryson, J.J., and Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (2017), 183–186; https://doi.org/10.1126/science.aal4230
5. d'Alessandro, B., O'Neil, C., and LaGatta, T. Conscientious classification: A data scientist's guide to discrimination-aware classification. Big Data 5, 2 (Feb. 2017).
6. Danks, D. and London, A.J. Algorithmic bias in autonomous systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (Aug. 2017), 4691–4697.
7. Kleinberg, J. et al. Human decisions and machine predictions. Quarterly Journal of Economics 133, 1 (Jan. 2017), 237–293.
8. Lessig, L. Code: And Other Laws of Cyberspace (2009).
9. O'Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books, New York, 2016.
10. Peck, D. They're watching you at work. The Atlantic (Dec. 2013); https://bit.ly/2jhKIt4
11. EU GDPR Portal. Key Changes with the General Data Protection Regulation (2017).
12. Silva, S. and Kenney, M. Algorithms, platforms, and ethnic bias: An integrative essay. Phylon: The Clark Atlanta University Review of Race and Culture 55, 1–2 (2018).
13. Zarsky, T. The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, and Human Values 41, 1 (Jan. 2016), 118–132.

Selena Silva (firstname.lastname@example.org) is a research assistant at the University of California, Davis, USA.

Martin Kenney (email@example.com) is a Distinguished Professor in the Department of Human Ecology at the University of California, Davis, CA, USA, and is Research Director for the Berkeley Roundtable on the International Economy, Berkeley, CA, USA.

This research was funded in part by the Ewing Marion Kauffman Foundation and Clark Atlanta University. The contents of this Viewpoint are solely the responsibility of the authors.

Copyright held by authors.