team composition and team expertise
as additional categories.
Taxonomies of existing biases
can be very helpful here but must be
turned into pragmatic support for
teams. Certain types of taxonomies are easier to operationalize than others.
For example, when we reviewed
Friedman and Nissenbaum’s
taxonomy of biases in computational
systems, it included categories such
as preexisting bias, technical bias,
and emergent bias. While this work
was informative, it was difficult to
use the taxonomy in practice, as the
categories did not point to underlying
causes to evaluate or actions to
take. More recent taxonomies of
algorithmic and data biases allowed
us to classify problems in a way that
points out how to intervene.
For data bias, we used Olteanu
et al.’s perspective on systemic
distortions that compromise
data’s representativeness [8] as
a starting point. The framework
comprehensively examines biases
introduced at different levels of
data gathering and usage. While it originally focuses on social data analysis, we were able to translate it for our purposes. It also raises an interesting dilemma: representative data may faithfully reflect existing societal biases and disadvantages. Similarly, the bias taxonomy by Ricardo Baeza-Yates [9] was relatively easy to translate.
It consists of six types: activity bias,
data bias, sampling bias, algorithm
bias, interface bias, and self-selection
bias. These biases form a directed
cycle graph; each step feeds biased
data into the next stage, where
additional biases are introduced. The
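This cyclical structure can be sketched as a small directed graph. The ordering below is an illustrative assumption on our part, not taken verbatim from [9]:

```python
# Sketch of a bias feedback cycle over the six types in [9].
# Each stage feeds (possibly biased) data into the next; after the
# last stage the cycle wraps around, so bias can compound over time.
# The specific ordering here is an illustrative assumption.
BIAS_CYCLE = [
    "activity bias",
    "data bias",
    "sampling bias",
    "algorithm bias",
    "interface bias",
    "self-selection bias",
]

def next_stage(stage: str) -> str:
    """Return the stage that receives this stage's (biased) output."""
    i = BIAS_CYCLE.index(stage)
    return BIAS_CYCLE[(i + 1) % len(BIAS_CYCLE)]

def upstream_stages(stage: str) -> list[str]:
    """Stages whose biases have already fed into `stage` this cycle."""
    i = BIAS_CYCLE.index(stage)
    return BIAS_CYCLE[:i]
```

Viewing the taxonomy as a graph rather than a flat list is what makes it actionable: intervening at one stage also limits what feeds into every downstream stage.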
model’s breakdown into stages potentially makes it easier to find targets for initial interventions.
We summarized three categories
of entry points for biases (Figure 3):
• Data: characteristics of the input data.
• Algorithm and team: model characteristics, as well as team composition and expertise.
• Outcomes: desired outcomes, such as recommendation content and how it is served.
The checklist asks whether each identified bias is expected to affect the project’s results, what priority it should be given, and what could be done to address it.
While such checklists can never be a
complete overview, they can support
prioritization and education.
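A checklist entry of this kind can be sketched as a small data structure. The field names and priority scheme below are our own illustration; the actual v0 checklist is not reproduced here:

```python
from dataclasses import dataclass, field
from enum import Enum

class EntryPoint(Enum):
    """The three categories of bias entry points (Figure 3)."""
    DATA = "data"                       # characteristics of the input data
    ALGORITHM_TEAM = "algorithm+team"   # model characteristics and team
    OUTCOMES = "outcomes"               # desired outcomes

@dataclass
class ChecklistItem:
    bias: str                  # e.g., "sampling bias" (illustrative)
    entry_point: EntryPoint
    affects_results: bool      # expected to affect the project's results?
    priority: str              # e.g., "high", "medium", "low"
    mitigations: list[str] = field(default_factory=list)

def prioritized(items: list[ChecklistItem]) -> list[ChecklistItem]:
    """Surface items expected to affect results, highest priority first."""
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(
        (i for i in items if i.affects_results),
        key=lambda i: order.get(i.priority, 3),
    )
```

Even a simple structure like this supports the two uses named above: prioritization (sorting and filtering) and education (the enumerated entry points remind teams where biases can enter).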
Checklist lessons learned.
During an early pilot of our v0 checklist with two machine-learning-heavy teams, data quality (especially the differing availability of historical data across markets) and the cyclical nature of bias were reported as potential issues. The effect of organizational structure also became apparent in the pilot.
There could be biases in upstream datasets, but it was not directly apparent whether those datasets could be changed, or who should then take on that task, especially when side effects were unclear.
When services and pipelines build
on each other, alignment is necessary
across different teams to make
sure resourcing is most effective,
especially when infrastructure has
to be built. Teams own different
parts of the infrastructure. A change
in one pipeline may affect multiple
services, with potentially unforeseen
consequences for products in the wild.
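The coordination problem can be made concrete with a toy dependency graph: given which services consume which pipelines, a change to one upstream dataset can be traced to every downstream product. All names and the graph structure below are illustrative assumptions:

```python
from collections import deque

# Toy pipeline/service dependency graph: edges point downstream,
# from a dataset or pipeline to the things built on top of it.
# All names are illustrative, not real infrastructure.
DOWNSTREAM = {
    "raw_logs": ["features_pipeline"],
    "features_pipeline": ["recs_model", "search_ranker"],
    "recs_model": ["home_feed"],
    "search_ranker": ["search_page"],
}

def affected(changed: str) -> set[str]:
    """Everything downstream of `changed`, via breadth-first traversal."""
    seen: set[str] = set()
    queue = deque(DOWNSTREAM.get(changed, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(DOWNSTREAM.get(node, []))
    return seen
```

Even this toy traversal shows why a fix to one upstream dataset needs sign-off from several owning teams: the set of affected products is rarely visible to the team proposing the change.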
Teams may also need help negotiating priorities with other teams when they find they cannot fix an issue themselves, or when they may be affecting the outcomes of another team’s services.
Figure 1. Three types of effort required to address algorithmic bias (communication, research, product + tech impact; methods/processes).
Figure 2. Organization-wide and product-specific methods (a shared framework and priorities across the organization, with product-area-specific methods for individual products).
Figure 3. Three categories of bias entry points: data; model and team (team and target decisions, models and services); and outcomes.