groups in industry and academia
pushing this conversation, including
the People + AI Research team (PAIR)
at Google (you can see our thinking on
a human-centered approach to AI at
https://design.google/library/ai/), the
AI Now Institute, and the Algorithmic
Fairness and Opacity Working Group.
Our call to action is to apply a
global lens to our core AI assumptions,
whether in the training data, model
performance, explainability of systems,
or deciding which human needs to
address with AI in the first place.
Only then can we truly make AI work
for humanity.
Endnotes
1. CIA World Factbook, 2015; https://www.cia.gov/library/publications/the-world-factbook/fields/2103.html
2. Thies, I.M. User interface design for low-
literate and novice users: Past, present
and future. Foundations and Trends®
in Human–Computer Interaction 8, 1
(2015), 1–72.
3. Sambasivan, N., Checkley, G., Batool,
A., Gaytán-Lugo, L.S., Matthews, T.,
Consolvo, S., and Churchill, E. “Privacy
is not for me, it’s for those rich women”:
Performative privacy practices on mobile
phones by women in South Asia. Proc. of
SOUPS 2018. USENIX Association, 2018.
4. ITU Facts and Figures, 2017; https://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2017.pdf
5. UNICEF State of the World’s Children, 2017; https://www.unicef.org/publications/files/SOWC_2017_ENG_WEB.pdf
6. Ayyub, R. In India, journalists face
slut-shaming and rape threats. New York
Times. May 22, 2018; https://www.
nytimes.com/2018/05/22/opinion/india-
journalists-slut-shaming-rape.html
Nithya Sambasivan is a UX researcher in
Google AI. She co-leads research on building
human-centered AI in emerging markets with
Jess Holbrook. She has a Ph.D. from UC Irvine
and an M.S. in HCI from Georgia Tech. Her
research has won top awards at HCI and ICTD
conferences.
→ nithyasamba@google.com
Jess Holbrook is a UX manager and UX
researcher in Google AI. He and his team take
a human-centered and technology-inspired
approach to building AI-powered products like
Google Clips, Lens, and AIY. He co-leads the
People + AI Research (PAIR) group, which aims
to make AI accessible and help people solve meaningful
problems themselves. He has a Ph.D. in
psychology from the University of Oregon.
→ jessh@google.com
CHECK THE POTENTIAL
SOCIETAL ABUSE OF AI
As AI becomes more sophisticated
in its ability to not only detect and
recognize but also manipulate
entities, it increases the risk of
causing serious damage to various
underserved communities. An
important research area lies in
proactively understanding the
potential algorithmic manipulation
of any personally identifiable or
attributable content to cause harm
to individuals. Deepfakes, fake
images, and videos powered by deep
learning present new challenges with
manufacturing fake news, malicious
content, and pornographic content.
Such synthesis techniques can cause
serious harm to non-privileged
groups through wider circulation on
social networks, with broad social
repercussions. Take the case of Indian
journalist Rana Ayyub, who discusses
how she has been constantly harassed
through deepfake pornographic
videos made of her, causing massive
reputation damage [6]. Such image-manipulation incidents travel virally
and further impact online expression;
in our research on gender equity, we
found that 61 percent of women across
seven countries, including Brazil,
India, and Indonesia, proactively
uploaded profile photos using non-face images like flowers, animals,
landscapes, and group photos to avoid
personal-image manipulation, based
on incidents they had heard about in
the news.
Safety issues are complex and
require the joint involvement of
technology policy, law enforcement, and
institutional reform for any lasting
change. Safeguards against bad
actors and anti-abuse management
are essential in the formative design
principles of systems. Technology
moderation and takedown policies
should grow to encompass various
cultural contexts. We should practice
inclusive and participatory design and
always consider all the stakeholders
of the system; even if we leave out
5 percent of stakeholders and the
technology works well for 95 percent
of cases, there is the potential for
unintended consequences.
UNDERSTAND AI POLICY
IMPLICATIONS GLOBALLY
A growing number of AI researchers
are building laudable applications for
social good domains in healthcare,
agriculture, social justice, and more. At
the same time, we should not lose sight
of how current AI trends and policies
on automation and digitization affect
societies all over the world. Difficult,
polarizing questions are being raised
about the impact of automation on
jobs, skills, and wages. Most Global
South economies are heavy on the
informal and outsourcing sectors, such
as call centers, data entry, and low-level factory jobs. Entire industries
are vulnerable to job displacement.
In our ongoing research on the future
of work in vocational sectors in India,
we find that most technicians have
little to no awareness of the future-of-work discourse or its developments.
Skill reinvention of economically
disadvantaged workers is key to
resilience and job readiness in the
future. Policy interventions for a jobless
future like universal basic income are
being proposed in the Global North,
but such proposals need to take into
consideration the realities of low
incomes, corruption, and very large
populations in other regional contexts.
Governments in the Global South are
increasingly pushing for the digitization
of nation-states to eliminate middlemen,
inform social welfare decisions, and
increase resource-allocation efficiency.
Algorithms and data are now being used
for human welfare in the Global South,
for example, in vaccine deployments and
food rations. Citizenship is increasingly
defined by digitization, as with India's
Aadhaar program. In a context where
large groups of people are already below
the poverty line and have fragile access
to social welfare, errors and biases in
automated decision making can be
serious (e.g., misallocating food rations).
Audits, interface evaluations, and public
user studies could bring the concerns to
the fore.
A CALL TO ACTION
It is our responsibility as the HCI
research community to influence the
ways in which AI is perceived, adopted,
and normalized globally. There are
DOI: 10.1145/3298735 COPYRIGHT HELD BY AUTHORS