@INTERACTIONSMAG 68 INTERACTIONS JANUARY–FEBRUARY 2019
This is a forum for perspectives on designing for communities marginalized by economics, social status, infrastructure,
or policies. It will discuss design methods, theoretical and conceptual contributions, and methodological engagements for
underserved communities. — Nithya Sambasivan, Editor
FORUM THE NEXT BILLION
Toward Responsible AI for the Next Billion Users
Nithya Sambasivan and Jess Holbrook, Google

Insights
→ Implicit beliefs, biases, and issues from Western contexts may be normalized in AI, making it less globally responsible.
→ We need to apply a global lens to our core AI assumptions, whether in training data, model performance, or the explainability of systems.
→ It is important to ask difficult questions about positive and negative impacts early on, rather than introducing repairs and post-hoc fixes.

AI is starting to be integrated into diverse domains of human life. And as countries like India, Brazil, and Nigeria experience massive growth online, AI technologies are increasingly intersecting with new user groups, applications, datasets, and regulations.

Much of AI's path has been shaped by its originating contexts in Western nations. As AI touches the fundamental underpinnings of the technological universe, it is incumbent upon us in the HCI community to ask the who questions. We call upon the community to challenge implicit assumptions and biases, and to integrate various global communities into the discourse and development of AI. The ground realities of growing Internet penetration, novel applications, multiple languages, low-end devices, services with global reach, cultural norms, and more require globally relevant AI models and products. While AI is still emerging in the Global South, engaging and analyzing today will help us create an inclusive AI in the near future. An intimate understanding of user practices, value systems, and implications for various communities worldwide is essential to creating human-centric AI that is meaningful and ethical for all.

In this article, we present research provocations for AI for the next billion users, to spur a conversation on the implicit beliefs, biases, and issues that may be normalized in AI. As much of AI's functioning is still not well understood or fully developed, we believe these research areas are crucial to shaping inclusive AI as it becomes more complex, powerful, and present in daily life. We bring our perspectives as HCI and social scientists who work closely with AI researchers. We have started to address some of these areas in our research and invite further exploration from the research community.

BE ROBUST TO MULTILINGUALISM AND DIVERSE LITERACIES
As the profiles of people coming online change over the next few years, new languages and diverse literacies will come into the fray. An estimated 40 percent of Nigerians and 29 percent of Indians were non-literate in 2015 [1]. Many of the current assumptions of AI systems may need rethinking, such as what constitutes user models, meaningful interactions, training data, and signals to improve AI models. Consider the possibility that lower literacy may confine users' technology use to specific online activities, such as visual browsing or memorized sequences. A corresponding consideration is whether lower-literacy users may be served low-quality or impersonal outputs by current AI models, because the models may rely on features derived from majority literate users. For example, if a user chooses three videos in a row based on the relevance of the video thumbnails or simply the order of presentation (rather than the user's actual goals), the models can form a feedback loop in which more results based on these spurious behavioral signals are presented and watched, because the user is unaware of any alternatives. In addition, the formulation of queries and requests needs to be designed to include low-literate users, who, as research by Thies et al. [2] shows, face difficulty with abstraction, an ability derived from formal education.

Many countries in the Global South are officially or informally multilingual, with a combination of official, native, and trade languages. Multilingualism and vernacular languages present interesting new challenges for NLP and AI. As voice interfaces surge in popularity, we