load files but also to upload them. We felt this
would provide the flexibility that Gaver had recommended,
allowing users to place their own interpretation on the technology.
Before deploying the system, we had to think
about how we would evaluate it. What would
count as success? Our experience in running evaluations in the developing world has shown a very
strong Hawthorne effect, with the subjects keen
to give experimenters the results they require.
Clearly, observing what was going on at the community hall would preclude unbiased results. In
Gaver’s work, he employed professional documentary makers who would interview participants
and create a video presentation that drew out the
points that seemed most relevant to the subjects.
In this way, Gaver’s team could get an unbiased
interpretation of the results from the intervention.
In our project, we felt nervous about unleashing
a documentary team on our subjects, who would
have had little experience in dealing with interviews. Having them followed around by a video
team was unlikely to create an accurate reflection
of the users’ true feelings. Instead, we adapted
the documentary approach to our context. Rather
than finding a professional team to do the interviews, we recruited two journalism students, who
were from the same language group as our subjects. These students were able to interview the
subjects in a nonthreatening way, but due to their
training, they could report results to us in a way
that allowed us to assess the technology’s impact.
We told the students only that we wanted them to
find out how some new technology had affected
the lives of the people living in the target community; they didn’t know that we had placed the
technology there in the first place.
So, after training some users on how to interact
with the system and lining up the journalists to
conduct the evaluation, we sat back to see what would happen.
The results from the intervention were both
surprising and encouraging. As Gaver had anticipated, no amount of ethnographic study or consultation would have predicted the ways in which
people used the system. For example, the board
became a venue for women in choirs to exchange
local gospel music. On weekends they recorded
their performances on the handset and then
uploaded the recordings to share with ladies from
other choirs. This usage was discoverable only
by creating a complete, robust system that users
could appropriate in ways that were truly complementary to their lifestyles.
So should we give up prototyping and just build
complete systems and hope they work out fine?
Definitely not. There are many instances in which
low-fidelity prototypes are entirely appropriate
and will help resolve design issues. However, when
one is considering how technology is appropriated (as opposed to discovering if it is usable), it
is important to remember that even the users
themselves cannot predict how a given technology
might fit into their lives.
Therefore, based on our recent experience, we
recommend that you not despair if users do not
appropriate your technology in the way you anticipated. Rather, embrace uncertainty and build
it into the system so that users can modify the
technology to meet their needs. In our case, we did
that by not prescribing a use for the system—for
example, using it to distribute health information
to a user’s handset. By allowing users to contribute
any form of information to the system, we created
the space for them to explore ways of appropriating the technology. As a result, we found an application that was unlikely to have emerged through
any other means. By using journalism students
who were familiar with the users’ culture and
language, we were also able to assess the impact
of that application in a way that was not possible
using direct observations or even questionnaires—
the creation of a questionnaire would almost
certainly have required us to anticipate all possible
outcomes in advance, narrowing our evaluation's focus.
This experience has inspired me to think more
about why doing interaction design with developing-world users differs from working with users in
the developed world. Understanding the divergences will provide insight into which methods can be
applied in both developed and developing domains.
ABOUT THE AUTHOR Gary Marsden is
currently employed as an associate professor in the
department of computer science at the University
of Cape Town in South Africa. He was born in
Ireland, studied in Scotland, and had his first job in
London. Although his background is in computer
science, moving to South Africa has forced him to reconsider his
views about technology in general and HCI in particular.
March + April 2009
© 2009 ACM 1072-5220/09/0300 $5.00