of these are described in the literature
[1]. Recommendations for think-
aloud studies, modifications of survey
methods, uses of Wizard of Oz, and
new techniques including peer tutoring
are all discussed. The most essential
element in developing methods for use
with children is to pilot any method
with children of the appropriate age.
Many studies fail because the adult
evaluator is too far removed from the
children in terms of understanding
their vocabulary, their abilities, their
context, and their motivations, so the experience is at best poor and at worst damaging for the children
participating. These sorts of studies
may well expose many problems
with software and gather some half-
useful opinions, but they damage the
reputation of the CCI community
and do little to encourage children to
explore science and scientific inquiry,
which one would hope might be a
by-product of participating in a well-
structured usability study.
This leads to the third issue when
carrying out usability studies with
children. There is understandable
concern about the ethics of including
children as contributors to software
development or research. In the main,
the ethics of child involvement center on issues of consent. In university work,
there is generally a requirement
that work with children is cleared
through an ethics review process in
which adults are asked to explain how
children’s consent will be gained and to
detail what information will be given
to parents and children ahead of, and
during, the study. These processes are
intended to protect the institution (in
these cases, the university). Optimally,
the completion of an ethics form would
raise awareness of issues around the
informing and consent of children and
would result in the children being more carefully considered before any usability study. In reality, individuals often complete these forms by falling back on a set of familiar protocols.
Our work has sought to better
manage and understand the ethics
around children’s participation in
HCI research—as both designers and
evaluators. This has led us to develop a
protocol for examining our work that is
over and above a standard ethics form.
Our starting point has been what we
tell the children. In earlier work, we
would typically begin an evaluation
session by telling the children that we
needed their feedback for our software
development; this was true but was
a scant explanation of what we were
doing. On examination, we realized
that we were not being entirely
honest and that we should perhaps
talk more about research, about our
university, about funding, about their
participation, and about possible
future uses of the data or information
we were collecting. Two checklists,
CHECk1 and CHECk2, have been
developed that help us examine these
aspects, where we ask questions of
ourselves and consider the “honest”
reason as well as the “excuse” reason.
One such question is “Why are these
children chosen?” An excuse answer
for this might be “because we know
they can give us great feedback,” but
a more honest answer might be that
their school was the first to offer us a
chance to work with them. The first set
of questions (CHECk1), as applied to
evaluations, includes:
What are we aiming to evaluate?
• Why this product? (Excuse answer)
• Why this product? (Honest answer)
What methods are we using?
• Why these methods? (Excuse answer)
• Why these methods? (Honest answer)
Which children will we work with?
• Why these children? (Excuse answer)
• Why these children? (Honest answer)
The process of going through these
questions helps us examine what we will
say to the children so that in the second
checklist (CHECk2) we are asking
ourselves why we are doing things and
what we will tell the children [2].
• Why are we doing this project? What do we tell the children?
• Who is funding the project? What do we tell the children?
• What might happen in the long term? What do we tell the children?
• What might we publish? What do we tell the children?
Here is an example of how these
checklists might be applied. It is a
common idea in HCI and CCI to
carry out a user evaluation after the