FORUM EVALUATION AND USABILITY

development of a product. Recently our group developed a small pod device for use with teenagers. A logical process would be to evaluate this pod with teenagers from a school. In examining this evaluation, the questions could be answered as follows:

What are we aiming to evaluate? A pod that we have made.
• Why this product? Excuse answer: Because it will eventually save energy, which will be great for the planet.
• Why this product? Honest answer: Because we have made this and want to finish the project that funded it.

What methods are we using? The Fun Toolkit.
• Why these methods? Excuse answer: They are specially designed for use with children.
• Why these methods? Honest answer: So we can also use the data for a paper on the Fun Toolkit.

Which children will we work with? The teenagers in the local school.
• Why these children? Excuse answer: They can be great evaluators.
• Why these children? Honest answer: They were convenient to recruit.

This rather tongue-in-cheek completion of the checklist exposes a subplot of the user evaluation: the desire to get some data to check out the method.

Completing the second set of questions proceeds as follows:
• Why are we doing this project? Because we believe in it. What do we tell the children? The story about how we were attracted to the funding and about how excited we are to make a small difference.
• Who is funding the project? Research councils, taxpayers, government. What do we tell the children? As above, but make it clear to them.
• What might happen in the long term? The product might go on general sale. What do we tell the children? That they are contributing to scientific advancement.
• What might we publish? We might publish about the methods used. What do we tell the children? Explain how publishing works, about methods being developed, and about how we might evaluate them.

This second process highlights the need to clearly explain to children how research might be generated even from an evaluation study. Interestingly, it also brings up a by-product of working with children in this way: the opportunity to use evaluations with children as a means to expose them to scientific thinking.

The development and use of the CHECk1 and CHECk2 tools has resulted in our taking a much more child-centered approach to consent, in which we begin every study with an explanation of why these children are included, what research is, what the university does, who is funding the work, and where the work might end up. We have recently made it a requirement to, wherever possible, return to the children with results when these have appeared as academic papers or as products.

In our thoughts about where results might end up, which is always a very difficult question to answer, we have been looking at how to explain and justify to children where their contributions go in studies of children as participants in design [3]. In a usability study involving several children, our current view is that each child should be able to see the value of their contribution to the ultimate evaluation and development of any product. This raises questions about children being "used" simply to gather research data or test out a new product, so any adult evaluator has to be absolutely clear about the children's individual, as well as collaborative, contributions. At this juncture, the would-be evaluator has an important question to add to the CHECk1 list: beyond "Why these children?" is the question "Why ALL these children?" That is to ask: Can the inclusion of each child be justified and rationalized? Is each child clearly contributing, or are some children simply making up the numbers?

Children participating in usability studies and evaluation studies are not the same as adults. Their inclusion has to be justified, because they are less able to understand why they are participating. They need to be able to withdraw their consent, and to do that they have to be clearly informed. As participants in research (often, usability studies), they should understand the research as well as the evaluation activity. This should occur at the beginning of an evaluation study, because if these things cannot be reasonably justified, the evaluation should not take place. Children should be able to use methods they can relate to and understand, so their contributions can be meaningful to them as well as to the adult evaluator. These methods do need testing, but children need to be aware when that is the case. Individuals carrying out usability studies with children need to be trained in the ways of children and in the mechanisms around their contexts. Especially when studies are located at schools, adult evaluators must be familiar with how schools work and be sensitive to their needs.

Working with children is highly rewarding but comes with responsibilities. Opportunities to engage with children in evaluation and usability studies are also opportunities to introduce children to science and scientific thinking, to new technologies, and, in some cases, to higher education. For these reasons, it is important to get it right.

Endnotes
1. Markopoulos, P. et al. Evaluating Interactive Products for and with Children. Morgan Kaufmann, San Francisco, 2008.
2. Read, J.C. et al. CHECk: A tool to inform and encourage ethical practice in participatory design with children. CHI '13 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 2013, 187–192.
3. Read, J.C., Fitton, D., and Horton, M. Giving ideas an equal chance: Inclusion and representation in participatory design with children. Proc. of the 2014 Conference on Interaction Design and Children. ACM, New York, 2014, 105–114.

Janet Read is a professor of child computer interaction working in the U.K. She has been working in CCI for more than 15 years. She is the chair of the IFIP TC13 SIG on Interaction Design and Children and editor in chief of the International Journal of Child Computer Interaction.
→ jcread@uclan.ac.uk

DOI: 10.1145/2735710 © 2015 ACM 1072-5520/15/03 $15.00