percent) and open interviews (12 percent), as well as user observation (17 percent), analysis of video recordings (17 percent), and diaries (11 percent). We also found
an emerging group of constructive or projective methods, such
as probes, collages/drawings, and
photographs. Objective measurement of UX via psychophysiology
is rarely used. Thus, data on UX
is mostly collected with methods borrowed from traditional
HCI and usability research.
Most researchers agree that
UX occurs before, during, and
after interaction with products.
Therefore, we analyzed in which of these phases researchers assess UX. Measurements of UX before interaction are rare (20 percent), while measurements after interaction are the most frequent (70 percent). Correspondingly, the
most frequent pattern is the combination of during and after measurements—similar to traditional
usability research, where users are
observed when interacting, and
satisfaction is measured afterward.
Anticipated use assumes an important role in the field of UX and is a
major difference from traditional
HCI. We took a closer look at what
researchers measure before interaction occurs and found that only
five studies looked at users’ expectations about products. Thus, it seems that measurements of UX before interaction are still widely disregarded. The
analysis of temporal aspects shows
that current UX research contains
no truly longitudinal studies. Some
papers study experience over several weeks, but projects that cover
typical product life cycles of several
months or even years are missing.
The choice of methods in UX
research suggests a revival of the
debate about qualitative and quantitative research methods. In particular we see that it leads to dichotomous research. On the one hand, some researchers study very particular use situations, emphasizing richness of description and using mainly qualitative research methods (we call those uniqueness studies).
On the other hand, some studies model the dimensions of UX, emphasizing findings that generalize and using mostly quantitative research methods (we call those dimension studies). Some studies overemphasize their methodological stance within either of these methodologies to the extent of damaging research quality. Many uniqueness papers do not report interview questions or protocols, rarely describe data analysis methods, focus mostly on generic UX, and contribute to the dimensionality explosion mentioned earlier. Many dimension papers do not attempt to study complex, ongoing interactions (often using screenshots or studying very short interactions), and some say very little about experience, offering few in-depth reports on UX. Unfortunately, few studies combine qualitative and quantitative methods (or uniqueness and dimension studies), and we find a sad lack of cross-references between the corresponding groups of papers.
Work on user-centered design and usability research emphasizes that we need to look at behavior—what people do—rather than listen to what they say and what they say they do. The tension between these two approaches needs to be understood much better.
Our review has characterized
current foci and blind spots in
UX research. Future work must build on the interesting research this movement has generated. At the same time, we have
discussed issues that have tended
to be overlooked in UX research
and that are equally important to
address in future work.
ABOUT THE AUTHORS
Javier Bargas-Avila holds a Ph.D. in cognitive psychology. He has published over 20 peer-reviewed papers in HCI journals and conferences on topics such as user satisfaction, mental models in website perception, visual aesthetics, and webform usability. Since 2011 he has worked for Google, where he currently focuses on YouTube internationalization, monetization, and analytics.
Kasper Hornbæk is a professor with special duties in human-centered computing at the University of Copenhagen; his B.Sc., M.Sc., and Ph.D. were also completed at the University of Copenhagen. His research focuses on usability/
November + December 2012
© 2012 ACM 1072-5520/12/11 $15.00