tive than I expected.” So, what the local study could not show is what percentage of people give up trying to learn how the novel feedback works if it is not sufficiently intuitive. The local study showed that tactile feedback reduces distraction if it is used as the only modality, that is, when the screen is turned off. The in-the-large study, in turn, showed that when users enable vibration feedback, they turn off the screen more often. Hence, our local study showed the value of encouraging users to turn off the visual feedback, while the in-the-large study showed that the interface does encourage this behavior.
Users from around the world used the PocketNavigator in all kinds of locations. Not only do the participants represent a large population, but the usage contexts also cover diverse situations. Since, strictly speaking, an experiment is valid only for the context in which it was conducted, the local study cannot be generalized beyond Germans using the application in the center of Oldenburg, the city where the field study took place.

The in-the-large study provides the external validity that was lacking in the local study. However, we had to trade external for internal validity. To avoid negative ratings, we allowed users to turn the tactile feedback on and off at any time. Hence, each participant “selected” her own experimental condition (visual feedback only, tactile feedback only, or both). This experimental design is called a quasi-experiment and is a known threat to internal validity. Nevertheless, the two studies together allow us to combine results with both high internal and high external validity. This closes the gap and allows us to conclude that tactile interfaces, when used in daily life, truly have a positive impact on how much attention people pay to their environment.
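To make the quasi-experimental analysis more concrete, the sketch below shows how each participant's self-selected condition could be derived from logged settings. It is a minimal example with assumed field names and toy data; it is not the PocketNavigator's actual analysis code.

```python
from collections import defaultdict

# Toy session logs, roughly as an app deployed in-the-large might report them.
# The field names ("user", "tactile_on", "screen_on") are assumptions for illustration.
sessions = [
    {"user": "a", "tactile_on": True,  "screen_on": False},
    {"user": "a", "tactile_on": True,  "screen_on": True},
    {"user": "b", "tactile_on": False, "screen_on": True},
]

def classify(user_sessions):
    """Assign the condition a participant 'selected' through her own settings."""
    used_tactile = any(s["tactile_on"] for s in user_sessions)
    used_visual = any(s["screen_on"] for s in user_sessions)
    if used_tactile and used_visual:
        return "both"
    return "tactile only" if used_tactile else "visual only"

by_user = defaultdict(list)
for s in sessions:
    by_user[s["user"]].append(s)

conditions = {user: classify(logs) for user, logs in by_user.items()}
print(conditions)  # e.g., {'a': 'both', 'b': 'visual only'}
```

Because the groups emerge from users' own choices rather than random assignment, any comparison between them inherits the self-selection bias described above.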
External Validity for Mobile HCI and Beyond

HCI research, and even mobile HCI research, often focuses on highly controlled experiments conducted in sterile environments. Often, we develop new interaction techniques, then we design the studies, and in the end we recruit our colleagues, students, and peers as participants. One could argue that we as a discipline mainly investigate how HCI researchers interact with digital devices in laboratories. This kind of problem is not new. It is common for psychology students to participate in experiments at their university to earn credit points for their degree. Consequently, the population best studied by psychologists is psychology students, and the best-studied context of use is the university lab. Mobile HCI researchers risk a similar judgment from neighboring disciplines.

But it’s not for a lack of alternative methods. A number of approaches for studying a wider population have been proposed in the past; online questionnaires, crowdsourcing, and games with a purpose are some examples. However, they hardly increase external validity, are usually still limited to small, biased samples, and, most important for us, do not address the mobile domain.

For the two large-scale studies described here, we assume high external validity while still maintaining reasonable internal validity. Combined with a large sample, we assume that individual differences and contextual effects are factored out. A growing body of work uses the same approach: Mobile games, apps, and widgets are used as the experiment’s apparatus and are published via application stores.
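To give a rough sense of what such an app-as-apparatus can look like, the following sketch logs anonymized usage events to a local file for later collection. The event names and fields are illustrative assumptions, not the logging scheme of the PocketNavigator or of any of the cited studies.

```python
import json
import time
import uuid
from pathlib import Path

LOG_FILE = Path("usage_log.jsonl")   # hypothetical local log, uploaded in batches later
DEVICE_ID = uuid.uuid4().hex         # anonymous identifier; no personal data is stored

def log_event(name, **details):
    """Append one usage event as a JSON line with a timestamp and anonymous device ID."""
    record = {"t": time.time(), "device": DEVICE_ID, "event": name, **details}
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Events an experiment-in-the-large might record:
log_event("feedback_changed", tactile=True, visual=False)
log_event("screen_state", on=False)
```

Aggregated over thousands of installations, such simple event streams are what make the comparisons described above possible.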
This approach can be a viable tool to supplement existing HCI research practices. Research in-the-large can provide our field with the external validity that current practices lack and can overcome the focus on students that is evident today. And it is not even limited to the mobile domain: App stores for desktop computers and TVs have recently become popular and are ready to be exploited by HCI researchers.
ENDNOTES:
1. Bergstrom-Lehtovirta, J., Oulasvirta, A., and Brewster, S. The effects of walking speed on target acquisition on a touchscreen interface. Proc. of the Inter. Conf. on Human-Computer Interaction with Mobile Devices and Services. 2011.
2. McMillan, D., Morrison, A., Brown, B., Hall, M., and Chalmers, M. Further into the wild: Running worldwide trials of mobile systems. Proc. of the Conf. on Pervasive Computing. 2010.
3. Henze, N., Pielot, M., Poppinga, B., Schinke, T., and Boll, S. My app is an experiment: Experience from user studies in mobile app stores. Inter. Journal of Mobile Human Computer Interaction. 2012.
4. Park, Y.S., Han, S.H., Park, J., and Cho, Y. Touch key design for target selection on a mobile phone. Proc. of the Inter. Conf. on Human-Computer Interaction with Mobile Devices and Services. 2008.
5. Henze, N., Rukzio, E., and Boll, S. 100,000,000 taps: Analysis and improvement of touch performance in the large. Proc. of the Inter. Conf. on Human-Computer Interaction with Mobile Devices and Services. 2011.
6. Pielot, M., Poppinga, B., Heuten, W., and Boll, S. PocketNavigator: Studying tactile navigation systems in-situ. Proc. of the Conf. on Human Factors in Computing Systems. 2012.
ABOUT THE AUTHORS

Niels Henze is a senior researcher at the University of Stuttgart, Germany. He received his Ph.D. from the University of Oldenburg for his work on camera-based mobile interaction for physical objects. Henze is interested in large-scale studies using mobile application stores, interlinking physical objects and digital information, and multimodal interfaces.
Martin Pielot is an associate researcher at Telefónica Research, Barcelona, Spain. He received his Ph.D. on conveying spatial information via tactile displays in 2012. Pielot is interested in large-scale studies as a means to study his research on non-visual and ambient interfaces in-situ and in the wild.
DOI: 10.1145/2427076.2427084
© 2013 ACM 1072-5520/13/03 $15.00