options for validating participants’ answers. Despite the thorough cleaning, some flawed questionnaire answers could have remained. Also, writing the questionnaire in English could have discouraged users not proficient in English.

The datasets of participating beta testers and regular users included different numbers of participants and were collected at different times. This could have influenced the number of participants using, say, Windows 10, as the study was conducted during a free-upgrade period. Moreover, the research was based on only the English versions of the software, missing customers who prefer other languages.
Working with security-software firm ESET, we conducted a large-scale comparison between beta testers and regular users of ESET’s main product. We focused on technological aspects of ESET’s user demographics and nearly 600,000 users’ self-reported computer self-efficacy.

The participating beta testers were early adopters of newer operating systems, and their distribution was significantly skewed toward the most current versions at the time, despite having limited time for Windows 10 migration. They also tended to be younger and more often male, perceived themselves as more skilled with their computers, and were more often IT technicians, supporting the “beta testers as geeks” stereotype. However, their hardware (platform, CPU performance, and RAM size) was similar to that of regular users, somewhat contradicting the popular image.
We found a striking difference in their countries of origin; of the top 10 most-represented countries, only three appeared in both subsamples.
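Differences like these can be checked with very little tooling. The following is a minimal sketch, in Python with hypothetical column names and toy data (not the study’s actual pipeline), of how a team might compare a categorical attribute, such as country of origin, between beta testers and regular users:

```python
# A minimal sketch (not the study's pipeline): compare how a categorical
# attribute is distributed among beta testers vs. regular users.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical per-user records: segment label plus the attribute to compare.
users = pd.DataFrame({
    "segment": ["beta"] * 4 + ["regular"] * 6,
    "country": ["CZ", "CZ", "US", "DE", "CZ", "US", "US", "SK", "DE", "US"],
})

# Contingency table: attribute categories (rows) x segments (columns).
table = pd.crosstab(users["country"], users["segment"])

# Chi-square test of homogeneity: could both segments plausibly come
# from the same underlying population?
chi2, p, _, _ = chi2_contingency(table)

# Cohen's w effect size: w = sqrt(chi2 / N). Unlike the p-value, it does
# not inflate with sample size, which matters with hundreds of thousands
# of users.
n = table.to_numpy().sum()
w = (chi2 / n) ** 0.5

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, Cohen's w = {w:.2f}")
```

At the scale of this study, almost any difference is statistically significant, so an effect-size threshold (for example, flagging only w at or above 0.1, a “small” effect in Cohen’s terms) is a more practical trigger for follow-up than the p-value alone.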
Overall, the study’s beta testers represented regular users reasonably well, and we did not observe a regular-user segment that would be underrepresented among beta testers. ESET’s approach of not filtering beta testers (“the more testers the better”), followed by analyses of selected observed differences, seems sufficient for developing its software products. For large international companies able to attract large numbers of beta testers, this may be the most efficient approach. However, for smaller, local, or less-well-established companies, this approach would probably not yield representative outcomes and could even shift development focus in the wrong direction.6
Our research produced the following actionable takeaways for software companies:

Using data. Data you can collect can help you learn who your users and beta testers are; consider country of origin, software and hardware configuration, and basic demographics;

Selecting testers. The fewer testers you have, the pickier you should be about their selection;

Identifying usability issues. When testing international products, ensure beta testers are culturally representative of regular users to help identify potential localization and cultural usability issues; and

Most important, testers should be representative of regular users; keep checking that this is the case, or pursue additional rigorous analyses, to reach the most credible and applicable conclusions possible.
For more, including a video, see
We thank Masaryk University (project MUNI/M/1052/2013) and Miroslav Bartosek for support, and the anonymous reviewers and Vit Bukac for valuable feedback.
1. Biffl, S., Aurum, A., Boehm, B., Erdogmus, H., and Grünbacher, P. Value-Based Software Engineering. Springer Science & Business Media, Berlin, 2006.
2. Chinn, M.D. and Fairlie, R.W. ICT use in the developing world: An analysis of differences in computer and Internet penetration. Review of International Economics 18, 1 (2010), 153–167.
3. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, Second Edition. Lawrence Erlbaum Associates, Inc., 1988.
4. Compeau, D.R. and Higgins, C.A. Computer self-efficacy: Development of a measure and initial test. MIS Quarterly 19, 2 (June 1995), 189–211.
5. Cuervo, M.R.V. and Menéndez, A.J.L. A multivariate framework for the analysis of the digital divide: Evidence for the European Union-15. Information & Management 43, 6 (Sept. 2006), 756–766.
6. Dolan, R.J. and Matthews, J.M. Maximizing the utility of customer product testing: Beta test design and management. Journal of Product Innovation Management 10, 4 (Sept. 1993), 318–330.
7. Downey, J.P. and Rainer Jr., R.K. Accurately determining self-efficacy for computer application domains: An empirical comparison of two methodologies. Journal of Organizational and End User Computing 21, 4 (2009), 21–40.
8. Dunahee, M., Lebo, H. et al. The World Internet Project International Report, Sixth Edition. University of Southern California Annenberg School Center for the Digital Future, Los Angeles, CA, 2016.
9. Field, A. and Hole, G. How to Design and Report Experiments. SAGE Publications, Thousand Oaks, CA, 2003.
10. Hill, T., Smith, N.D., and Mann, M.F. Communicating innovations: Convincing computer-phobics to adopt innovative technologies. NA-Advances in Consumer Research 13 (1986), 419–422.
11. Kanij, T., Merkel, R., and Grundy, J. An empirical investigation of personality traits of software testers. In Proceedings of the IEEE/ACM Eighth International Workshop on Cooperative and Human Aspects of Software Engineering (Florence, Italy, May 18). IEEE, 2015, 1–7.
12. Malhotra, N.K., Kim, S.S., and Agarwal, J. Internet users’ information privacy concerns: The construct, the scale, and a causal model. Information Systems Research 15, 4 (Dec. 2004), 336–355.
13. Mäntylä, M.V., Itkonen, J., and Iivonen, J. Who tested my software? Testing as an organizationally cross-cutting activity. Software Quality Journal 20, 1 (Mar. 2012).
14. Merkel, R. and Kanij, T. Does the Individual Matter in Software Testing? Technical Report. Centre for Software Analysis and Testing, Swinburne University of Technology, Melbourne, Australia, May 2010;
15. Murphy, C. Where’s Japan? Consumer and Shopper Insights (Sept. 2011). McKinsey & Company, New York.
16. Ono, H. and Zavodny, M. Digital inequality: A five-country comparison using microdata. Social Science Research 36, 3 (Sept. 2007), 1135–1155.
17. Pan, J. Software testing. Dependable Embedded Systems 5 (Spring 1999); https://pdfs.semanticscholar.
18. PassMark Software. CPU Benchmarks; https://www.
19. Perino, J. 6 Different Types of Betabound Testers: Which Are You? Sept. 11, 2014; http://www.betabound.
20. Sauer, J., Seibel, K., and Rüttinger, B. The influence of user expertise and prototype fidelity in usability tests. Applied Ergonomics 41, 1 (Jan. 2010), 130–140.
21. Sheehan, K.B. Toward a typology of Internet users and online privacy concerns. The Information Society 18, 1 (2002), 21–32.
22. uTest, Inc. The Future of Beta Testing: 6 Tips for Better Beta Testing. White Paper. Southborough, MA, Sept. 2012; http://www.informationweek.com/
23. Venkatesh, V., Morris, M.G., Davis, G.B., and Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Quarterly 27, 3 (Sept. 2003), 425–478.
24. Wallace, S. and Yu, H.-C. The effect of culture on usability: Comparing the perceptions and performance of Taiwanese and North American MP3 player users. Journal of Usability Studies 4, 3 (May 2009), 136–146.

Vlasta Stavova (email@example.com) is a Ph.D. candidate in the Centre for Research on Cryptography and Security in the Faculty of Informatics at Masaryk University, Brno, Czech Republic.

Lenka Dedkova (firstname.lastname@example.org) is a postdoctoral researcher in the Institute for Research on Children, Youth and Family in the Faculty of Social Sciences at Masaryk University, Brno, Czech Republic.

Martin Ukrop (email@example.com) is a Ph.D. candidate in the Centre for Research on Cryptography and Security in the Faculty of Informatics at Masaryk University, Brno, Czech Republic.

Vashek Matyas (firstname.lastname@example.org) is a professor in the Centre for Research on Cryptography and Security in the Faculty of Informatics at Masaryk University, Brno, Czech Republic.

Copyright held by the authors.
Publication rights licensed to ACM. $15.00