FORUM EVALUATION AND USABILITY
way to the right answer. Further, in retrospect, the interpretation of the findings did not draw sufficient attention to the risks posed by the limits of the prototype. The response "We couldn't determine X, so we should do more research" was often given. Instead, "We couldn't determine X, so we should do more research. If that doesn't happen, the project risks Y by not truly understanding X" would have been more appropriate.

it. What client reads "additional testing is needed" and doesn't immediately think "Sure it is, consultant who wants more billable projects"? Because I was trying to avoid sounding greedy or slimy, I left the additional-testing recommendation as a single sentence instead of emphasizing the risk of our poor understanding.

CONCLUSION
So, is some UX better than none? The debate continues. While it's true that "even the worst test with the wrong user will show you important things you can do to improve your site" [7], the people conducting and acting on that test must also understand its limitations along with its advantages. They need to know they're seeing only some things, and that the risk of skipping additional testing is that important things may still be missed. For those of us who do understand, we need to do everything we can to ensure that project stakeholders don't get a false sense of security when practicing Shadow UX.

Endnotes
1. Earthy, J. Usability maturity model: Human-centeredness scale. Information Engineering Usability Support Centres, 1998.
2. Sauro, J. The system usability scale; https://measuringu.com/sus/
3. Comparing your net promoter score. Net Promoter Network; https://www.netpromoter.com/compare/
4. Beyer, H. and Holtzblatt, K. Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann, 1997.
5. Righi, C. Building the conceptual model and metaphor: The "3X3". In Design for People by People: Essays on Usability. R. Branaghan, ed. Usability Professionals' Association, Chicago, IL, 2001, 213–219.
6. Spool, J.M. Fast path to a great UX – increased exposure hours; https://articles.uie.com/user_exposure_hours/
7. Krug, S. Don't Make Me Think, Revisited: A Common Sense Approach to Web and Mobile Usability. New Riders, 2014, 114.

Danielle Cooley has been working in design research and strategy for more than 18 years with such companies as Hyundai, Pfizer, Graco, Enterprise Rent-a-Car, Fidelity, and

→ danielle@dgcooley.com