photo came up in first place more often. Luckily, in this case the pattern was easy to spot. When we condensed the offerings to one package with a photograph and one without, the one with a photograph received three times the votes.
Perhaps an even more obvious problem with the ranking technique, as well as the “like it best” question, is that while a design is still in development, what people choose can be far less important than why. The design isn’t final; it can be changed. Understanding why provides more food for the creative process and information that a design team can act on. Unfortunately, the why issues are often ignored or poorly addressed.
3. Don’t worry about the average person. The average, although a basic statistical concept used in marketing, in itself does not contain any information on another basic statistical concept: probability.
Occurrences in nature, even chance games like coin tossing, center on probability. While the chance of tossing heads four times in a row is remote (1 in 16), toss enough coins and it will eventually happen. Scientists typically employ the convention of a 95 percent confidence level for acceptance of a finding, or conversely, 1 out of 20 for rejection. A hypothesis is considered “proven” if the chance of seeing a specific result in a study by accident is 5 percent or less (probability is typically notated as p < .05). Analyses in marketing studies follow suit, looking for 95 percent confidence in the results.
In essence this means that if you toss heads four times in a row, science still considers the coin “fair.” But toss heads five times in a row (where the chance is 1 in 32, or just over 3 percent) and that coin, or the person tossing it, has just been placed under suspicion. Although we know it could happen by chance, science sets its threshold at 5 percent, which translates to wanting to be correct at least 19 out of 20 times. In doing so, it accepts a small but real chance of error.
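The coin-toss arithmetic above can be sketched in a few lines. This is a minimal illustration of the conventional 5 percent threshold the text describes; the function name and script are my own, not from the article.

```python
# Probability of n heads in a row with a fair coin, compared against
# the conventional p < .05 significance threshold described in the text.

ALPHA = 0.05  # the conventional 5 percent cutoff

def p_heads_in_a_row(n: int) -> float:
    """Probability that a fair coin lands heads n times in a row."""
    return 0.5 ** n

for n in (4, 5):
    p = p_heads_in_a_row(n)
    verdict = "under suspicion (p < .05)" if p < ALPHA else "still considered fair"
    print(f"{n} heads in a row: p = {p:.5f} -> {verdict}")
```

Four heads (1 in 16, p = .0625) clears the threshold; five heads (1 in 32, p ≈ .031) does not, which is exactly the boundary the text is drawing.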
Is There a Solution?
The solution can be discussed in a “little picture/big picture” way. A simple solution to the first two issues is to think in analog terms: ask people to rate the products on a scale. Rate, not rank. And be sure to ask why. The results will be infinitely more informative. Love, hate, indifference, and ties will all be spelled out. The design team will get a sense of what they are up against in terms of usability, product performance, or consumer perception, which is the ultimate purpose. Designers don’t need closed-ended questions and answers; they need to set direction and cultivate a point of view.
As for the average person, one homogenized fictional person will make a lovely presentation slide but in reality will get us nowhere. It takes a much more thorough understanding of people’s diversity in needs and desires to develop design parameters.
In the bigger picture, design research needs to expand its techniques to more fully understand the potential of design. It’s bad enough that some of these marketing-based methods continue to be practiced in a rote manner. (Delving into technical discussions involving both logic and statistics can take many people, in marketing and design alike, far outside their comfort zone.) But blindly applying marketing methods to design creates a double whammy that should be avoided at all costs.
January + February 2010