The Communications Web site, http://cacm.acm.org,
features more than a dozen bloggers in the BLOG@CACM
community. In each issue of Communications, we'll publish
selected posts or excerpts.
Follow us on Twitter at http://twitter.com/blogcacm
Rethinking the Systems
Tessa Lau launches a discussion about the acceptance criteria
for HCI systems papers at CHI, UIST, and other conferences.
Tessa Lau, "What Makes a Good HCI Systems Paper?"
http://cacm.acm.org/blogs/blog-cacm/86066
There has been much discussion on Twitter, Facebook, and in blogs
about problems with the reviewing system for HCI systems papers
(see James Landay's blog post, "I give
up on CHI/UIST," and the comment
thread at http://dubfuture.blogspot.com).
Unlike papers on interaction methods or new input devices, systems are
messy. You can’t evaluate a system
using a clean little lab study, or show
that it performs 2% better than the last
approach. Systems often try to solve a
novel problem for which there was no
previous approach. The value of these
systems might not be quantified until they are deployed in the field and
evaluated with large numbers of users.
Yet doing such an evaluation incurs a
significant amount of time and engineering work, particularly compared
to non-systems papers. The result,
observed in conferences like CHI and
UIST, is that systems researchers find
it very difficult to get papers accepted.
Reviewers reject messy systems papers
that don't have a thorough evaluation
of the system, or that don't compare
the system against previous systems
(which were often designed to solve a
different problem).
At CHI 2010 there was an ongoing
discussion about how to fix this problem. Can we create a conference/publishing process that is fair to systems
work? Plans are afoot to incorporate
iterative reviewing into the systems paper review process for UIST, giving authors a chance to have a dialogue with
reviewers and address their concerns.
However, I think the first step is to
define a set of reviewing criteria for
HCI systems papers. If reviewers don’t
agree on what makes a good systems
paper, how can we encourage authors
to meet a standard for publication?
Here’s my list:
˲ A clear and convincing description
of the problem being solved. Why isn’t
current technology sufficient? How
many users are affected? How much
does this problem affect their lives?
˲ How the system works, in enough
detail for an independent researcher
to build a similar system. Due to the
complexities of system building, it is
often impossible to specify all the
parameters and heuristics being used
within a 10-page paper limit. But the
paper ought to present enough detail
to enable another researcher to build
a comparable, if not identical, system.
I’d like to second your first recommendation.
I’ve reviewed a number of systems papers
that do not provide a sufficiently compelling
motivation or use case—why should I or
anyone care about this system? Without
this, the paper often represents technology
in search of a problem.
Now, having read Don Norman’s
provocative article “Technology First, Needs
Last: The Research-Product Gulf” in the