and worst of all, really bad for the respect that we command with other communities. SIGCHI needs to move away from bolstering up conference publications. It needs to use journals for journal stuff and conferences for conference stuff."a
He was wrong about the direction of computer science, and at least premature in diagnosing CHI's expiration. The point, though, is that he saw the problem as an American problem, affecting CHI but not European HCI.
Knock-on effects
This change in the complex ecology of
scholarly communication was followed
by a slow sequence of adjustments. ACM
and IEEE had considered conference
papers to be ephemeral, and expressly
allowed verbatim or minimally revised
republication in journals and transactions. With proceedings effectively
archived even before digital libraries
arrived, this policy was formally ended
early in the 1990s.
A significant consequence is that it
is increasingly difficult to evolve conference papers into journal articles. Publishers, editors, and reviewers expect
considerable new work, even new data,
to avoid a charge of self-plagiarism. Republishing the same work is undesirable, but we have inhibited the use of
review and revision cycles to clean up
conference papers, expand their literature reviews, and engage in the deeper
discussions that some feel are being
lost.
The pattern extends beyond systems.
I edited ACM Transactions on Computer-Human Interaction and serve on the editorial boards of Human-Computer Interaction, Interacting with Computers, and
ACM Computing Surveys. By my estimation, no more than 15% of the work published in highly selective HCI conferences later appears in journals. Journal
publication is not a prerequisite for being hired into leading research universities. Today, the major U.S. HCI journals
mostly publish work from Europe and
Asia, where conferences are less central.
Now let’s consider reviewing, a primary focus of discussion, before turning
to the impact of these changes on our
sense of community.
a Gilbert Cockton, email communication, Jan.
22, 2004.
Conference selectivity and effects on reviewing
In other fields, journals focus on identifying and improving research quality;
large conferences focus on community
building and community maintenance;
and workshops or small conferences
focus on member support through specialist discussions of work in progress.
This reflects Joseph McGrath’s division
of group activities into those focused on
production, team health, and member
support.3
When conferences became archival,
it was natural to focus on quality and
selectivity. Even with authors preparing
camera-ready copy, the expense of producing a proceedings was proportional
to its page count. Library sales were
a goal prior to the emergence of digital libraries in the late 1990s. Libraries
were more likely to shelve thinner proceedings, and needed to be convinced
the work had lasting value. These pressures drove down conference acceptance rates. In my field they dropped
from almost 50% to 15% before settling
in a range, 20%–25%, that is acceptably
selective to academic colleagues yet not
brutally discouraging to authors, we
hope.
But it is discouraging to have submissions rejected. I know few if any people
who submit with no hope of acceptance.
In most fields, conferences accept work
in progress. It is also discouraging when
we see a paper presented and immortalized in the digital library that seems less
worthy than a paper that was rejected.
Review processes are noisy, and more so
as the reviewer pool expands to include
graduate students and others. Multidisciplinary fields, with diverse methodologies and priorities, deliver especially
random outcomes.
Previous commentaries emphasized
that caution and incrementalism fare
better than innovation and significance
in conference assessments. An incremental advance has a methodology, a literature review, and a rationale for publication that were bulletproofed in the papers it builds on. We try to channel papers to the most expert reviewers in an area, but to them incremental advances loom larger than they will to others. With pressure to reject ~75% and differing views of what constitutes significant work, the minor flaws or literature omissions that inevitably accompany novel work become grounds
for exclusion. And in a zero-sum game
where conference publication leads to
academic advancement, a novel paper
can be a competitive threat to people
and paradigms, echoing concerns
about journal conservatism in other
fields.
Impact on community
A leading neuroscientist friend described the profession’s annual meeting as a “must-attend” event “where
people find out what is going on.” There
are 15,000 presentations and 30,000 attendees. The quality bar is low. It is a
community-building effort in a journal-oriented field.
In contrast, despite tremendous
growth in many CS specializations, attendance at many of our conferences
peaked or plateaued long ago. So has
SIG membership, as shown in the accompanying table. Conferences proliferate, dispersing organizational effort
and the literature, reducing a sense of
larger community.
In my field, CHI once had many vibrant communication channels—a
highly regarded newsletter, an interactive email discussion list, passionate
debates in the halls and business meetings of conferences, discussants for
paper sessions, and in the late 1990s
an active Web forum. All of them disap-