peared. The CHI conference gets more
submissions, but attendance peaked
years ago. When a small, relatively polished subset of work is accepted, what is
there to confer about?
High rejection rates undermine community in several ways. People don’t retain quite the same warm feeling when
their work is rejected. Without a paper
to give, some do not receive funding to
attend. Rejected work is revised and
submitted to other conferences, feeding conference proliferation, diverting
travel funding, and dispersing volunteer
efforts in conference management and
reviewing. In addition, high selectivity
makes it difficult for people in related
fields to break in—especially researchers from journal-oriented fields or countries, who are not used to polishing conference submissions to our level.
A further consequence is that computer scientists do not develop the skills
needed to navigate large, community-building conferences. At our conferences, paper quality is relatively uniform
and the number of parallel sessions
small, so we can quickly choose what to
attend. In contrast, randomly sampling
sessions at a huge conference with 80%
acceptance leads us to conclude that it
is a junk conference. Yet with a couple
hours of preparation, combing the many
parallel sessions for topics of particular
interest, speakers of recognized esteem,
and best paper nominations, and then
planning meetings during some sessions, one can easily have as good an
experience as at a selective conference.
But it took me a few tries to discover this.
Courtesy of Moore’s Law, our field enjoys a constant flow of novelty. If existing
venues do not rapidly shift to accommodate new directions, other outlets will
appear. Discontinuities can be abrupt.
Our premier conference for many years,
the National Computer Conference, collapsed suddenly two decades ago, bringing down the American Federation of Information Processing Societies (AFIPS),
then the parent organization of ACM
and IEEE. Over half of all ACM A.M.
Turing Award winners published in the
AFIPS conferences. Most of those published single-authored papers. Yet the
AFIPS conference proceedings disappeared, until they were recently added to
the ACM Digital Library. The field moved
on—renewal is part of our heritage. But
perhaps we can smooth the process.
Having turned our conferences into
journals, we must find new ways to
strengthen community. Rolling back
the clock to the good old heyday of journals, ignoring changes wrought by technology and time, seems unlikely to happen. For one thing, it would undermine
careers built on conference publication.
More to the point, computer science in
the U.S. responded first to technologies
that enable broad dissemination and
archiving. Other countries are now following; other disciplines will also adapt,
one way or another. Instead of looking
back, we can develop new processes and
technologies to address challenges that
emerged from exploiting the technologies of the 1980s.
With storage costs evaporating, we
could separate quality determination
from participation by accepting most
conference submissions for presentation and online access, while distinguishing ~25% as “Best Paper Nominations.” Making a major conference
more inclusive could pull participation
back from spin-off conferences.
A more radical possibility is inspired
by the revision history and discussion
pages of Wikipedia articles. Authors
could maintain the history of a project as it progresses through workshop,
conference, and journal or other higher-level accreditation processes. Challenges would have to be overcome, but
such an approach might ameliorate
reviewer load and multiple publication
burdens—or might not.
We are probably not approaching the
bottom of a “death spiral.” But when
AFIPS and the National Computer Conference collapsed, the transition from
profitable success to catastrophe was
remarkably rapid. Let’s continue this
discussion and keep history from repeating.
Jonathan Grudin ( firstname.lastname@example.org) is a member
of the Adaptive Systems and Interaction Group at
Microsoft Research in Redmond, WA.
Copyright held by author.