for next year. Some program committee members remarked that for the
first time, the review process left them
energized rather than drained.
Conference attendance increased
80% to a record 657. Over one-third of
attendees responded to a post-conference survey. Of those expressing an
opinion, 94% felt the new process improved the conference. The community
building that was once the raison d’être
of conferences was arguably strengthened. Authors and reviewers sent many
positive comments, exemplified by the
postcard shown earlier in this column.
Challenges. Some reviewers found
that multiple reviewing sessions for
one conference were taxing, especially since reviewing fell in the summer, when vacations were scheduled. An operational challenge was that despite
good intentions, PC members and reviewers with strongly entrenched habits born of years of committee service
could find it difficult to adjust to new
ways of working.
The greatest benefit of the revise-and-resubmit approach is that rather
than rejecting many papers that need
to be polished or fixed, reviewers can
coach the authors to produce higher-quality versions fit for publication.
Ironically, a challenge to employing
this approach is the perception that a
higher acceptance rate signals low conference quality. Despite a widespread
view that quality had risen, some researchers fear that external referees
will look unfavorably on the acceptance rate. Many senior researchers
discount the selectivity = quality equation, but it is built into the assessment
practices of some universities and has
traction among junior researchers who
like to think that decisions are based
on visible, objective data. Acceptance
rate as a signifier of quality has a sound
historical basis.
In 1999, some peer-reviewed conferences were formally recognized as sources of high-quality computer science research in the U.S.6 However, not all conferences in the U.S., and few elsewhere, stressed polished research; they remained more inclusive, providing authors with feedback on work in progress toward journal publication. A low acceptance rate (rejecting 75% or more of the members' proffered work) signaled that a conference emphasized quality over community-building inclusiveness.
Another concern is that many people enjoy single-track conferences.
Raising the acceptance rate requires
adding tracks, lengthening a conference, or allocating less presentation
time to papers. CSCW had already
moved on from being single-track.
As the field grew and specializations
developed, trade-offs arose. With
more submissions and more varied
submissions, rejection rates were
pushed up, reviewing became more
stressful and less uniform, and incremental advances competed for space
with new ideas. CSCW shifted to
multiple tracks and a tiered program
committee. Other conferences have
maintained a single track and driven
acceptance rates into single or low
double digits. Many reports of conference stress published in Communications^d come from these fields. High
rejection rates foster disaffection and
drive authors to other conferences,
dispersing the literature and undermining the sense of community that
the single track initially created.
Finally, there is the perennial large-conference challenge of matching reviewers to submissions. We devised a comprehensive set of keyword/topic areas and let associate chairs bid on submissions.
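As a purely hypothetical illustration of the bidding step, here is a minimal Python sketch that honors the strongest bids first. The chairs, bid values, and load limits are invented; the column does not describe the actual assignment mechanism.

    from collections import defaultdict

    # Invented bids: (associate_chair, submission) -> strength, 3 = eager.
    bids = {
        ("ac1", "s1"): 3, ("ac1", "s2"): 1,
        ("ac2", "s1"): 2, ("ac2", "s2"): 3,
        ("ac3", "s1"): 1, ("ac3", "s2"): 2,
    }

    def assign(bids, max_load=1, coverage=1):
        """Greedily satisfy the strongest bids first, subject to each
        chair's load limit and each submission's coverage need."""
        load = defaultdict(int)       # submissions assigned per chair
        assigned = defaultdict(list)  # chairs assigned per submission
        for (chair, sub), strength in sorted(bids.items(), key=lambda kv: -kv[1]):
            if load[chair] < max_load and len(assigned[sub]) < coverage:
                load[chair] += 1
                assigned[sub].append(chair)
        return dict(assigned)

    print(assign(bids))  # {'s1': ['ac1'], 's2': ['ac2']}

A real deployment would add conflict-of-interest checks and fall back on the keyword/topic match where bids are sparse; the sketch only conveys why bids give the matcher a signal that topic keywords alone do not.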
Nevertheless, a match based on topic often proves poor due to differences in preferred method, theoretical orientation, or other factors.
Some reviewers, who might be called 'annihilators,' consistently rate papers lower than others who handle the same submissions; other reviewers are 'Santa Clauses' who rate them consistently higher. Statistical normalization does not fully compensate: adjusting an annihilator's scores upward does not give a submission the essential advocate it needs.
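To make concrete what such normalization does, and does not, accomplish, here is a minimal Python sketch. The reviewers, papers, and scores are invented; per-reviewer z-scoring is one common normalization, not necessarily what any given conference used.

    from statistics import mean, stdev

    # Invented raw scores on a 1-5 scale; each reviewer handled the
    # same three submissions.
    scores = {
        "annihilator": {"paper_a": 2.0, "paper_b": 1.5, "paper_c": 2.5},
        "santa_claus": {"paper_a": 4.5, "paper_b": 4.0, "paper_c": 5.0},
        "typical":     {"paper_a": 3.5, "paper_b": 2.0, "paper_c": 4.5},
    }

    def normalize(ratings):
        """Z-score one reviewer's ratings: subtract the reviewer's own
        mean and divide by their spread, putting harsh and generous
        scales on a comparable footing."""
        mu, sigma = mean(ratings.values()), stdev(ratings.values())
        return {paper: (score - mu) / sigma for paper, score in ratings.items()}

    for reviewer, ratings in scores.items():
        print(reviewer, {p: round(z, 2) for p, z in normalize(ratings).items()})

Normalization makes the annihilator's relative preferences visible (paper_c comes out ahead), but an adjusted number is still not an advocate: nothing in the arithmetic produces the enthusiastic champion who carries a submission through committee discussion.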
Still other reviewers see some merit and some weakness in everything and rate every submission borderline. Add Anderson's observation that differences in quality are very small over a broad range of submissions, and luck in reviewer assignment can be a larger factor in outcomes than submission quality.
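Anderson's point lends itself to a small simulation; the quality gap and noise figures below are invented, chosen only so that reviewer variability dwarfs the true difference between two middling submissions.

    from random import gauss, seed

    seed(1)
    TRIALS = 10_000
    QUALITY_GAP = 0.2   # hypothetical true gap between two submissions
    REVIEWER_SD = 1.0   # hypothetical spread due to reviewer assignment

    upsets = 0
    for _ in range(TRIALS):
        stronger = 3.0 + QUALITY_GAP + gauss(0, REVIEWER_SD)
        weaker = 3.0 + gauss(0, REVIEWER_SD)
        if weaker > stronger:
            upsets += 1

    print(f"weaker paper outscores stronger one in {upsets / TRIALS:.0%} of trials")

Under these made-up numbers the nominally weaker submission outscores the stronger one close to half the time, so which paper 'wins' is nearly a coin flip.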
Next, we review other proposals and experiments to improve conference management.
d Links to Communications commentaries,
Snowbird panels, WOWCS’08 notes, and the
Dagstuhl workshop are at http://research.microsoft.com/~jgrudin/CACMviews.pdf.