acceptance-rate data, which might also
be less meaningful, given the multi-review cycle common in journals.
Figure 5 suggests that citation count
for the top 10% of submitted papers follows a trend similar to that of the full
proceedings (F(1, 5165) = 149.5, p < .001), with generally higher counts at lower acceptance rates. This result indicates that filtering alone does not fully explain the correlation between citation count and acceptance rate; other factors (such as signaling) play a role.
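To make the form of the reported statistic concrete, the following Python sketch shows one way such a trend could be tested with a simple linear regression of citation count on acceptance rate; the data and variable names are hypothetical, not the study's records.

# Illustrative sketch only (hypothetical data): test whether citation
# counts trend with acceptance rate, reporting an F statistic of the
# kind cited above. A per-paper analysis would use one row per paper.
from scipy import stats

acceptance_rate = [0.12, 0.17, 0.22, 0.28, 0.33, 0.41, 0.47, 0.55]
avg_citations = [11.2, 12.5, 9.8, 8.1, 7.4, 6.0, 5.2, 4.9]

result = stats.linregress(acceptance_rate, avg_citations)

# For simple linear regression, the overall F statistic with
# (1, n - 2) degrees of freedom equals the square of the slope's t.
n = len(acceptance_rate)
f_stat = (result.slope / result.stderr) ** 2
print(f"slope = {result.slope:.2f}, F(1, {n - 2}) = {f_stat:.1f}, p = {result.pvalue:.3g}")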
Figure 5. Average citation count vs. acceptance rate within two years of publication, top 10% of submissions (average citations shown separately for conferences and journals; y-axis: number of citations; x-axis: acceptance rate/venue type).
Combining the results in Figures 3
and 5 provides further insight into the
relationship between acceptance rate
and citation count. For conferences
with acceptance rates over 20%, the citation numbers in both figures drop almost consistently as the acceptance rate increases, suggesting that in this range a higher acceptance rate costs a conference citations not only for the proceedings as a whole but also for its best submitted papers. Either higher-quality papers are submitted to such conferences less frequently, or those that are submitted go uncited because readers do not explore these conferences as often as they explore lower-acceptance-rate conferences to find them.
The case for conferences with acceptance rates below 20% is more intriguing. Note that the lower impact of the 10%–15% group compared with the 15%–20% group in Figure 5 is statistically significant (t = 3.21, p < .002). That is, the top-cited papers from 15%–20%-acceptance-rate conferences are cited more often than those from 10%–15% conferences. We hypothesize that an extremely selective but imperfect review process (as review processes always are) has filtered out submissions that would have delivered impact if published. This hypothesis matches the common speculation, voiced by, among others, former ACM President David Patterson, that highly selective conferences too often choose incremental work at the expense of innovative breakthrough work.3
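As a concrete illustration of the two-group comparison described above (a sketch with invented citation counts, not the study's data), such a t-test could be run as follows:

# Hypothetical example of comparing top-cited papers from 10%-15% vs.
# 15%-20% acceptance-rate venues; the counts below are invented.
from scipy import stats

citations_10_15 = [14, 9, 22, 17, 11, 8, 19, 13]
citations_15_20 = [21, 15, 30, 18, 25, 12, 27, 16]

t_stat, p_value = stats.ttest_ind(citations_15_20, citations_10_15)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")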
Alternatively, extremely low acceptance rates might discourage submissions by authors who dislike and avoid
competition or the perception of there
being a “lottery” among good papers for
a few coveted publication slots. A third explanation is that extremely low acceptance rates cause a conference's proceedings to be of such limited focus that other researchers stop checking it regularly and thus never cite it. We consider all three to be plausible explanations; intuitively, all would hurt the impact of lower-acceptance-rate conferences more than they would hurt higher-acceptance-rate ones.
Our results have several implications:
First and foremost, computing researchers are right to view conferences
as an important archival venue and use
acceptance rate as an indicator of future impact. Papers in highly selective
conferences—acceptance rates of 30%
or less—should continue to be treated
as first-class research contributions
with impact comparable to, or better
than, journal papers.
Second, we hope to bring to the attention of conference organizers and program committees the insight that conference selectivity does have a signaling value beyond simply separating good work from bad. Adopting the right selectivity level helps attract better submissions and more citations. Acceptance rates of 15%–20% seem optimal for generating the highest number of future citations for both the proceedings as a whole and the top papers submitted, though we caution that this guideline is based on ACM-wide data, and individual conferences should consider their goals and the norms of their subdisciplines in setting target acceptance rates. Furthermore, many conferences have goals separate from generating citations, and many high-acceptance-rate conferences might do a better job of providing feedback on early ideas, supporting networking among attendees, and bringing together different specialties.
This work was supported by National
Science Foundation grant IIS-0534939.
We thank our colleague John Riedl
of the University of Minnesota for his
valuable insights and suggestions.
1. Garfield, E. Citation analysis as a tool in journal evaluation. Science 178, 4060 (Nov. 1972), 471–479.
2. National Research Council. Academic Careers for Experimental Computer Scientists and Engineers. U.S. National Academy of Sciences Report, Washington, D.C., 1994.
3. Patterson, D.A. The health of research conferences and the dearth of big idea papers. Commun. ACM 47, 12 (Dec. 2004), 23–24.
Jilin Chen is a doctoral student in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities.
Joseph A. Konstan is Distinguished McKnight Professor and Distinguished University Teaching Professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities.