tion “Would you accept this paper if it
was entirely up to you?” toward a more
constructive role of characterizing—
and indeed, profiling—the paper under
submission. Put differently, besides
suggestions for improvement to the authors, the reviewers attempt to collect
metadata about the paper that is used
further down the pipeline to decide
the most suitable publication venue. In
principle, this would make it feasible to
decouple the reviewing process from individual venues, something that would
also enable better load balancing.46 In such a system, authors and
reviewers would be members of some
central organization, which has the authority to assign papers to multiple publication venues—a futuristic scenario,
perhaps, but it is worth thinking about
the peculiar constraints that our current
conference- and journal-driven system
imposes, and which clearly leads to a
sub-optimal situation in many respects.
The computational methods we described in this article have been used
to support other academic processes
outside of peer review, including a
personalized conference planner app
for delegates,g an organizational profiler,36 and a personalized course recommender for students.41 The accompanying
table presents a few other possible future directions for computational support of academic peer review itself. We
hope that they, along with this article,
stimulate our readers to think about
ways in which the academic peer review process—this strange dance in
which we all participate in one way or
another—can be future-proofed in a
sustainable and scalable way.
References
1. André, P., Zhang, H., Kim, J., Chilton, L.B., Dow, S.P. and Miller, R.C. Community clustering: Leveraging an academic crowd to form coherent conference sessions. In Proceedings of the First AAAI Conference on Human Computation and Crowdsourcing (Palm Springs, CA, Nov. 7–9, 2013). B. Hartmann and E. Horvitz, eds. AAAI, Palo Alto, CA.
2. Balog, K., Azzopardi, L. and de Rijke, M. Formal models
for expert finding in enterprise corpora. In Proceedings
of the 29th Annual International ACM Conference on
Research and Development in Information Retrieval
(2006). ACM, New York, NY, 43–50.
3. Benferhat, S. and Lang, J. Conference paper
assignment. International Journal of Intelligent
Systems 16, 10 (2001), 1183–1192.
4. Blei, D.M., Ng, A.Y. and Jordan, M.I. Latent Dirichlet allocation. J. Mach. Learn. Res. 3 (Mar. 2003), 993–1022.
5. Bornmann, L., Bowman, B., Bauer, J., Marx, W.,
Schier, H. and Palzenberger, M. Standards for using
bibliometrics in the evaluation of research institutes.
Next Generation Metrics, 2013.
6. Boxwala, A.A., Dierks, M., Keenan, M., Jackson, S.,
Hanscom, R., Bates, D. W. and Sato, L. Review paper:
Organization and representation of patient safety
data: Current status and issues around generalizability
and scalability. J. American Medical Informatics
Association 11, 6 (2004), 468–478.
7. Brixey, J., Johnson, T. and Zhang, J. Evaluating a medical
error taxonomy. In Proceedings of the American Medical
Informatics Association Symposium, 2002.
8. Charlin, L. and Zemel, R. The Toronto paper matching
system: An automated paper-reviewer assignment
system. In Proceedings of ICML Workshop on Peer
Reviewing and Publishing Models, 2013.
9. Charlin, L., Zemel, R. and Boutilier, C. A framework
for optimizing paper matching. In Proceedings of the
27th Annual Conference on Uncertainty in Artificial
Intelligence (Corvallis, OR, 2011). AUAI Press, 86–95.
10. De Roure, D. Towards computational research objects.
In Proceedings of the 1st International Workshop
on Digital Preservation of Research Methods and
Artefacts (2013). ACM, New York, NY, 16–19.
11. Deng, H., King, I. and Lyu, M.R. Formal models for expert
finding on DBLP bibliography data. In Proceedings of
the 8th IEEE International Conference on Data Mining
(2008). IEEE Computer Society, Washington, D.C., 163–172.
12. Devedzić, V. Understanding ontological engineering.
Commun. ACM 45, 4 (Apr. 2002), 136–144.
13. Di Mauro, N., Basile, T. and Ferilli, S. Grape: An expert
review assignment component for scientific conference
management systems. Innovations in Applied Artificial
Intelligence. LNCS 3533 (2005). M. Ali and F. Esposito,
eds. Springer, Berlin Heidelberg, 789–798.
14. Dumais S. T. and Nielsen, J. Automating the assignment
of submitted manuscripts to reviewers. In Proceedings
of the 15th Annual International ACM SIGIR Conference
on Research and Development in Information Retrieval
(1992). ACM, New York, NY, 233–244.
15. Fang, H. and Zhai, C. Probabilistic models for
expert finding. In Proceedings of the 29th European
Conference on IR Research (2007). Springer-Verlag,
Berlin, Heidelberg, 418–430.
16. Ferilli, S., Di Mauro, N., Basile, T., Esposito, F.
and Biba, M. Automatic topics identification for
reviewer assignment. Advances in Applied Artificial
Intelligence. LNCS 4031 (2006). M. Ali and R.
Dapoigny, eds. Springer, Berlin Heidelberg, 721–730.
17. Flach, P. Machine Learning: The Art and Science of
Algorithms That Make Sense of Data. Cambridge
University Press, 2012.
18. Flach, P.A., Spiegler, S., Golénia, B., Price, S., Guiver, J., Herbrich, R., Graepel, T. and Zaki, M.J. Novel tools to streamline the conference review process: Experiences from SIGKDD'09. SIGKDD Explorations 11, 2 (Dec. 2009), 63–67.
19. Garg, N., Kavitha, T., Kumar, A., Mehlhorn, K., and
Mestre, J. Assigning papers to referees. Algorithmica
58, 1 (Sept. 2010), 119–136.
20. Goldsmith, J. and Sloan, R.H. The AI conference paper
assignment problem. In Proceedings of the 22nd AAAI
Conference on Artificial Intelligence (2007).
21. Harnad, S. Open access scientometrics and the U.K.
research assessment exercise. Scientometrics 79, 1
(Apr. 2009), 147–156.
22. Hettich, S. and Pazzani, M.J. Mining for proposal
reviewers: Lessons learned at the National Science
Foundation. In Proceedings of the 12th ACM SIGKDD
International Conference on Knowledge Discovery and
Data Mining (2006). ACM, New York, NY, 862–871.
23. Jennings, C. Quality and value: The true purpose of
peer review. Nature, 2006.
24. Karimzadehgan, M. and Zhai, C. Integer linear
programming for constrained multi-aspect committee
review assignment. Inf. Process. Manage. 48, 4 (July 2012).
25. Karimzadehgan, M., Zhai, C. and Belford, G. Multi-aspect expertise matching for review assignment.
In Proceedings of the 17th ACM Conference on
Information and Knowledge Management (2008).
ACM, New York, NY, 1113–1122.
26. Kou, N.M., U, L.H., Mamoulis, N. and Gong, Z. Weighted
coverage based reviewer assignment. In Proceedings of
the 2015 ACM SIGMOD International Conference on
Management of Data. ACM, New York, NY, 2031–2046.
27. Langford, J. and Guzdial, M. The arbitrariness of
reviews, and advice for school administrators.
Commun. ACM 58, 4 (Apr. 2015), 12–13.
28. Lawrence, P.A. The politics of publication. Nature 422
(Mar. 2003), 259–261.
29. Ley, M. The DBLP computer science bibliography:
Evolution, research issues, perspectives. In
Proceedings of the 9th International Symposium on
String Processing and Information Retrieval (London,
U. K., 2002). Springer-Verlag, 1–10.
30. Liu, X., Suel, T. and Memon, N. A robust model for
paper reviewer assignment. In Proceedings of the 8th
ACM Conference on Recommender Systems (2014).
ACM, New York, NY, 25–32.
31. Long, C., Wong, R.C., Peng, Y. and Ye, L. On good and
fair paper-reviewer assignment. In Proceedings of
the 2013 IEEE 13th International Conference on Data
Mining (Dallas, TX, Dec. 7–10, 2013), 1145–1150.
32. Mehlhorn, K., Vardi, M. Y. and Herbstritt, M. Publication
culture in computing research (Dagstuhl Perspectives
Workshop 12452). Dagstuhl Reports 2, 11 (2013).
33. Meyer, B., Choppy, C., Staunstrup, J. and van Leeuwen,
J. Viewpoint: Research evaluation for computer
science. Commun. ACM 52, 4 (Apr. 2009), 31–34.
34. Mimno, D. and McCallum, A. Expertise modeling for
matching papers with reviewers. In Proceedings of
the 13th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining. ACM, New
York, NY, 2007, 500–509.
35. Minka, T. Expectation propagation for approximate
Bayesian inference. In Proceedings of the 17th
Conference in Uncertainty in Artificial Intelligence.
J.S. Breese and D. Koller, eds. Morgan Kaufmann, 2001.
36. Price, S. and Flach, P.A. Mining and mapping the
research landscape. In Proceedings of the Digital
Research Conference. University of Oxford, Sept. 2013.
37. Price, S., Flach, P. A., Spiegler, S., Bailey, C. and Rogers,
N. SubSift Web services and workflows for profiling and
comparing scientists and their published works. Future
Generation Comp. Syst. 29, 2 (2013), 569–581.
38. Pritchard, A. et al. Statistical bibliography or
bibliometrics. J. Documentation 25, 4 (1969), 348–349.
39. Rodriguez, M.A. and Bollen, J. An algorithm to determine peer-reviewers. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (2008). ACM, New York, NY, 319–328.
40. Sidiropoulos, N.D. and Tsakonas, E. Signal processing
and optimization tools for conference review and
session assignment. IEEE Signal Process. Mag. 32, 3 (May 2015).
41. Surpatean, A., Smirnov, E.N. and Manie, N. Master orientation tool. In Proceedings of ECAI 2012. Frontiers in Artificial Intelligence and Applications 242. L. De Raedt, C. Bessière, D. Dubois, P. Doherty, P. Frasconi, F. Heintz and P.J.F. Lucas, eds. IOS Press, 2012, 995–996.
42. Tang, W., Tang, J., Lei, T., Tan, C., Gao, B. and Li, T.
On optimization of expertise matching with various
constraints. Neurocomputing 76, 1 (Jan. 2012), 71–83.
43. Tang, W., Tang, J. and Tan, C. Expertise matching via
constraint-based optimization. In Proceedings of the
2010 IEEE/WIC/ACM International Conference on Web
Intelligence and Intelligent Agent Technology (Vol 1).
IEEE Computer Society, Washington, DC, 2010, 34–41.
44. Taylor, C. J. On the optimal assignment of conference
papers to reviewers. Technical Report MS-CIS-08-30,
Computer and Information Science Department,
University of Pennsylvania, 2008.
45. Terry, D. Publish now, judge later. Commun. ACM 57, 1
(Jan. 2014), 44–46.
46. Vardi, M. Y. Scalable conferences. Commun. ACM 57, 1
(Jan. 2014), 5.
47. Yimam-Seid, D. and Kobsa, A. Expert finding systems
for organizations: Problem and domain analysis and
the DEMOIR approach. J. Organizational Computing
and Electronic Commerce 13 (2003).
Simon Price (firstname.lastname@example.org) is a Visiting Fellow in the Department of Computer Science at the University of Bristol, U.K., and a data scientist.
Peter A. Flach (email@example.com) is a professor of artificial intelligence in the Department of Computer Science at the University of Bristol, U.K., and editor-in-chief of the Machine Learning journal.
Copyright held by owners/authors.