of the features used. Overall, such
modular development makes it possible, in
many cases, to separate the development of non-game-theoretic aspects
from that of those which rely on game theory.
The work we surveyed (for example,
on nondeterministic ranking functions2,3) provides some initial guidelines
about how one would devise solutions
that account for incentives. We believe
the suite of guidelines will broaden
with more work in these directions, as
has been the case, for example, for neural IR:
current state-of-the-art neural IR algorithms are significantly different from
those proposed a few years ago, given
the experience that has been gained.
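To make the idea of a nondeterministic ranking function concrete, here is a minimal illustrative sketch (not the specific mechanism of the cited papers): instead of deterministically ordering documents by score, a permutation is sampled Plackett-Luce style, so that documents with close scores trade positions, weakening the incentive for tiny score-improving manipulations.

```python
import math
import random

def sample_ranking(scores, temperature=1.0, rng=random):
    """Sample a ranking Plackett-Luce style: repeatedly draw the next
    document with probability proportional to exp(score / temperature).
    Higher temperature means more randomness, so small score gains buy
    a publisher less expected exposure."""
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        weights = [math.exp(scores[i] / temperature) for i in remaining]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for idx, w in zip(remaining, weights):
            acc += w
            if r <= acc:
                ranking.append(idx)
                remaining.remove(idx)
                break
    return ranking

# Documents 0 and 1 have near-identical scores and often swap positions;
# document 2 rarely reaches the top.
print(sample_ranking([2.0, 1.9, 0.1], temperature=0.5))
```

The `temperature` parameter is the knob trading off retrieval effectiveness against robustness to manipulation; a deterministic ranker is the zero-temperature limit.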
The spectrum of challenges that
remain to be addressed is huge. One
example is accounting for the interactions between the different types of incentivized players involved, such as the publishers and users of a
search engine. Indeed, users of the
search engine are not only incentivized consumers of retrieved items,
but also interested parties that affect
the search engine’s behavior (specifically, its ranking function) via their
interactions. That is, in commercial
search engines, users’ clicks on retrieved results are treated as implicit
relevance feedback.19 Query reformulations are an additional implicit relevance feedback signal. Thus, modeling user interactions with the search
system is an important part of the
analysis of any search engine and the
design of retrieval algorithms. In our
setting, the emerging challenge is
to simultaneously model, using a
game-theoretic approach, the effects
of the strategic behavior of interested
users together with that of interested publishers. For example, to contrast ranking
mechanisms in terms of utility and
social welfare, we can potentially use
Bayesian games, rather than the perfect-information games used in our initial
work,2,3 wherein clicks and their dynamics are employed to devise relevance estimates.
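In its simplest textbook form, deriving a relevance estimate from clicks can be sketched as a smoothed click-through rate; this is an illustrative baseline only (commercial systems use far more sophisticated click models that correct for position bias, among other effects):

```python
def ctr_relevance(clicks, impressions, alpha=1.0, beta=1.0):
    """Smoothed click-through-rate relevance estimate: the posterior mean
    of a Beta(alpha, beta) prior updated with observed clicks and
    impressions. Smoothing keeps rarely shown documents from getting
    extreme estimates from a handful of observations."""
    return (clicks + alpha) / (impressions + alpha + beta)

# A well-observed document vs. a sparsely observed one:
well_observed = ctr_relevance(30, 100)  # close to the raw rate, about 0.304
sparse = ctr_relevance(1, 2)            # pulled toward the prior mean: 0.5
```

The strategic wrinkle discussed above is that both quantities are under players' influence: publishers shape the content behind each impression, and users' click decisions feed back into the ranking function itself.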
There are additional “micro-level”
challenges; examples include devising
content-based and other (for example,
hyperlink-based) relevance estimates
that account for incentivized publishers;
moving from list-based effectiveness
evaluation to amortized and equilibrium-based
evaluation3,7 given the dynamics
of the retrieval setting; and adapting
specialized retrieval approaches (for example,
those based on fusion or federation)
to incentives-based settings.
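As background for the fusion-based approaches mentioned above, a minimal sketch of CombSUM, a standard score-fusion method (shown here in its classical, non-incentive-aware form), is:

```python
def combsum(runs):
    """CombSUM fusion: a document's fused score is the sum of its
    min-max-normalized scores across the input ranked lists; documents
    appearing in only some lists simply contribute fewer terms."""
    fused = {}
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0  # avoid division by zero for constant runs
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + (score - lo) / span
    return sorted(fused, key=fused.get, reverse=True)

# d2 is ranked high by both retrieval runs and tops the fused list.
run_a = {"d1": 9.0, "d2": 8.5, "d3": 1.0}
run_b = {"d2": 0.9, "d3": 0.8, "d1": 0.1}
print(combsum([run_a, run_b]))  # ['d2', 'd1', 'd3']
```

Adapting such a method to an incentives-based setting would, for instance, require asking whether the fusion rule itself creates manipulation opportunities for publishers, which is exactly the kind of open question raised here.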
Our study is part of a more general
effort to apply mechanism design
to data science (see http://mdds.net.technion.ac.il).
Indeed, while algorithmic game theory has had remarkable success in addressing incentives in a variety
of settings, such as fundamental resource allocation problems
(for example, auctions) and congestion control,29 both the
repertoire of games and the techniques
to be employed need to be expanded
in order to deal with data science algorithms. While we referred mainly to
search and recommendation systems
in this article, the site mentioned here
shows applications to prediction, regression, segmentation, diffusion, and
others. Two of the major techniques
used are approximate mechanism design without money30 and dueling algorithms,16 both of which are
beyond the scope of this article.
References
1. Bahar, G., Smorodinsky, R., and Tennenholtz, M.
Economic recommendation systems: One page
abstract. In Proceedings of EC, 2016, 757.
2. Ben-Basat, R., Tennenholtz, M., and Kurland, O.
The probability ranking principle is not optimal in
adversarial retrieval settings. In Proceedings of
ICTIR, 2015, 51–60.
3. Ben-Basat, R., Tennenholtz, M., and Kurland, O. A
game theoretic analysis of the adversarial retrieval
setting. J. Artif. Intell. Res. 60 (2017), 1127–1164.
4. Ben-Porat, O. and Tennenholtz, M. Best response
regression. In Proceedings of NIPS, 2017, 1498–1507.
5. Ben-Porat, O. and Tennenholtz, M. A Game-theoretic
approach to recommendation systems with strategic
content providers. In Proceedings of NeurIPS, 2018.
6. Bendersky, M., Croft, W.B., and Diao, Y. Quality-biased
ranking of Web documents. In Proceedings of WSDM.
7. Biega, A. J., Gummadi, K. P., and Weikum, G. Equity of
attention: Amortizing individual fairness in rankings. In
Proceedings of SIGIR, 2018, 405–414.
8. Bubeck, S. and Cesa-Bianchi, N. Regret analysis
of stochastic and nonstochastic multi-armed
bandit problems. Foundations and Trends in
Machine Learning 5, 1 (2012), 1–122; https://doi.
9. Che, Y. K. and Horner, J. O. Optimal Design for Social
Learning. Cowles Foundation Discussion Paper No.
10. Cramton, P., Shoham, Y., and Steinberg, R. An overview
of combinatorial auctions. SIGecom Exchanges 7, 1
11. Dalvi, N., Domingos, P., Mausam, Sanghai, S., and
Verma, D. Adversarial classification. In Proceedings of
KDD, 2004, 99–108.
12. Frazier, P.I., Kempe, D., Kleinberg, J. M., and Kleinberg,
R. Incentivizing exploration. In Proceedings of EC,
13. Goodfellow, I.J., Shlens, J., and Szegedy, C. Explaining
and harnessing adversarial examples. In Proceedings
of ICLR, 2015.
14. Goren, G., Kurland, O., Tennenholtz, M., and Raiber,
F. Ranking robustness under adversarial document
manipulations. In Proceedings of SIGIR, 2018, 395–404.
15. Gyöngyi, Z. and Garcia-Molina, H. Web spam taxonomy.
In Proceedings of AIRWeb, 2005, 39–47.
16. Immorlica, N., Kalai, A. T., Lucier, B., Moitra, A.,
Postlewaite, A., and Tennenholtz, M. Dueling
algorithms. In Proceedings of STOC, 2011, 215–224.
17. Izsak, P., Raiber, F., Kurland, O., and Tennenholtz, M.
The search duel: A response to a strong ranker. In
Proceedings of SIGIR, 2014, 919–922.
18. Jardine, N. and van Rijsbergen, C.J. The use of
hierarchic clustering in information retrieval.
Information Storage and Retrieval 7, 5 (1971),
19. Joachims, T., Granka, L., Pan, B., Hembrooke, H., and
Gay, G. Accurately interpreting clickthrough data as
implicit feedback. In Proceedings of SIGIR, 2005,
20. Jones, K.S., Walker, S., and Robertson, S.E. A
probabilistic model of information retrieval:
development and comparative experiments—Part 1.
Information Processing and Management 36, 6 (2000),
21. Koutsoupias, E. and Papadimitriou, C. Worst-case
equilibria. In Proceedings of STACS, 1999.
22. Kremer, I., Mansour, Y., and Perry, M. Implementing
the wisdom of the crowd. J. Political Economy 122
23. Lafferty, J. and Zhai, C. Probabilistic relevance models
based on document and query generation. Language
Modeling and Information Retrieval. Kluwer Academic
Publishers, 2003, 1–10.
24. Lavrenko, V. and Croft, W. B. Relevance-based language
models. In Proceedings of SIGIR, 2001, 120–127.
25. Liu, T. Learning to Rank for Information Retrieval.
26. Mansour, Y., Slivkins, A., and Syrgkanis, V. Bayesian
incentive-compatible bandit exploration. In
Proceedings of EC, 2015.
27. Mansour, Y., Slivkins, A., Syrgkanis, V., and Wu, Z. S.
Bayesian Exploration: Incentivizing Exploration
in Bayesian Games. CoRR (2016); http://arxiv.org/
28. Mitra, B. and Craswell, N. Neural Models for
Information Retrieval. CoRR (2017); http://arxiv.org/
29. Nisan, N., Roughgarden, T., Tardos, E., and Vazirani,
V. V. Algorithmic Game Theory. Cambridge University Press.
30. Procaccia, A. and Tennenholtz, M. Approximate
mechanism design without money. In Proceedings of
31. Radinsky, K. and Bennett, P.N. Predicting content
change on the Web. In Proceedings of WSDM, 2013,
32. Raifer, N., Raiber, F., Tennenholtz, M., and Kurland,
O. Information retrieval meets game theory: The
ranking competition between documents’ authors. In
Proceedings of SIGIR, 2017, 465–474.
33. Robertson, S.E. The Probability Ranking Principle in
IR. J. Documentation (1977), 294–304. Reprinted in
Readings in Information Retrieval, K. Sparck Jones
and P. Willett (eds), 1997, 281–286.
34. Roughgarden, T. and Tardos, E. How bad is selfish
routing? JACM 49, 2 (April 2002), 236–259.
35. Salton, G., Wong, A., and Yang, C.S. A vector space
model for automatic indexing. Commun. ACM 18, 11
(Nov. 1975), 613–620.
36. Santos, A.S.R., Pasini, B., and Freire, J. A first study
on temporal dynamics of topics on the Web. In
Proceedings of WWW, 2016, 849–854.
37. Shapley, L.S. A value for n-person games.
Contributions to the Theory of Games II, Harold W.
Kuhn and Albert W. Tucker, (eds.). Princeton University
Press, Princeton, NJ, 1953, 307–317.
38. Varian, H.R. Online ad auctions. American Economic
Review 99, 2 (2009).
39. Wang, L., Bennett, P.N., and Collins-Thompson, K.
Robust ranking models via risk-sensitive optimization.
In Proceedings of SIGIR, 2012, 761–770.
40. Yang, G. H., Sloan, M., and Wang, J. Dynamic
Information Retrieval Modeling. Morgan & Claypool
Moshe Tennenholtz is a professor in the Faculty of
Industrial Engineering and Management at the Technion,
Haifa, Israel, where he holds the Sondheimer Technion
Academic Chair.

Oren Kurland is a professor in the Faculty of Industrial
Engineering and Management at the Technion, Haifa,
Israel.
© 2019 ACM 0001-0782/19/12 $15.00