Overview of the TREC 2020 Fair Ranking track

A. Biega, F. Diaz, M. D. Ekstrand, S. Feldman, S. Kohlmeier
TREC 2020
For 2020, we again adopted an academic search task: the corpus consists of academic article abstracts, and the queries were submitted to a production academic search engine. The central goal of the Fair Ranking track is to provide fair exposure to different groups of authors (a group fairness framing). We recognize that there may be multiple group definitions (e.g., based on demographics, stature, or topic) and expected submitted systems to be robust to these. Participants were therefore asked to develop systems that optimize for both fairness and relevance under arbitrary group definitions; the exact group definitions were not revealed until after the evaluation runs had been submitted.

The track comprised two tasks, reranking and retrieval, with a shared evaluation. Rerank runs sorted a query-dependent list of documents to simultaneously provide fairness and relevance. Retrieval runs returned 100-item rankings from the corpus in response to a query string. The track organizers provided a sequence of queries, each accompanied by a set of documents of varying size. Both tasks used the same queries; participants were asked not to use the test queries' rerank sets when training their retrieval models. A toy illustration of group exposure follows below.
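To make the notion of group exposure concrete, the sketch below accumulates position-discounted exposure per author group for a single ranking. It is only an illustration under an assumed geometric position-discount model; the function name, group labels, and discount parameter are hypothetical, and this is not the track's official evaluation measure.

# Illustrative sketch of position-based group exposure in a single ranking.
# NOT the track's official metric; it only conveys the idea that
# higher-ranked documents give their authors' groups more exposure.
# The geometric discount model and group labels are assumptions.

from collections import defaultdict

def group_exposure(ranking, doc_groups, gamma=0.5):
    """Accumulate geometrically discounted exposure per author group.

    ranking:    list of document ids, best first
    doc_groups: dict mapping document id -> set of group labels
    gamma:      per-position continuation probability (assumed model)
    """
    exposure = defaultdict(float)
    for rank, doc_id in enumerate(ranking):
        weight = gamma ** rank  # exposure decays with rank position
        for group in doc_groups.get(doc_id, ()):
            exposure[group] += weight
    return dict(exposure)

if __name__ == "__main__":
    ranking = ["d1", "d2", "d3", "d4"]
    doc_groups = {"d1": {"groupA"}, "d2": {"groupB"},
                  "d3": {"groupA"}, "d4": {"groupB"}}
    print(group_exposure(ranking, doc_groups))
    # {'groupA': 1.25, 'groupB': 0.625}

Comparing these per-group totals against a target allocation (for instance, one proportional to group relevance) is the general pattern behind exposure-based fairness evaluation.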

BibTeX

@inproceedings{trec-fair-ranking-2020,
  author    = {Asia J. Biega and Fernando Diaz and Michael D. Ekstrand and Sergey Feldman and Sebastian Kohlmeier},
  title     = {{Overview of the TREC 2020 Fair Ranking Track}},
  booktitle = {{The Twenty-Ninth Text REtrieval Conference (TREC 2020) Proceedings}},
  year      = {2020}
}