Please use this identifier to cite or link to this item:
Title: Correlation, Prediction and Ranking of Evaluation Metrics in Information Retrieval
Authors: Gupta, S.; Kutlu, M.; Khetan, V.; Lease, M.
Issue Date: 2019
Publisher: Springer Verlag
Source: Gupta, S., Kutlu, M., Khetan, V., and Lease, M. (2019, April). Correlation, Prediction and Ranking of Evaluation Metrics in Information Retrieval. In European Conference on Information Retrieval (pp. 636-651). Springer, Cham.
Abstract: Given limited time and space, IR studies often report few evaluation metrics, which must be carefully selected. To inform such selection, we first quantify the correlation between 23 popular IR metrics on 8 TREC test collections. Next, we investigate prediction of unreported metrics: given 1–3 metrics, we assess the best predictors for 10 others. We show that accurate prediction of MAP, P@10, and RBP can be achieved using 2–3 other metrics. We further explore whether high-cost evaluation measures can be predicted using low-cost measures, and show that RBP(p = 0.95) at cutoff depth 1000 can be accurately predicted from measures computed at depth 30. Lastly, we present a novel model for ranking evaluation metrics based on covariance, enabling selection of a set of metrics that are most informative and distinctive. A greedy-forward approach is guaranteed to yield sub-modular results, while an iterative-backward method is empirically found to achieve the best results. © Springer Nature Switzerland AG 2019.
Description: 41st European Conference on Information Retrieval, ECIR 2019, Cologne, Germany
URI: https://link.springer.com/chapter/10.1007%2F978-3-030-15712-8_41
|Appears in Collections:||Bilgisayar Mühendisliği Bölümü / Department of Computer Engineering|
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
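The RBP measure named in the abstract is Moffat and Zobel's rank-biased precision, which weights relevance at rank i by p^(i-1), so a persistence parameter of p = 0.95 places substantial weight on documents well below the top ranks. A minimal sketch of the standard RBP formula (the function name `rbp` and its arguments are illustrative, not from the paper):

```python
def rbp(relevances, p=0.95, depth=1000):
    """Rank-biased precision: RBP(p) = (1 - p) * sum_i r_i * p^(i - 1),
    where r_i is the binary relevance of the document at rank i,
    truncated at the given cutoff depth."""
    return (1 - p) * sum(r * p ** i for i, r in enumerate(relevances[:depth]))
```

Because the geometric weights sum to at most 1, RBP over an all-relevant ranking of length d equals 1 - p^d; with p = 0.95 the first 30 ranks already account for 1 - 0.95^30 ≈ 0.785 of the attainable score, which is why prediction of deep RBP from depth-30 measures is plausible.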
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.