Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11851/2656
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gupta, S. | -
dc.contributor.author | Kutlu, Mücahid | -
dc.contributor.author | Khetan, V. | -
dc.contributor.author | Lease, M. | -
dc.date.accessioned | 2019-12-25T14:01:59Z
dc.date.available | 2019-12-25T14:01:59Z
dc.date.issued | 2019
dc.identifier.citation | Gupta, S., Kutlu, M., Khetan, V., and Lease, M. (2019, April). Correlation, Prediction and Ranking of Evaluation Metrics in Information Retrieval. In European Conference on Information Retrieval (pp. 636-651). Springer, Cham. | en_US
dc.identifier.isbn | 978-3-030-15712-8
dc.identifier.issn | 0302-9743
dc.identifier.uri | https://link.springer.com/chapter/10.1007%2F978-3-030-15712-8_41 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11851/2656 | -
dc.description | 41st European Conference on Information Retrieval, ECIR (2019: Cologne, Germany)
dc.description.abstract | Given limited time and space, IR studies often report few evaluation metrics which must be carefully selected. To inform such selection, we first quantify correlation between 23 popular IR metrics on 8 TREC test collections. Next, we investigate prediction of unreported metrics: given 1–3 metrics, we assess the best predictors for 10 others. We show that accurate prediction of MAP, P@10, and RBP can be achieved using 2–3 other metrics. We further explore whether high-cost evaluation measures can be predicted using low-cost measures. We show RBP(p = 0.95) at cutoff depth 1000 can be accurately predicted given measures computed at depth 30. Lastly, we present a novel model for ranking evaluation metrics based on covariance, enabling selection of a set of metrics that are most informative and distinctive. A greedy-forward approach is guaranteed to yield sub-modular results, while an iterative-backward method is empirically found to achieve the best results. © Springer Nature Switzerland AG 2019. | en_US
dc.description.sponsorship | This work was made possible by NPRP grant #NPRP 7-1313-1-245 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
dc.language.iso | en | en_US
dc.publisher | Springer Verlag | en_US
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | Information retrieval | en_US
dc.subject | search engines | en_US
dc.subject | relevance assessments | en_US
dc.title | Correlation, Prediction and Ranking of Evaluation Metrics in Information Retrieval | en_US
dc.type | Conference Object | en_US
dc.department | Faculties, Faculty of Engineering, Department of Computer Engineering | en_US
dc.department | Fakülteler, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü | tr_TR
dc.identifier.volume | 11437
dc.identifier.startpage | 636
dc.identifier.endpage | 651
dc.authorid | 0000-0002-4102-803X | -
dc.identifier.scopus | 2-s2.0-85064873308 | en_US
dc.institutionauthor | Kutlu, Mücahid | -
dc.identifier.doi | 10.1007/978-3-030-15712-8_41 | -
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) | en_US
dc.identifier.scopusquality | Q2 | -
item.openairetype | Conference Object | -
item.languageiso639-1 | en | -
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
crisitem.author.dept | 02.3. Department of Computer Engineering | -
Appears in Collections:Bilgisayar Mühendisliği Bölümü / Department of Computer Engineering
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
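
As a rough illustration of the two ideas summarized in the abstract above — predicting an unreported metric from a few reported ones, and greedy-forward selection of metrics driven by covariance — the minimal Python sketch below uses synthetic per-system scores. The paper's actual regressors, selection objective, and data are not given in this record, so the ordinary-least-squares fit, the summed-absolute-covariance criterion, and all numbers here are assumptions made purely for illustration.

```python
import numpy as np

# Synthetic per-system scores for four IR metrics (rows = systems, columns = metrics).
# These values are illustrative only; they are not data from the paper.
rng = np.random.default_rng(0)
base = rng.uniform(0.2, 0.6, size=(50, 1))                        # shared "system quality"
scores = np.clip(base + rng.normal(0.0, 0.05, size=(50, 4)), 0.0, 1.0)
metric_names = ["MAP", "P@10", "NDCG", "RBP(p=0.95)"]

# Part 1: predict an unreported metric (MAP) from two reported ones (P@10, NDCG)
# via ordinary least squares -- one plausible predictor, assumed for illustration.
X = np.column_stack([np.ones(len(scores)), scores[:, 1], scores[:, 2]])
y = scores[:, 0]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
print(f"RMSE of MAP predicted from P@10 and NDCG: {rmse:.4f}")

# Part 2: greedy-forward metric selection based on covariance.
# Assumed criterion: repeatedly add the metric with the largest summed absolute
# covariance to the metrics not yet selected (a stand-in for the paper's objective).
cov = np.cov(scores, rowvar=False)
selected, remaining = [], list(range(len(metric_names)))
for _ in range(2):  # select two metrics
    best = max(remaining, key=lambda i: sum(abs(cov[i, j]) for j in remaining if j != i))
    selected.append(best)
    remaining.remove(best)
print("Greedy-forward selection:", [metric_names[i] for i in selected])
```

The greedy loop mirrors the shape of the "greedy-forward" approach the abstract refers to (add the most informative remaining metric one step at a time); the concrete scoring function is only a placeholder for the covariance-based model described in the paper.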