[Asis-l] CFP: IRJ Special Issue on Information Retrieval Evaluation using Test Collections
Kelly, Diane
dianek at email.unc.edu
Fri Oct 10 13:07:21 EDT 2014
Call for Papers, Special Issue of Information Retrieval, a Springer Journal
Information Retrieval Evaluation Using Test Collections
-------------------------------------------------------
Information retrieval has a strong history of experimental evaluation. Test collections -- consisting of a set of queries, a collection of documents to be searched, and relevance judgments indicating which documents are relevant for which queries -- are perhaps the most widely used tool for evaluating the effectiveness of search systems.
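A minimal sketch of this structure, using invented toy data (the query, documents, and judgments below are hypothetical): queries, documents, and relevance judgments (qrels) can be held in plain Python dictionaries, and a simple effectiveness measure such as precision at k can be computed directly from them.

  # Toy test collection: all identifiers and texts are invented for illustration.
  documents = {
      "d1": "evaluating retrieval systems with test collections",
      "d2": "deep learning for image classification",
      "d3": "pooling strategies for relevance assessment",
  }
  queries = {
      "q1": "test collection evaluation",
  }
  # qrels: for each query, the set of documents judged relevant.
  qrels = {
      "q1": {"d1", "d3"},
  }

  def precision_at_k(ranking, relevant, k):
      """Fraction of the top-k ranked documents that are judged relevant."""
      return sum(1 for doc_id in ranking[:k] if doc_id in relevant) / k

  # Ranked output of a hypothetical retrieval system for q1.
  system_ranking = ["d1", "d2", "d3"]
  print(precision_at_k(system_ranking, qrels["q1"], k=2))  # 0.5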
Building on pioneering work carried out by Cyril Cleverdon and colleagues at Cranfield University in the 1960s, test collections have flourished in popularity in large part thanks to evaluation campaigns such as the Text REtrieval Conference (TREC), the Cross-Language Evaluation Forum (CLEF), the NII Testbeds and Community for Information Access Research project (NTCIR), and the Forum for Information Retrieval Evaluation (FIRE).
Test collections have played a vital role in providing a basis for measuring and comparing the effectiveness of different information retrieval algorithms and techniques. However, test collections also present a number of issues: they are expensive and complex to construct, and they instantiate a particular abstraction of the retrieval process.
Topics of interest for this special issue include but are not limited to:
Approaches for constructing new test collections
- choosing representative topics and documents
- minimizing effort
Test collection stability
- number of topics and documents required
- completeness of judgments (pooling, stratified sampling, ...)
Evaluation measures
- choosing measures
- relationship with higher-level search tasks and user goals
- relationship with collection features (assumptions regarding incomplete judgments, ...)
Relevance judgments
- approaches for gathering judgments (crowd-sourcing, dedicated judges, interfaces and support systems for relevance assessments, ...)
- types of judgments (single or multiple assessments, binary or multi-level, ...)
- human factors (topic creators versus assigned topics, assessor consistency, instructions to assessors, expertise, potential biases, ...)
Test collections as representations of the search process
- assumptions about search behaviour
- user simulation
Relationship between evaluation using test collections and other approaches
- test collections in comparison with user studies and log analyses
Reflections on existing and past initiatives
- TREC, NTCIR, CLEF, FIRE, ...
Important Dates
---------------
Initial submissions due: 30 April 2015
Initial reviewer feedback: 18 June 2015
Revised submissions due: 23 July 2015
Final decisions: 27 August 2015
Special Issue Editors
---------------------
Ben Carterette, University of Delaware
Diane Kelly, University of North Carolina
Falk Scholer, RMIT University
Diane Kelly / Associate Professor
School of Information and Library Science
University of North Carolina at Chapel Hill
216 Lenoir Drive, Manning Hall, RM 10, CB#3360
Chapel Hill, NC 27599-3360