[Sigvis-l] CFP: JCDL Music IR, Music Digital Library Evaluation Workshop
J. Stephen Downie
jdownie at uiuc.edu
Thu May 16 15:57:54 EDT 2002
Greetings colleagues:
This CFP is intended to encourage all who have an interest in Music IR
and Music Digital Library research to consider submitting to, and/or
participating in, the upcoming Workshop on the Creation of Standardized
Test Collections, Tasks, and Metrics for Music Information Retrieval
(MIR) and Music Digital Library (MDL) Evaluation, to be held at the
Joint ACM/IEEE Conference on Digital Libraries (JCDL), Portland, OR,
July 14-18, 2002. The original Workshop outline can be found at:
http://www.ohsu.edu/jcdl/ws.html#W4. The Workshop itself will be held
Thursday, 18 July 2002 at the JCDL conference venue.
Please visit the following URLs for important details:
http://music-ir.org/MIR_MDL_evaluation.html
http://music-ir.org/JCDL_Workshop_Info.html
Please forward this to anyone you think might be interested.
Cheers, and thanks.
J. Stephen Downie
************************************************************
Open Workshop Questions and Topics:
The following list of open questions (neither exclusive nor
all-encompassing) illustrates just a few of the many possible paper and
discussion topics to be tackled at the Workshop:
--As a music librarian, are there issues that evaluation standards must
address for their results to be credible? Do you know of possible
collections that might form the basis of a test collection? What prior
research should we be considering?
--As a musicologist, what issues need examination that might otherwise
be overlooked?
--As a digital library (DL) developer, what standards for evaluation
should we borrow from the traditional DL community? Are there perils or
pitfalls that we should consider?
--As an audio engineer, what do you need to test your approaches? What
methods have worked in other contexts that might or might not work in
the MIR/MDL contexts?
--As an information retrieval specialist, what lessons have you learned
from traditional IR evaluation frameworks? Any suggestions about what
to avoid or consider as we build our MIR/MDL evaluation frameworks
from "scratch"?
--As an intellectual property expert, what rights and responsibilities
will we have as we strive to build and distribute our test collections?
--As an interface/human computer interaction (HCI) expert, what tests
should we consider to validate our many different types of interfaces?
--As a business person, what format of results will help you make
selection decisions? Are there business research models and methods
that should be considered?
--As a computer scientist, what are the strengths and weaknesses of the
CS approach to validation in the MIR/MDL context?
etc.
These are just a few of the possible questions/topics that will be
addressed. The underlying questions are:
1. How do we determine, and then appropriately classify, the tasks
that should make up the legitimate purviews of the MIR/MDL domains?
2. What do we mean by "success"? What do we mean by "failure"?
3. How will we decide that one MIR/MDL approach works better than
another?
4. How do we best decide which MIR/MDL approach is best suited for a
particular task?
--
**********************************************************
"Research funding makes the world a better place"
**********************************************************
J. Stephen Downie, PhD
Assistant Professor,
Graduate School of Library and Information Science; and,
Fellow, National Center for Supercomputing Applications (2000-01)
University of Illinois at Urbana-Champaign
(217) 351-5037