[Sigdl-l] FW: [Asis-l] CFP: JCDL Music IR, Music Digital Library Evaluation Workshop
Suzie Allard
slalla0@pop.uky.edu
Thu, 16 May 2002 21:46:37 -0400
For those SIG-DL members who may not be on the ASIS-L list.
Suzie
----------
From: "J. Stephen Downie" <jdownie@uiuc.edu>
Reply-To: jdownie@uiuc.edu
Date: Thu, 16 May 2002 13:58:04 -0500
To: undisclosed-recipients: ;
Subject: [Asis-l] CFP: JCDL Music IR, Music Digital Library Evaluation Workshop
Greetings colleagues:
This CFP is intended to encourage all who have an interest in Music IR
and Music Digital Library research to consider submitting to, and/or
participating in, the upcoming Workshop on the Creation of Standardized
Test Collections, Tasks, and Metrics for Music Information Retrieval
(MIR) and Music Digital Library (MDL) Evaluation, to be held at the
Joint ACM/IEEE Conference on Digital Libraries (JCDL), Portland, OR,
July 14-18, 2002. The original Workshop outline can be found at
http://www.ohsu.edu/jcdl/ws.html#W4. The Workshop itself will take
place on Thursday, 18 July 2002, at the JCDL conference venue.
Please visit the following URLs for important details:
http://music-ir.org/MIR_MDL_evaluation.html
http://music-ir.org/JCDL_Workshop_Info.html
Please forward this to anyone you think might be interested.
Cheers, and thanks.
J. Stephen Downie
************************************************************
Open Workshop Questions and Topics:
The following list of open questions is neither exclusive nor
all-encompassing; it should give you a sense of just a few of the many
possible paper and discussion topics to be tackled at the Workshop:
--As a music librarian, are there issues that evaluation standards must
address for their results to be credible? Do you know of possible
collections that might form the basis of a test collection? What prior
research should we be considering?
--As a musicologist, what issues need examination that may currently be
overlooked?
--As a digital library (DL) developer, what standards for evaluation
should we borrow from the traditional DL community? Are there perils or
pitfalls that we should consider?
--As an audio engineer, what do you need to test your approaches? What
methods have worked in other contexts that might or might not work in
the MIR/MDL contexts?
--As an information retrieval specialist, what lessons have you learned
from traditional IR evaluation frameworks? Any suggestions about what
to avoid or consider as we build our MIR/MDL evaluation frameworks from
"scratch"?
--As an intellectual property expert, what rights and responsibilities
will we have as we strive to build and distribute our test collections?
--As an interface/human computer interaction (HCI) expert, what tests
should we consider to validate our many different types of interfaces?
--As a business person, what format of results will help you make
selection decisions? Are there business research models and methods that
should be considered?
--As a computer scientist, what are the strengths and weaknesses of the
CS approach to validation in the MIR/MDL context?
etc.
These are just a few of the possible questions/topics that will be
addressed. The underlying questions are:
1. How do we determine, and then appropriately classify, the tasks that
should make up the legitimate purviews of the MIR/MDL domains?
2. What do we mean by "success"? What do we mean by "failure"?
3. How will we decide that one MIR/MDL approach works better than
another? (A minimal illustration of a traditional IR-style metric
appears below.)
4. How do we decide which MIR/MDL approach is best suited for a
particular task?
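As a rough illustration of the kind of traditional IR evaluation that
underlies question 3, the sketch below computes simple precision and
recall for a single hypothetical MIR query against a hypothetical test
collection with known relevance judgments. The document IDs, ranked
results, and judgments are invented for illustration only and do not
come from any existing MIR/MDL test collection.

# Minimal sketch (hypothetical data): precision and recall for a single
# MIR query, in the style of traditional IR test-collection evaluation.

# Relevance judgments for one query (e.g., "melodies similar to tune X").
relevant = {"tune_003", "tune_017", "tune_042", "tune_101"}

# Ranked list returned by a hypothetical MIR system for the same query.
retrieved = ["tune_017", "tune_250", "tune_003", "tune_099", "tune_101"]

retrieved_relevant = [doc for doc in retrieved if doc in relevant]

# Precision: fraction of retrieved items that are relevant.
precision = len(retrieved_relevant) / len(retrieved)
# Recall: fraction of relevant items that were retrieved.
recall = len(retrieved_relevant) / len(relevant)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# -> precision = 0.60, recall = 0.75

Whether precision/recall-style measures are even the right starting
point for MIR/MDL tasks is exactly the sort of question the Workshop is
meant to address.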
--
**********************************************************
"Research funding makes the world a better place"
**********************************************************
J. Stephen Downie, PhD
Assistant Professor,
Graduate School of Library and Information Science; and,
Fellow, National Center for Supercomputing Applications (2000-01)
University of Illinois at Urbana-Champaign
(217) 351-5037
_______________________________________________
Asis-l mailing list
Asis-l@asis.org
http://mail.asis.org/mailman/listinfo/asis-l