[Asis-l] An Image Retrieval Benchmarking Service? Comments Requested
J. Trant
jtrant at archimuse.com
Tue Apr 20 15:00:02 EDT 2004
Dear Colleagues,
Comments are requested on the following study commissioned by CLIR
into the feasibility of an image retrieval benchmarking service, and
its possible role in speeding the development and deployment of image
retrieval technology for the digital library.
Please forward your comments to me or to CLIR c/o <ksmith at clir.org>.
I'd appreciate it if you would share this request for comments
widely. The issues cut across many communities, and breadth of
interest and commitment is critical if the concept is to be
successfully developed.
Thank you.
jennifer.
Image Retrieval Benchmark Database Service:
A Needs Assessment and Preliminary Development Plan
A Report Prepared for the Council on Library and Information Resources
and the Coalition for Networked Information
Jennifer Trant, Archives & Museum Informatics
REPORT BODY
Text: http://www.clir.org/pubs/reports/trant04/tranttext.htm
PDF: http://www.clir.org/pubs/reports/trant04/tranttext.pdf
REFERENCES
Text: http://www.clir.org/pubs/reports/trant04/trantrefs.htm
PDF: http://www.clir.org/pubs/reports/trant04/trantrefs.pdf
EXECUTIVE SUMMARY
The rapid increase in the quantity of visual materials in digital
libraries, supported by significant advances in digital imaging
technologies, has not been matched by a corresponding advance in
image retrieval technologies and techniques. Digital librarians sense
that much could be done to improve access to visual collections and
hope, perhaps vainly, that users' needs to identify relevant digital
visual resources might be met more satisfactorily through search
strategies based on visual characteristics rather than on the textual
metadata associated with images, which is expensive to produce.
However, digital librarians currently have no tools for evaluating
either content-based or metadata-based image retrieval systems.
Consequently, they have difficulty assessing existing systems of
image access, evaluating proposed changes in these systems, or
comparing metadata-based and content-based image retrieval.
Some have proposed benchmarking as a solution to this problem. An
image retrieval benchmark database could provide a controlled context
within which various approaches could be tested. Equally important,
it might provide a focus for image retrieval research and help bridge
the significant divide between researchers exploring these two search
paradigms: metadata-based vs. content-based image retrieval. If so,
such a database could spur advances in research, as comparative
results make it possible to evaluate the effectiveness of particular
strategies and thereby add value to studies supported by many funding
agencies.
Creating an image retrieval benchmarking service would be a
significant undertaking. A benchmarking database is more than a
collection of images. Benchmarking requires a set of queries to be
put to that test collection. Each image in the test collection must
be assessed to determine whether it is relevant to that query.
Assessing the performance of systems requires a set of evaluation
metrics that make it possible to compare one system with another and
to rank results. Developing a test collection requires an investment
in data collection, documentation, enhancement, and distribution.
Most significantly, maintaining an image retrieval benchmarking
service requires that a community of researchers make a long-term
commitment to its use. Without a community vested in the development
of the database, and publishing research based on it, the collection
remains a chimerical solution to advancing the state of research and
improving the retrieval of visual materials in the digital library.
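To make the evaluation requirements above concrete, here is a minimal sketch of how a benchmarking service might score competing retrieval systems against human relevance judgments. The image identifiers, system names, and judgment format are invented for illustration; the metrics shown (precision at k and average precision) are standard information-retrieval measures, not ones the report itself prescribes.

```python
# Hypothetical illustration: scoring ranked result lists against
# relevance judgments for one benchmark query. All data here is
# invented; a real service would draw queries and judgments from
# a documented test collection.

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k returned images judged relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for image_id in top_k if image_id in relevant_ids) / k

def average_precision(ranked_ids, relevant_ids):
    """Mean of precision taken at each rank where a relevant image appears,
    normalized by the total number of relevant images (unretrieved
    relevant images therefore count against the score)."""
    hits = 0
    precisions = []
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

# Relevance judgments: which test-collection images an assessor
# deemed relevant to this query.
relevant = {"img003", "img007", "img010"}

# Ranked lists returned by two hypothetical systems, one
# metadata-based and one content-based, for the same query.
metadata_based = ["img003", "img001", "img007", "img002", "img010"]
content_based = ["img004", "img003", "img005", "img007", "img009"]

print("metadata-based AP:", average_precision(metadata_based, relevant))
print("content-based AP:", average_precision(content_based, relevant))
```

With shared queries, judgments, and metrics like these, results from different research groups become directly comparable, which is the core of the benchmarking proposal.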
--
__________
J. Trant jtrant at archimuse.com
Partner & Principal Consultant phone: +1 416 691 2516
Archives & Museum Informatics fax: +1 416 352 6025
158 Lee Ave, Toronto
Ontario M4E 2P3 Canada http://www.archimuse.com
__________