[Asis-l] CFP: SIGIR 2006 Workshop on Evaluating Exploratory Search Systems
Richard Hill
rhill at asis.org
Tue Apr 18 14:07:45 EDT 2006
[Forwarded by request. Dick Hill]
** Apologies for cross postings **
CALL FOR PAPERS
SIGIR 2006 WORKSHOP ON EVALUATING EXPLORATORY SEARCH SYSTEMS
10 AUGUST 2006, UNIVERSITY OF WASHINGTON, SEATTLE, USA
http://umiacs.umd.edu/~ryen/eess
** Deadline for submissions: 15 May 2006 **
1. ORGANIZERS
Ryen White, University of Maryland
Gheorghe Muresan, Rutgers University
Gary Marchionini, University of North Carolina
2. EXPLORATORY SEARCH SYSTEMS (ESS)
Online search has become an increasingly important part of the everyday lives of most computer users. Search engines, bibliographic databases, and digital libraries provide adequate support for those whose information needs are well-defined. However, there are research and development opportunities to improve current search systems so that users can succeed more often in situations where they lack the knowledge or contextual awareness to formulate queries, where they must navigate complex information spaces, where the search task requires browsing and exploration, or where system indexing of the available information is inadequate.
In those situations, people usually submit a tentative query and take things from there, selectively seeking and passively obtaining cues about where their next steps lie, i.e., they are engaged in an "exploratory search."
In some respects, exploratory search can be seen as a specialization of information exploration, a broader class of activities in which new information is sought in a defined conceptual area; exploratory data analysis is another example of an information exploration activity. Exploratory Search Systems (ESS) have been developed to support serendipity, learning, and investigation, and generally allow users to browse the available information.
3. AIMS AND TOPICS
3.1 Aims
Whilst search systems are expanding beyond simple lookup to support complex information-seeking behaviors, there is no established framework for evaluating this new genre of search system. This workshop aims to bring together researchers from communities such as information retrieval, library and information sciences, and human-computer interaction to discuss issues related to the formative and summative evaluation of ESS. The focus in recent years has been on developing new systems and interfaces, not on how to evaluate them. Given the range of technology now available, we must turn our attention toward understanding the behaviors and preferences of searchers. For this reason, the workshop focuses on evaluation, arguably the single most pressing issue in the development of ESS.
The general aims of the workshop are to:
- Define metrics to evaluate ESS performance
- Establish what ESS should do well
- Focus on the searcher, their tasks, goals and behaviors
- Influence ESS designers to think more about evaluation
- Facilitate comparability between experimental sites developing ESS and experiments involving ESS
- Discuss components for the non-interactive evaluation of ESS (e.g., searcher simulations)
3.2 Topics
We encourage participation based on, but not limited to, the following topics:
- learning
- system log analysis
- task-oriented evaluation
- ethnography and field studies
- user performance and behaviors
- searcher simulations
- biometric data as evidence
- role of context
- metrics for ESS evaluation
- ESS evaluation frameworks
- mental models for exploratory search processes
- test collections
- novel exploratory search interfaces and interaction paradigms
4. SUBMISSION
4.1 Important Dates
Submissions: May 15, 2006
Notification of acceptance: June 16, 2006
Final camera ready submissions: June 30, 2006
Workshop: August 10, 2006
4.2 Requirements
Interested participants will be required to submit a 100-word biography and a 2000-word position paper in the SIGIR 2006 conference format.
The workshop will include two paper sessions, with at least one presentation (with discussion) on each of:
- Methodologies (e.g., test collections, simulations, ethnography, task-oriented approaches)
- Metrics (e.g., measures of learning, user performance)
- Models (e.g., interfaces and interaction paradigms, mental models)
Papers that can be placed into these categories will be given preference, as will papers that concentrate on evaluation rather than technology. For each submission, the outcome of the review process will be a recommendation to accept for workshop presentation and working notes, accept for working notes only, or reject.
Submissions should be emailed to ryen at umd.edu by midnight PST on May 15, 2006.