[Sigcr-l] Final call for participation: 17th SIG/CR Classification Research Workshop

Jonathan Furner furner at gseis.ucla.edu
Tue Oct 31 18:42:48 EST 2006


SOCIAL CLASSIFICATION: PANACEA OR PANDORA?
17th Annual ASIS&T SIG/CR Classification Research Workshop
Saturday, November 4, 2006 -- Austin, TX

FINAL CALL FOR PARTICIPATION

OVERVIEW
Researchers, practitioners, and students interested in social  
classification, folksonomies, social tagging, social bookmarking,  
collaborative indexing, collaborative annotation, etc., are invited  
to participate in the 17th ASIS&T SIG/CR Classification Research  
Workshop. Attendees will have the opportunity to contribute to the  
debate by actively participating in the workshop’s open panel sessions.

This workshop will be held at the Hilton Austin, 500 E 4th St,  
Austin, TX, from 8:30am to 5pm on Saturday, November 4, 2006, as part  
of the Annual Meeting of the American Society for Information Science  
and Technology (ASIS&T). It will be the 17th in a series of annual  
workshops organized by ASIS&T's Special Interest Group on  
Classification Research (SIG/CR). Please see the main ASIS&T AM06  
page at http://www.asis.org/Conferences/AM06/index.html for further  
general information about the ASIS&T Annual Meeting, including  
instructions on how to register for the SIG/CR Workshop using the  
online registration form at
https://www.asis.org/Conferences/AM06/am06regform.php. Please see
http://www.slais.ubc.ca/USERS/sigcr/ for
further information about SIG/CR.

***Preprints of the full papers, and abstracts of the posters, are  
available for download by workshop attendees from the SIG/CR website  
at http://www.slais.ubc.ca/USERS/sigcr/events.html.***

AGENDA
8:30 Coffee
9:00 Introduction
9:15 Keynote
10:15 Break
10:30 Panel 1: The Structure of Social Classification
12:00 1-Minute Madness, Poster Session, and Lunch
1:00 Panel 2: Discussion of Posters
1:30 Panel 3: Social Classification of Visual Resources
3:00 Break
3:15 Panel 4: Conceptual Frameworks for Social Classification
4:45 Wrap-Up

KEYNOTE
Tagging: It’s the interface, stupid!
Joseph Busch (Taxonomy Strategies, USA)

PANEL 1: THE STRUCTURE OF SOCIAL CLASSIFICATION
Exploring characteristics of social classification
Xia Lin, Joan E. Beaudoin, Yen Bui, Kaushal Desai, and Tony Moore  
(Drexel University, USA)

Searching the long tail: Hidden structure in social tagging
Emma Tonkin (UKOLN, UK)

Expertise classification: Collaborative classification vs. automatic  
extraction
Toine Bogers, Willem Thoonen, and Antal van den Bosch (Tilburg  
University, The Netherlands)

PANEL 2: POSTER DISCUSSION
Social bookmarking in the enterprise
Michael D. Braly and Geoffrey B. Froh (University of Washington, USA)

Cognitive operations behind tagging for one’s self and tagging for  
others
Judd Butler (Florida State University, USA)

Ranking patterns: A Flickr tagging system pilot study
Janet Capps (Florida State University, USA)

Folksonomies vs. bag-of-words: The evaluation and comparison of  
different types of document representations
Anatoliy Gruzd (University of Illinois at Urbana-Champaign, USA)

Social classification and online job banks: Finding the right words  
to find the right job
Kevin Harrington (Florida State University, USA)

Tag distribution analysis using the power law to evaluate social  
tagging systems: A case study in the Flickr database
Hong Huang (Florida State University, USA)

@toread and cool: Tagging for time, task, and emotion
Margaret E. I. Kipp (University of Western Ontario, Canada)

Ne’er-do-wells in Neverland: Mediation and conflict resolution in  
social classification environments
Chris Landbeck (Florida State University, USA)

Exploratory study of classification tags in terms of cultural  
influences and implications for social classification
Kyoungsik Na (Florida State University, USA)

Folksonomies or fauxsonomies: How social is social bookmarking?
Marina Pluzhenskaia (University of Illinois at Urbana-Champaign, USA)

Shared, persistent user search paths: Social navigation as social  
classification
Robert J. Sandusky (University of Tennessee, Knoxville, USA)

The use of collaborative tagging in public library catalogues
Louise Spiteri (Dalhousie University, Canada)

Using social bookmarks in an academic setting: PennTags
Jennifer Erica Sweda (University of Pennsylvania, USA)

PANEL 3: SOCIAL CLASSIFICATION OF VISUAL RESOURCES
Social classification and folksonomy in art museums: Early data from  
the steve.museum tagger prototype
Jennifer Trant (Archives & Museum Informatics / University of  
Toronto, Canada)

Viewer tagging in art museums: Comparisons to concepts and  
vocabularies of art museum visitors
Martha Kellogg Smith (University of Washington, USA)

User-defined classification on the online photo sharing site  
Flickr ... Or, How I learned to stop worrying and love the million  
typing monkeys
Megan Winget (University of Texas at Austin, USA)

PANEL 4: CONCEPTUAL FRAMEWORKS FOR SOCIAL CLASSIFICATION
An examination of authority in social classification systems
Melanie Feinberg (University of Washington, USA)

A phenomenological framework for the relationship between the  
Semantic Web and user-centered tagging systems
D. Grant Campbell (University of Western Ontario, Canada)

Social tagging and the next steps for indexing
Joseph T. Tennis (University of British Columbia, Canada)

AIMS
The aims of this year's Classification Research Workshop are to  
provide a forum for researchers, practitioners, and users to share  
their knowledge, perspectives, and opinions on social classification  
(SC), and (in the form of the proceedings) to make a lasting and  
authoritative contribution to our understanding of the benefits that  
SC-based systems may provide. In the original call, papers on any  
aspect of the conceptualization and/or evaluation of social  
classification were invited for presentation at the workshop and  
publication in the open-access, peer-reviewed proceedings.

Social classification is a convenient, generic label that may be used  
to refer to any of a number of broadly related processes by which the  
resources in a collection are categorized by multiple people over an  
ongoing period, with the potential result that any given resource  
will come to be represented by a set of labels or descriptors that  
have been generated by different people. The specific processes in  
question include indexing, tagging, bookmarking, annotation, and  
description of kinds that may be characterized as collaborative,  
cooperative, distributed, dynamic, community-based, folksonomic,  
wikified, democratic, user-assigned, or user-generated. The mid-2000s
have seen rapidly growing interest in techniques of this kind for
generating descriptions of resources for the purposes of discovery,
access, and retrieval. Systems that provide automated
support for social classification may be implemented at low cost, and  
are perceived to contribute to the democratization of classification  
by empowering people, who might otherwise remain strictly consumers  
of information, to become information producers.
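
As a purely illustrative sketch of the process just described -- the
names and sample data below are invented, not drawn from any
particular system -- a social classification service can be thought
of as recording (user, resource, tag) triples and aggregating them so
that each resource accumulates descriptors contributed by different
people:

    # Illustrative only: a toy model of social classification as
    # (user, resource, tag) tagging events; the sample data is invented.
    from collections import defaultdict

    taggings = [
        ("alice", "photo-123", "sunset"),
        ("bob",   "photo-123", "beach"),
        ("carol", "photo-123", "sunset"),
        ("bob",   "paper-456", "toread"),
    ]

    # Each resource ends up represented by the union of labels
    # supplied by different people.
    descriptors = defaultdict(set)
    for user, resource, tag in taggings:
        descriptors[resource].add(tag)

    print(dict(descriptors))
    # e.g. {'photo-123': {'sunset', 'beach'}, 'paper-456': {'toread'}}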

Efforts to conduct serious evaluations of the comparative  
effectiveness of such systems have begun, but results are scattered  
and piecemeal. Compared with retrieval systems based on traditional  
methods -- manual or automatic -- of classifying resources, how  
effectively are users of SC-based systems able to find the resources  
that they want? What is the impact on retrieval effectiveness of  
systems designers' decisions to pay limited attention to  
traditionally important components such as vocabulary control, facet  
analysis, and systematic hierarchical arrangement? Current  
implementations of SC tend to shy away, for instance, from imposing  
the kind of vocabulary control on which classification schemes and  
thesauri are conventionally founded: proponents argue that social  
classifiers should be free, as far as possible, to supply precisely  
those class labels that they believe will be useful to searchers in  
the future, whether or not those labels have proven useful in the  
past. But do the advantages that are potentially to be gained from  
allowing classifiers free rein in the choice of labels outweigh those  
that may be obtainable by imposing some form of vocabulary and  
authority control, by offering browsing-based interfaces to  
hierarchically structured vocabularies, by establishing and complying  
with policies for the specificity and exhaustivity of sets of labels,  
and/or by other devices that are designed to improve
classifier-searcher consistency?
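
One way such classifier-searcher consistency might be quantified --
offered here only as an illustrative sketch, not as a method endorsed
by the workshop -- is the overlap (e.g., Jaccard similarity) between
the labels a classifier assigns to a resource and the terms a
searcher later uses to look for it:

    # Illustrative sketch: Jaccard overlap between assigned labels and
    # a searcher's query terms (all names and data are invented).
    def jaccard(labels_a, labels_b):
        a, b = set(labels_a), set(labels_b)
        return len(a & b) / len(a | b) if (a or b) else 0.0

    classifier_tags = {"sunset", "beach", "vacation"}
    searcher_terms  = {"sunset", "ocean"}
    print(jaccard(classifier_tags, searcher_terms))  # 0.25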

Other questions arise as a result of the reliance of SC-based systems  
on volunteer labor. Given the distributed nature of SC, for example,  
how can it be ensured that every resource attracts a critical mass of  
descriptors, rather than just the potentially quirky choices of a
small number of volunteers? Given the self-selection of classifiers,  
how can it be ensured that they are motivated to supply class labels  
that they would expect other searchers to use? In general, are  
reductions in the costs of classification (borne by information  
producers) achieved only at the expense of increases in the costs of  
resource discovery (borne by consumers)?
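
As an illustration of how the "critical mass" question might be
operationalized -- again a sketch only, with an invented threshold
and invented data -- one could count the distinct volunteers who have
described each resource and flag those that fall short:

    # Illustrative sketch: flag resources described by fewer than a
    # chosen number of distinct volunteers (threshold is assumed).
    from collections import defaultdict

    taggings = [  # invented (user, resource, tag) events
        ("alice", "photo-123", "sunset"),
        ("bob",   "photo-123", "beach"),
        ("bob",   "paper-456", "toread"),
    ]

    CRITICAL_MASS = 2  # hypothetical threshold, for illustration only
    contributors = defaultdict(set)
    for user, resource, _tag in taggings:
        contributors[resource].add(user)

    under_described = [r for r, users in contributors.items()
                       if len(users) < CRITICAL_MASS]
    print(under_described)  # ['paper-456']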

PROGRAM COMMITTEE
Hanne Albrechtsen (Institute of Knowledge Sharing, Denmark)
Jack Andersen (Royal School of Library and Information Science, Denmark)
Clare Beghtol (University of Toronto, Canada)
Grant Campbell (University of Western Ontario, Canada)
Jonathan Furner (University of California, Los Angeles, USA) [co-chair]
Barbara Kwasnik (Syracuse University, USA)
Kathryn La Barre (University of Illinois at Urbana-Champaign, USA)
Joseph Tennis (University of British Columbia, Canada) [co-chair]
Douglas Tudhope (University of Glamorgan, UK)





