From rsandusky at gmail.com Wed May 2 09:03:57 2018
From: rsandusky at gmail.com (Robert Sandusky)
Date: Wed, 2 May 2018 09:03:57 -0400
Subject: [Sigdl-l] DataONE Webinar May 8th: What it means to be a Member Node
Message-ID:

Register for the free DataONE webinar: What it means to be a Member Node.

Our next webinar in the 2017/8 DataONE Webinar Series will be held on *Tuesday May 8th at 0900 Pacific / 1000 Mountain / 1100 Central / 1200 Eastern*. This webinar ends our academic calendar and brings us towards our upcoming DataONE Users Group meeting (https://www.dataone.org/dataone-users-group/2018-meeting), where we will be engaging with current and future participants in the DataONE network.

The webinar, titled "*What it means to be a Member Node: Member Nodes share their views*", will showcase some current DataONE members and provide an overview of the DataONE federated network. The webinar will be a panel presentation by DataONE team members (*Dave Vieglais, Monica Ihli, Amy Forrester*) and repository leads (*Mark Servilla - EDI, James Duncan - FEMC, Ken Casey - NCEI*). We hope you can join us.

Register at: https://dataone.zoom.us/webinar/register/WN_jphOUASHQS6WdsxZtnU0JA

Full information can be found at: https://www.dataone.org/upcoming-webinar. Abstract and bios below.

DataONE webinars are recorded and made available online later the same day. You can review previous webinars at: https://www.dataone.org/previous-webinars/2018

Best,
Amber

*Abstract*

DataONE Member Nodes are key to making research data available through DataONE: they are the sites where data are gathered, managed, and stored. As part of the DataONE federation, Member Nodes expose all or portions of their data products by implementing a common set of service interfaces. Member Nodes are typically existing data repositories within the earth science domain and often already fill an important role in their respective communities, supporting data management, curation, discovery, and access functions. These preservation-oriented repositories invest time and resources to join DataONE's persistent, reliable, and sustainable cyberinfrastructure with the common goal of uniting environment-based research through its distributed architecture. The benefits include better visibility and dissemination of their data, long-term data management, and broader community engagement.
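For readers who want a concrete picture of that "common set of service interfaces", the snippet below is a minimal, unofficial sketch (not DataONE project code) that asks the DataONE Coordinating Node for the nodes registered in the federation. It assumes the public v2 REST endpoint https://cn.dataone.org/cn/v2/node described in the DataONE architecture documentation, which returns an XML node list; the exact element names and namespaces are assumptions, so the code matches them namespace-agnostically.

```python
# Unofficial sketch: list DataONE Member Nodes via the Coordinating Node's
# public REST API. Assumes the v2 listNodes endpoint and an XML NodeList
# response containing <node type="mn"> entries; verify against the current
# DataONE architecture documentation before relying on this.
import requests
import xml.etree.ElementTree as ET

CN_LIST_NODES = "https://cn.dataone.org/cn/v2/node"  # CNCore.listNodes

def list_member_nodes() -> None:
    resp = requests.get(CN_LIST_NODES, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # "{*}" matches any XML namespace (Python 3.8+), since the response
    # elements may or may not be namespace-qualified.
    for node in root.findall("{*}node"):
        if node.attrib.get("type") == "mn":  # Member Nodes only
            identifier = node.findtext("{*}identifier", default="?")
            name = node.findtext("{*}name", default="?")
            print(f"{identifier}: {name}")

if __name__ == "__main__":
    list_member_nodes()
```

The Member Node side of the same interface contract is what a repository implements to expose its holdings to the federation.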
*Speaker Bios*

As the Director for Development and Operations at DataONE, *Dave Vieglais* oversees development and implementation of architecture, computer science research, and technological evolution through the activities of the Working Groups and the Cyberinfrastructure CIT, including the staff of full-time developers and post-docs. Dave has extensive experience in developing technical infrastructure for integrating biodiversity information at the global level (e.g. DiGIR, Species Analyst). He also brings significant biodiversity modeling expertise and leadership experience in the Global Biodiversity Information Facility (GBIF) and the Natural Science Collections Alliance.

*Monica Ihli* is a DataONE developer operating from the Center for Information & Communication Studies at the University of Tennessee. She specializes in the systems which integrate contributing Member Nodes into the DataONE Federation.

*Amy Forrester* is the DataONE Member Node Coordinator, located at the University of Tennessee, Knoxville. Her main responsibility is relationship management between DataONE Cyberinfrastructure and both contributing and potential Member Nodes.

*Mark Servilla* is Principal Investigator of EDI and is based at the University of New Mexico. Mark leads the development of the PASTA data repository software. Mark has an MS in Computer Science and a PhD in Earth and Planetary Sciences.

*Jim Duncan* serves as the director of the Forest Ecosystem Monitoring Cooperative, where he strives to improve access to information and monitoring of forested ecosystems in the northeast. He supports Cooperators by making long-term monitoring data on the region's forested ecosystems more accessible, providing needed aggregation and syntheses of disparate data into products that are more useful for seeing and responding to change, and building new regional networks for greater collaboration in monitoring. He also supports interdisciplinary teams in UVM's Rubenstein School of Environment and Natural Resources with spatiotemporal analysis and integration of social and ecological data, and serves on his town's tree board. He previously worked to increase transparency in the oil, gas, and mining sectors by giving decision makers and citizens tools to map and interact with data, including in Mongolia and Ghana.

*Ken Casey* is the Deputy Director of the Data Stewardship Division in the NOAA National Centers for Environmental Information (NCEI) and is currently fulfilling the role of the Director as well. Ken provides leadership and guidance to NCEI staff and sets the technical direction of division activities, projects, and programs. He coordinates across NCEI and with the broader community to promote NCEI as a responsible citizen of the global environmental data management community, leveraging and contributing to relevant activities of that community.

Amber E Budden, PhD
Director for Community Engagement and Outreach
DataONE
University of New Mexico
1312 Basehart SE
Albuquerque NM 87106
cell: 505.205.7675
aebudden at dataone.unm.edu

From songphan at gmail.com Wed May 16 06:41:14 2018
From: songphan at gmail.com (Songphan Choemprayong)
Date: Wed, 16 May 2018 17:41:14 +0700
Subject: [Sigdl-l] Survey: Interest in doctoral education in information field
Message-ID:

--- apologies for cross-posting ---

If you are thinking about or planning to pursue a doctoral degree in an information-related field, please help us by completing a questionnaire at http://bit.ly/chulaphdissurvey

This questionnaire aims to understand public interest in pursuing doctoral education in information-related fields. It should take about 5 minutes or less to complete. No private information is collected, and you may leave the questionnaire at any point.

The questionnaire was developed by the Graduate Program in Information Studies, Chulalongkorn University in Bangkok, Thailand. If you have any questions or comments, please contact libsci at chula.ac.th.

--
Songphan Choemprayong, Ph.D.
Assistant Professor
Department of Library Science
Faculty of Arts
Chulalongkorn University
Bangkok 10330 Thailand
songphan at gmail.com

From conference at icdim.org Wed May 16 07:44:02 2018
From: conference at icdim.org (conference at icdim.org)
Date: Wed, 16 May 2018 17:14:02 +0530
Subject: [Sigdl-l] ICDIM 2018
In-Reply-To:
References:
Message-ID: <960dcc960193f40dd8131f33aa0ebf65@icdim.org>

Thirteenth International Conference on Digital Information Management (ICDIM 2018)
September 24-26, 2018
Berlin, Germany
http://www.icdim.org

Technically co-sponsored by the IEEE Technology and Engineering Management Society

Following the successful earlier conferences in Bangalore (2006), Lyon (2007), London (2008), Michigan (2009), Thunder Bay (2010), Melbourne (2011), Macau (2012), Islamabad (2013), Bangkok (2014), Jeju (2015), Porto (2016), and Fukuoka (2017), the thirteenth event is being organized in Berlin, Germany in 2018.

The International Conference on Digital Information Management is a multidisciplinary conference on digital information management, science, and technology. The principal aim of this conference is to bring together people from academia, research laboratories, and industry, and to offer a collaborative platform for addressing the emerging issues and solutions in digital information science and technology. Digital information technologies are gaining maturity and rapid momentum in adoption across disciplines. The digital community is producing new ways of using digital information technologies for integrating and making sense of data, ranging from real/live streams and simulations to analytics data, in support of knowledge mining.

The conference will feature original research and industrial papers on the theory, design, and implementation of digital information systems, as well as demonstrations, tutorials, workshops, and industrial presentations. The Thirteenth International Conference on Digital Information Management will be held September 24-26, 2018 in Berlin, Germany.

The topics in ICDIM 2018 include but are not confined to the following areas.
Information Retrieval
Data Grids, Data and Information Quality
Big Data Management
Data Warehouses and Data Mining
Web Mining including Web Intelligence and Web 3.0
E-Learning, e-Commerce, e-Business and e-Government
Natural Language Processing
XML and other extensible languages
Web Metrics and its applications
Enterprise Computing
Semantic Web, Ontologies and Rules
Human-Computer Interaction
Artificial Intelligence and Decision Support Systems
Knowledge Management
Ubiquitous Systems
Peer to Peer Data Management
Interoperability
Mobile Data Management
Data Models for Production Systems and Services
Data Exchange issues and Supply Chain
Data Life Cycle in Products and Processes
Case Studies on Data Management, Monitoring and Analysis
Security and Access Control
Information Content Security
Mobile, Ad Hoc and Sensor Network Security
Distributed information systems
Information visualization
Web services
Quality of Service Issues
Multimedia and Interactive Multimedia
Image Analysis and Image Processing
Video Search and Video Mining
Cloud Computing
Intelligent Systems
Artificial Intelligence Applications

SUBMISSIONS AT http://www.icdim.org/submission.html

Co-located Workshops

Fourth Workshop on Internet of Everything (IoE 2018)
Fourth International Workshop on 'Future Big Data' (FBD 2018)
Fourth Workshop on Intelligent Information Systems (IIS 2018)
Seventh Workshop on "Advanced Techniques on Data Analytics and Data Visualization" (ATDADV 2018)
Sixth International Workshop on Data Science (IWDS 2018)

Modified versions of the selected papers will appear in special issues of the following peer-reviewed journals:

1. Journal on Data Semantics
2. Technologies
3. Data Technologies and Applications
4. Webology
5. Journal of Digital Information Management
6. International Journal of Computational Linguistics

Important Dates

Full Paper Submission: July 08, 2018
Notification of Acceptance/Rejection: August 08, 2018
Registration: September 10, 2018
Camera Ready: September 10, 2018
Workshops/Tutorials/Demos: September 13, 2018
Main Conference: September 24-26, 2018

SUBMISSIONS AT http://www.icdim.org/submission.html

Program Committee

General Chairs
Stefan Covaci, Technische Universität Berlin, Germany
Thomas Jell, Siemens, Germany

Program Chairs
Pit Pichappan, Digital Information Research Labs, India & UK
Simon Fong, University of Macau, Macau
Yao-Liang Chung, National Taiwan Ocean University, Taiwan

Co-Chairs
Manabu Ohta, Okayama University, Japan
Robert Bierwolf, IEEE TEMS, Netherlands
Feliz Lustenberger, Espros Photonics Corporation, Switzerland

SUBMISSIONS AT http://www.icdim.org/submission.html

Contact: conference at icdim.org

From ferro at dei.unipd.it Fri May 18 02:37:54 2018
From: ferro at dei.unipd.it (Nicola Ferro)
Date: Fri, 18 May 2018 08:37:54 +0200
Subject: [Sigdl-l] Call for Papers GLARE 2018 co-located with CIKM 2018 - 1st International Workshop on Generalization in Information Retrieval: Can We Predict Performance in New Domains?
Message-ID: <44D0FB0F-EF37-4541-A641-37A8E2311E34@dei.unipd.it>

Apologies for cross-postings.

Call for papers

1st International Workshop on Generalization in Information Retrieval: Can We Predict Performance in New Domains? (GLARE 2018)

co-located with the 27th ACM International Conference on Information and Knowledge Management (CIKM 2018) - 22 October 2018, Turin, Italy

http://glare2018.dei.unipd.it/

AIMS AND SCOPE
--------------------------

Research in IR puts a strong focus on evaluation, with many past and ongoing evaluation campaigns.
However, most evaluations utilize offline experiments with single queries only, while most IR applications are interactive, with multiple queries in a session. Moreover, context (e.g., time, location, access device, task) is rarely considered. Finally, the large variance of search topic difficulty makes performance prediction especially hard.

Several types of prediction may be relevant in IR. One case is that we have a system and a collection, and we would like to know what happens when we move to a new collection, keeping the same kind of task. In another case, we have a system, a collection, and a kind of task, and we move to a new kind of task. A further case is when collections are fluid, and the task must be supported over changing data.

Current approaches to evaluation mean that predictability can be poor. In particular:

- Assumptions or simplifications made for experimental purposes may be of unknown or unquantified validity; they may be implicit.
- Collection scale (in particular, numbers of queries) may be unrealistically small or fail to capture ordinary variability.
- Test collections tend to be specific and to have assumed use-cases; they are rarely as heterogeneous as ordinary search. The processes by which they are constructed may rely on hidden assumptions or properties.
- Test environments rarely explore cases such as poorly specified queries, or the different uses of repeated queries (re-finding versus showing new material versus query exploration, for example). Characteristics such as "the space of queries from which the test cases have been sampled" may be undefined.
- Researchers typically rely on point estimates for the performance measures, instead of giving confidence intervals. Thus, we are not even able to make a prediction about the results for another sample from the same population.
- A related confound is that highly correlated measures (for example, Mean Average Precision (MAP) vs normalized Discounted Cumulative Gain (nDCG)) are reported as if they were independent; while, on the other hand, measures which reflect different quality aspects (such as precision and recall) are averaged (usually with a harmonic mean), thus obscuring their explanatory power.
- Current analysis tools are focused on sensitivity (differences between systems) rather than reliability (consistency over queries). Summary statistics are used to demonstrate differences, but the differences remain unexplained. Averages are reported without analysis of changes in individual queries.

Perhaps the most significant issue is the gap between offline and online evaluation. Correlations between system performance, user behavior, and user satisfaction are not well understood, and offline predictions of changes in user satisfaction continue to be poor because the mapping from metrics to user perceptions and experiences is not well understood.
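To make the point-estimate and correlated-measures confounds above concrete, here is a minimal sketch, not part of the workshop call itself: it bootstraps a 95% confidence interval for MAP from per-topic average precision (AP) scores, and reports the per-topic correlation between AP and nDCG. The per-topic scores are simulated for illustration; in practice they would come from an evaluation tool such as trec_eval.

```python
# Minimal sketch: report a confidence interval for MAP instead of only a
# point estimate, and check how correlated AP and nDCG are across topics.
# Per-topic scores are simulated here; real ones would come from trec_eval.
import random
import statistics

random.seed(0)

# Hypothetical per-topic scores for one system over 50 topics, clipped to
# [0, 1]. nDCG is built to track AP closely, as it often does in practice.
ap = [min(1.0, max(0.0, random.gauss(0.30, 0.15))) for _ in range(50)]
ndcg = [min(1.0, max(0.0, a + random.gauss(0.05, 0.04))) for a in ap]

map_score = statistics.mean(ap)  # the usual point estimate

# Bootstrap: resample topics with replacement, recompute MAP each time,
# and take the 2.5th and 97.5th percentiles as a 95% interval.
resampled = sorted(
    statistics.mean(random.choices(ap, k=len(ap))) for _ in range(10_000)
)
lo, hi = resampled[249], resampled[9749]
print(f"MAP = {map_score:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")

# Per-topic Pearson correlation between AP and nDCG (Python 3.10+):
# values near 1 mean the two measures add little independent evidence.
r = statistics.correlation(ap, ndcg)
print(f"correlation(AP, nDCG) across topics = {r:.2f}")
```

Reporting the interval alongside the mean, and checking inter-measure correlation before treating two measures as independent evidence, addresses two of the confounds listed above.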
TOPICS OF INTEREST
--------------------------------

General areas of interest include, but are not limited to, the following topics:

Measures: We need a better understanding of the assumptions and user perceptions underlying different metrics, as a basis for judging the differences between methods. In particular, the current practice of concentrating on global measures should be replaced by using sets of more specialized metrics, each emphasizing certain perspectives or properties. Furthermore, the relationships between system-oriented and user-/task-oriented evaluation measures should be determined, in order to obtain improved prediction of user satisfaction and attainment of end-user goals.

Performance analysis: Instead of regarding only overall performance figures, we should develop rigorous and systematic evaluation protocols focused on explaining performance differences. Failure and error analysis should aim at identifying general problems, avoiding idiosyncratic behavior associated with characteristics of systems or data under evaluation.

Assumptions: The assumptions underlying our algorithms, evaluation methods, datasets, tasks, and measures should be identified and explicitly formulated. Furthermore, we need strategies for determining how much we are departing from them in new cases.

Application features: The gap between test collections and real-world applications should be reduced. Most importantly, we need to determine the features of datasets, systems, contexts, and tasks that affect the performance of a system.

Performance models: We need to develop models of performance which describe how application features and assumptions affect system performance in terms of the chosen measure, in order to leverage them for prediction of performance.

SUBMISSIONS
--------------------

Papers should be formatted according to the ACM SIG Proceedings Template (http://www.acm.org/publications/proceedings-template). Beyond research papers (4-6 pages), we will solicit short (1 page) position papers from interested participants. Papers will be peer-reviewed by members of the program committee through double-blind review, i.e. authors must be anonymized. Selection will be based on originality, clarity, and technical quality.

Papers should be submitted in PDF format to the following address: https://easychair.org/conferences/?conf=glare2018

Accepted papers will be published online as a volume of the CEUR-WS proceedings series.

ORGANIZERS
--------------------

Ian Soboroff, National Institute of Standards and Technology (NIST), USA, ian.soboroff (at) nist.gov
Nicola Ferro, University of Padua, Italy, ferro (at) dei.unipd.it
Norbert Fuhr, University of Duisburg-Essen, Germany, norbert.fuhr (at) uni-due.de

IMPORTANT DATES
-----------------------------

Submission deadline: July 9, 2018
Notification of acceptance: July 30, 2018
Camera ready: August 27, 2018
Workshop day: October 22, 2018

From rscott at asist.org Fri May 25 13:09:52 2018
From: rscott at asist.org (Rodneikka Scott)
Date: Fri, 25 May 2018 17:09:52 +0000
Subject: [Sigdl-l] This listserv moving to New ASIS&T Community
Message-ID:

ASIS&T will begin transitioning this listserv from its existing software platform to a new and improved community platform: the ASIS&T Community (http://community.asist.org/home). This transition is necessary to allow ASIS&T to comply with the General Data Protection Regulation (GDPR), which goes into effect on May 25, 2018 and requires specific compliance measures that we are not able to achieve using the current system. As a result, communication using this platform has been suspended.

If you are an ASIS&T Member, your subscription will be migrated to a new "eGroup" on the ASIS&T Community site. Not an ASIS&T Member? We have a solution for you. Please keep an eye on your inbox for a message regarding steps to join the ASIS&T Community site.
Here's a quick overview of some of the new features you can expect in the ASIS&T Community:

* Enhanced discussion capabilities. Now you'll receive emails that are more structured and easier to read than a traditional listserv or forum alert.
* Improved Member Directory search. You can find members by name, location, SIG affiliation, area of expertise, work setting, and more.
* Granular privacy controls. You can have complete control over what information you share with members of the community and your contacts.
* Centralized subscription management. You can manage your subscriptions to all discussions in one place. Choose to receive daily digests or real-time emails by group.
* Resource sharing. All attachments posted to discussions are archived in a dedicated Resource Library. You can also add documents to share anytime you want.

You will receive final notification on or before June 22nd once the transition has been completed.

Stay Connected! Learn more about ASIS&T Member Benefits. Join ASIS&T Today.