From m.thelwall at blueyonder.co.uk Thu Jun 1 05:24:32 2017
From: m.thelwall at blueyonder.co.uk (thelwall mike)
Date: Thu, 1 Jun 2017 10:24:32 +0100 (BST)
Subject: [Sigmetrics] Alternative Indicators Summer School, 11-12 September 2017 in Wolverhampton, UK
Message-ID: <865856590.1335432.1496309072077.JavaMail.open-xchange@oxbe3.tb.ukmail.iss.as9143.net>

There are still three places left for the Alternative Indicators Summer School. It is aimed at research evaluators, PhD students and researchers who are interested in webometrics and altmetrics. The objective of the summer school is to make the use of alternative indicators possible for routine research evaluations. It is a practical event that will describe how to gather and analyse a variety of web-based indicators.

Provisional Schedule

Day 1: Theory. This will introduce the new theoretical model of alternative indicators and describe methods to evaluate indicators. Attendees may submit a short abstract and, if it is accepted, give a brief presentation on their own relevant work. Day 1 is mainly aimed at active researchers.
10am: Registration and welcome
10.30-11.30: Web Citations for Research Evaluation - Kayvan Kousha
11.45-1pm: Strategies for conducting evaluations; strategies for selecting indicators
2pm-3pm: Comparing the average indicator score of groups of articles
3.15pm-4pm: Comparing the proportion with a non-zero indicator score for groups of articles
4pm-5pm: Attendee presentations

Day 2: Practice. This will start with an overview of a range of alternative indicators and will then introduce Webometric Analyst, a free software suite that gathers data for many alternative indicators and calculates advanced indicators for them. This will be followed by practical workshop sessions during which attendees will use the software to gather alternative indicator data and to calculate and benchmark indicator values. The course will demonstrate how to gather and calculate indicators (benchmarked in the sense of field and year normalised) using free web-based data for commercial impact (patents), educational impact (syllabus mentions, PowerPoint mentions), informational impact (Wikipedia citations) and grey literature impact. Attendees can bring their own data in order to calculate a range of web indicators for it on the day.
10am-11.30: Presentation 1 and Workshop 1: Overview of web indicators and gathering sets of publications
11.45-1pm: Presentation 2 and Workshop 2: Calculating indicators
2pm-3pm: Presentation 3 and Workshop 3: Mendeley indicators
3.15pm-4.15pm: Presentation 4 and Workshop 4: Web indicators
4.15pm-5pm: Workshop 5: Extra considerations; Workshop 6: Advanced tasks; Close

Dates: September 11 and 12, 2017 (Monday and Tuesday).
Cost: Free
Location: University of Wolverhampton, in Wolverhampton city centre.
Course lecturers: Mike Thelwall and Kayvan Kousha of the Statistical Cybermetrics Research Group at the University of Wolverhampton.
Application: Please email Mike Thelwall (m dot thelwall at wlv.ac.uk) with your name, affiliation, and reason for attending (one sentence). Include a title and a 200-word abstract if you would like to give a brief presentation on day 1.
http://cybermetrics.wlv.ac.uk/SummerSchoolSeptember2017.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From editor at jscires.org Thu Jun 1 05:33:16 2017
From: editor at jscires.org (Editor/ J Scientometric Res.)
Date: Thu, 1 Jun 2017 15:03:16 +0530
Subject: [Sigmetrics] JSCIRES Jan-April 2017 Issue is now online | Table of Contents (ToC) includes Memory of Eugene Garfield by Kretschmer
Message-ID: 

*Journal of Scientometric Research, Vol 6, Issue 1, Jan-Apr, 2017*
http://jscires.org/v6/i1

*Table of Contents*

*Perspective Paper*
- Memory of Eugene Garfield | Hildrun Kretschmer, Theo Kretschmer | Journal of Scientometric Research, 6(1):1-5

*Research Articles*
- Age, Gender and Research Productivity: A Study of Speech and Hearing Faculty in India | Ramkumar Subramanian, Narayanasamy Nammalvar | Journal of Scientometric Research, 6(1):6-14
- Linear Regression Analysis of Title Word Count and Article Time Cited using R | Alireza Mohebbi, Yousef Douzandegan | Journal of Scientometric Research, 6(1):15-22
- Exploring "Global Innovation Networks" in Bio clusters: A Case of Genome Valley in Hyderabad, India | Nimita Pandey, Pranav N. Desai | Journal of Scientometric Research, 6(1):23-35
- Quantitative Measuring of Research Output of Engineering Colleges in Karnataka based on Web of Science Database | Aragudige Nagaraja, Gangadhar K. C, Vasantha Kumar | Journal of Scientometric Research, 6(1):36-46

*Research Notes*
- Bibliometric Characteristics and Citation Impact of Funded Research: A Case Study of Tribology | B. Elango | Journal of Scientometric Research, 6(1):47-50
- Actionable Causes of Alzheimer's disease | Ronald N. Kostoff | Journal of Scientometric Research, 6(1):51-53
- Citation Networks Analysis: A New Tool for Understanding Science Dynamics with Implications Towards Science Policy | Manoj Changat, Thara Prabhakaran, Hiran H. Lathabhai | Journal of Scientometric Research, 6(1):54-56

*Book Reviews*
- Rethinking Revolutions: Soyabean, Choupals, and the Changing Countryside in Central India | Poonam Pandey | Journal of Scientometric Research, 6(1):57-59
- Scientifically Yours: Selected Indian Women Scientists | Sharique Hassan Manazir | Journal of Scientometric Research, 6(1):60-61
- Cycles of Invention and Discovery: Rethinking the Endless Frontier | Sanghamitra Das | Journal of Scientometric Research, 6(1):62-64

Read this issue online: http://jscires.org/v6/i1

--
Dr. Sujit Bhattacharya
Editor-in-Chief, Journal of Scientometric Research
[An Official Publication of Wolters Kluwer Health - Medknow, and SciBiolMed.Org]
Professor AcSIR | Academy of Scientific Research & Innovation
Senior Principal Scientist (NISTADS)
Dr. K.S. Krishnan Marg, Pusa Campus, New Delhi-110012, INDIA
Landline: +91-11-25843024 | Mobile: +91-9999020157
Email: editor at jscires.org
Twitter: @JSCIRES http://twitter.com/JSCIRES
Google Scholar Profile: http://scholar.google.co.in/citations?hl=en&user=c3d1afEAAAAJ
Website: www.jscires.org
Online manuscript submission: http://www.journalonweb.com/jscires/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From narinf at cox.net Sat Jun 3 19:23:40 2017
From: narinf at cox.net (Francis Narin)
Date: Sat, 3 Jun 2017 16:23:40 -0700
Subject: [Sigmetrics] "European Paradox or Delusion-Are European Science and Economy Outdated?"
Message-ID: <10443d7f-6963-010a-ef15-130e3b130e8d@cox.net>

Dear Colleagues,

Alonso Rodriguez-Navarro and I have just published a paper with, we feel, significant EU Science Policy implications. Copies are available from either of us. The paper is "European Paradox or Delusion-Are European Science and Economy Outdated?
Alonso Rodriguez-Navarro and Francis Narin, _Science and Public Policy_, 2016, pp. 1-10."

Francis Narin
Narinf at cox.net

Alonso Rodriguez-Navarro
alonso.rodriguez at upm.es
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From haustein.stefanie at gmail.com Mon Jun 5 23:49:30 2017
From: haustein.stefanie at gmail.com (Stefanie Haustein)
Date: Tue, 06 Jun 2017 03:49:30 +0000
Subject: [Sigmetrics] Force11 Scholarly Communication Institute
Message-ID: 

***Apologies for cross-posting***

*The latest trends in communicating your research: Force11 Scholarly Communication Institute (July 31-Aug 4, 2017)*

The Force11 Scholarly Communications Institute (FSCI) is a week-long intensive summer training program in the latest trends in research and data publication (www.force11.org/fsci). Come learn how you can increase your impact and profile from leading Scholarly Communication researchers.

*When:* July 31 - August 4, 2017
*Where:* University of California, San Diego (La Jolla, CA)
*Early bird:* Register before July 8, 2017 to receive a discount

The FORCE11 Scholarly Communications Institute at the University of California, San Diego is a week-long summer training course incorporating intensive coursework, seminar participation, group activities, lectures and hands-on training. Participants will attend courses taught by world-leading experts in scholarly communications. Participants will also have the opportunity to discuss the latest trends and gain expertise in new technologies in the research flow, new forms of publication, new standards and expectations, and new ways of measuring and demonstrating success that are transforming science and scholarship.

*COURSES OFFERED AT FSCI 2017*
- Inside Scholarly Communications Today
- Scholarship in the 21st Century
- Building an Open and Information-rich Research Institute
- Research Reproducibility in Theory and Practice
- When 'Global' is Local: Scholarly Communications in the Global South
- Starting Out: Skills and Tools for Early Career Knowledge Workers
- Data in the Scholarly Communications Life Cycle
- Open Humanities 101
- Data Citation Implementation for Data Repositories
- Open Annotation Tools and Techniques
- Communication and Advocacy for Research Transparency
- Opening the Sandbox: Supporting Student Research as a Gateway to Open Practice
- Opening Up Research and Data
- The Sci-AI Platform: Enabling Literature-Based Discovery
- Perspectives on Peer Review
- Altmetrics: Where Are We Now and Where Are We Headed Next?
- Technology and Tools for Academic Library Teams
- Building Public Participation in Research
- Tips, Tools, and Tactics for Managing Digital Projects in Research and in the Classroom
- Software Citation: Principles, Usage, Benefits, and Challenges
- AuthorCarpentry: A Hands-on Approach to Open Authorship and Publishing
- Applying Design Thinking and User Research to the Scholarly Communication Problem Space
- Identifying How Scientific Papers Are Shared and Who Is Sharing Them on Twitter
- Using the Open Science Framework to Increase Openness and Reproducibility in Research
- Using Wikidata in Research and Curation
- Using New Metrics: A Practical Guide to Increasing the Impact of Research
- How Universities Can Create an Open Access Culture
- Walking the Line Between Advocacy and Activism in Scholarly Communications

*WHO SHOULD ATTEND*

FSCI is intended for anybody who is interested in the evolving world of Scholarly Communication: researchers, librarians, publishers, university and research administrators, funders, students, and postdocs. There are courses for those who know very little about current trends and technologies, as well as courses for those who are interested in more advanced topics. Our courses cover Scholarly Communication from a variety of disciplinary, regional and national perspectives. We offer courses that will be of interest to the scientist, the social scientist, and the humanities researcher. There are courses for those who manage, organise, and publish research as well as for the researchers themselves and end-users.

http://WWW.FORCE11.ORG/FSCI
https://www.force11.org/fsci/promotion

--
______________________________________________________________________________
Stefanie Haustein
Postdoctoral Researcher
Canada Research Chair on the Transformations of Scholarly Communication
École de bibliothéconomie et des sciences de l'information (EBSI)
Université de Montréal
e-mail: haustein.stefanie at gmail.com | stefanie.haustein at umontreal.ca
web: stefaniehaustein.com | crc.ebsi.umontreal.ca
Twitter: @stefhaustein
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lutz.bornmann at gv.mpg.de Wed Jun 7 12:05:27 2017
From: lutz.bornmann at gv.mpg.de (Bornmann, Lutz)
Date: Wed, 7 Jun 2017 16:05:27 +0000
Subject: [Sigmetrics] CRExplorer
In-Reply-To: <26D4503C9B0C8B43A20B92EF238B98AEB4B83961@UM-EXCDAG-A01.um.gwdg.de>
References: <26D4503C9B0C8B43A20B92EF238B98AEB4B83961@UM-EXCDAG-A01.um.gwdg.de>
Message-ID: <26D4503C9B0C8B43A20B92EF238B98AEB4B848EE@UM-EXCDAG-A01.um.gwdg.de>

Dear colleague,

We have published a new version of the CRExplorer (www.crexplorer.net). The CRExplorer uses data from Web of Science (Clarivate Analytics) or Scopus (Elsevier) as input. CRExplorer can be applied for three main objectives: (1) the detection of the knowledge basis (i.e. the origins and historical roots) of research topics, (2) the investigation of influential works published more recently, and (3) the disambiguation of cited references data.

CRExplorer version 1.7.5 was released on May 31, 2017. This version includes the following new features and improvements:

* Citing Publications: Users can inspect the list of citing publications for selected cited references via "View" - "Citing Publications".
* Searching: Users can do keyword searches for cited references (including wildcards such as *).
* Indicators: Three indicators are included which show in how many citing years the cited publication (cited reference) belongs to the 50%, 25%, or 10% most cited publications, compared to all other cited publications (cited references) that appeared in the same cited year (see the illustrative sketch after this list).
* CSV output formats: CRExplorer offers different CSV-based output formats (graph data, cited references and/or citing publications).
* Copy + Paste: Users can copy selected cited references to the clipboard (Ctrl+C) and paste them into other programs (e.g., Excel).
* New Chart layout: Besides the standard chart (JFreeChart), CRExplorer now employs a new web-based, interactive chart type (HighCharts). Users can switch between the types in the "File" - "Settings" menu.
* User Interface: CRExplorer's GUI is now based on JavaFX.
* Handbook: A handbook is available which explains the elements and functions of the program.
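To make the new indicators more concrete, here is a minimal sketch in Python of how a "top 50%/25%/10% in a citing year" count could be computed. This is only an illustration, not CRExplorer's actual code: the function name, the data layout, and the simple rank-based cutoff (with crude tie handling) are assumptions made for this example.

def top_share_years(cites_by_year, peer_counts_by_year, shares=(0.50, 0.25, 0.10)):
    """Count in how many citing years a cited reference belongs to the top
    50%/25%/10% most cited references among all references from the same
    cited (publication) year.

    cites_by_year: {citing_year: citations of the reference in that year}
    peer_counts_by_year: {citing_year: citation counts of all references
                          published in the same cited year}
    """
    result = {share: 0 for share in shares}
    for year, count in cites_by_year.items():
        peers = sorted(peer_counts_by_year.get(year, []), reverse=True)
        if not peers:
            continue
        for share in shares:
            # citation count at the rank that marks the top `share` of the peers
            cutoff_rank = max(1, int(round(share * len(peers))))
            if count >= peers[cutoff_rank - 1]:
                result[share] += 1
    return result

# Example: a reference cited 12 times in 2010 and 7 times in 2011, compared
# with the citation counts of all references from the same cited year.
print(top_share_years({2010: 12, 2011: 7},
                      {2010: [30, 12, 9, 4, 1, 0], 2011: [22, 15, 7, 3]}))

CRExplorer's own definition may differ in detail; the handbook is the authoritative reference.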
Best,
Lutz

---------------------------------------

Dr. Dr. habil. Lutz Bornmann
Division for Science and Innovation Studies
Administrative Headquarters of the Max Planck Society
Hofgartenstr. 8
80539 Munich
Tel.: +49 89 2108 1265
Mobil: +49 170 9183667
Email: bornmann at gv.mpg.de
WWW: www.lutz-bornmann.de
ResearcherID: http://www.researcherid.com/rid/A-3926-2008
ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From haustein.stefanie at gmail.com Wed Jun 7 13:49:56 2017
From: haustein.stefanie at gmail.com (Stefanie Haustein)
Date: Wed, 07 Jun 2017 17:49:56 +0000
Subject: [Sigmetrics] Call for contributions: altmetrics17
Message-ID: 

***Apologies for cross-posting***

*altmetrics17. The dependencies of altmetrics*

altmetrics17 is part of the altmetrics workshop series organized since 2011 and will take place in conjunction with the 4th Altmetrics Conference (4:AM), at Ryerson University in Toronto on *26 September 2017*.

This year's workshop will focus on the dependencies of altmetrics. Altmetrics are heavily shaped, if not completely driven, by data availability, technical affordances of underlying platforms and data providers. Against this background, the altmetrics17 workshop will have a special focus on the dependencies of altmetrics and their potential effects on altmetric research, the role of altmetrics in research evaluation and the effects on scholarly communication in general. The workshop particularly invites contributions that address the workshop's theme directly or indirectly, analyze effects of the dependencies, and propose solutions and alternative frameworks in which to study altmetrics.

*Call for contributions*

We are soliciting empirical and theoretical contributions for short presentations and as a basis for discussions, which will be the main focus of the altmetrics17 workshop. Submissions can focus on empirical analyses, novel theoretical frameworks, original datasets or represent a position paper. The goal of the workshop is to discuss, exchange and foster collaboration on altmetrics between researchers and practitioners. Contributions will be curated by the altmetrics17 committee for their relevance and technical soundness and selected for short presentations.

*How to submit*

Please provide an extended abstract (max. 1,000 words) presenting your altmetrics research contribution and highlighting particular issues you would like to discuss with other workshop participants. Abstracts need to be submitted via EasyChair by *31 July 2017*. Please include a link to any relevant artifact (e.g., a dataset, plots, slidedeck) you wish to present and discuss, after archiving it via an appropriate repository (e.g., Dryad, figshare, GitHub, SlideShare, etc.).

More information can be found on the altmetrics17 website and on Twitter.

--
______________________________________________________________________________
Stefanie Haustein
Postdoctoral Researcher
Canada Research Chair on the Transformations of Scholarly Communication
École de bibliothéconomie et des sciences de l'information (EBSI)
Université de Montréal
e-mail: haustein.stefanie at gmail.com | stefanie.haustein at umontreal.ca
web: stefaniehaustein.com | crc.ebsi.umontreal.ca
Twitter: @stefhaustein
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From loet at leydesdorff.net Thu Jun 8 00:15:57 2017
From: loet at leydesdorff.net (Loet Leydesdorff)
Date: Thu, 8 Jun 2017 06:15:57 +0200
Subject: [Sigmetrics] "European Paradox or Delusion-Are European Science and Economy Outdated?"
In-Reply-To: <10443d7f-6963-010a-ef15-130e3b130e8d@cox.net>
References: <10443d7f-6963-010a-ef15-130e3b130e8d@cox.net>
Message-ID: 

Dear Fran,

Thank you for the paper. It is interesting, but raises the following question. Given the economic integration within the EU, one can meaningfully speak about a "European economy." However, a European science or publication system cannot be inferred. (For example: Is the European Union Becoming a Single Publication System? *Scientometrics* 47(2) (2000) 265-280.) Of course, one can study the European set as if it were a system, but policy implications can then easily be mistaken. Some nations are doing better than the USA, but in specific domains. (As you indicate for China.) Do I misread?

Best,
Loet

On Sun, Jun 4, 2017 at 1:23 AM, Francis Narin wrote:
> Dear Colleagues,
>
> Alonso Rodriguez-Navarro and I have just published a paper with, we feel,
> significant EU Science Policy implications. Copies are available from
> either of us. The paper is
>
> "European Paradox or Delusion-Are European Science and Economy
> Outdated? Alonso Rodriguez-Navarro and Francis Narin, *Science and Public
> Policy*, 2016, pp 1-10."
>
> Francis Narin
> Narinf at cox.net
>
> Alonso Rodriguez-Navarro
> alonso.rodriguez at upm.es
>
> _______________________________________________
> SIGMETRICS mailing list
> SIGMETRICS at mail.asis.org
> http://mail.asis.org/mailman/listinfo/sigmetrics
>
--
Loet Leydesdorff
Professor Emeritus, University of Amsterdam
Amsterdam School of Communications Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From d.herrmannova at gmail.com Fri Jun 9 16:16:15 2017
From: d.herrmannova at gmail.com (Drahomira Herrmannova)
Date: Fri, 9 Jun 2017 16:16:15 -0400
Subject: [Sigmetrics] Proceedings of the 1st Workshop on Scholarly Web Mining (SWM 2017)
Message-ID: 

Dear members of the Sigmetrics mailing list,

We are delighted to share with you the recently published proceedings of the 1st Workshop on Scholarly Web Mining, which can be accessed at http://dl.acm.org/citation.cfm?id=3057148. The workshop took place in conjunction with the 2017 Web Search and Data Mining Conference (http://www.wsdm-conference.org/2017/) in Cambridge, United Kingdom. The aim of the workshop was to bring together people from different backgrounds interested in analyzing and mining scholarly data available via web and social media sources using various approaches such as query log mining, graph analysis, text mining, etc. The program included presentations on community detection, subject classification, recommender systems for scholarly publications and research evaluation.

We hope that you will find the publications interesting.

Kind regards,
---
Drahomira Herrmannova
ASTRO Intern @ Oak Ridge National Laboratory, TN, USA
PhD Student @ Knowledge Media Institute, The Open University, UK
http://drahomira.net

From loet at leydesdorff.net Sun Jun 11 08:02:34 2017
From: loet at leydesdorff.net (Loet Leydesdorff)
Date: Sun, 11 Jun 2017 14:02:34 +0200
Subject: [Sigmetrics] Economic and Technological Complexity
Message-ID: <003701d2e2aa$9ca8edc0$d5fac940$@leydesdorff.net>

Inga A.
Ivanova, Øivind Strand, Duncan Kushnir, and Loet Leydesdorff, Economic and Technological Complexity: A Model Study of Indicators of Knowledge-based Innovation Systems, Technological Forecasting and Social Change 120 (July 2017) 77-89; doi: 10.1016/j.techfore.2017.04.007.
Free access until July 26, 2017 at https://authors.elsevier.com/a/1VANd98SGayOP

Highlights
- Patent Complexity index (PatCI) is introduced.
- Patent complexity for 45 countries for 2000-2014 is estimated.
- Interaction between Economic Complexity and Patent Complexity generates the Triple Helix Complexity Index (THCI).
- A new method of measuring complexity is proposed.
- Complexity of an economy during the period 2000-2014 for the 45 countries is measured.

* Apologies for cross-postings

_____
Loet Leydesdorff
Professor, University of Amsterdam
Amsterdam School of Communication Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
Associate Faculty, SPRU, University of Sussex; Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing; Visiting Fellow, Birkbeck, University of London;
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Philipp.Mayr-Schlegel at gesis.org Mon Jun 19 06:05:52 2017
From: Philipp.Mayr-Schlegel at gesis.org (Mayr-Schlegel, Philipp)
Date: Mon, 19 Jun 2017 10:05:52 +0000
Subject: [Sigmetrics] CFP: Special Issue on "Bibliometric-enhanced Information Retrieval" in the Springer journal Scientometrics
Message-ID: 

== Open Call for Papers ==

As announced at the 5th International Workshop on Bibliometric-enhanced Information Retrieval (BIR2017) @ECIR2017, we are preparing a special issue on "Bibliometric-enhanced IR" in the Springer journal Scientometrics.

=== Important Dates for the Special Issue ===
- Paper submission deadline: June 30, 2017
- First notification: July 30, 2017
- Revision submission: August 30, 2017
- Second notification: September 30, 2017
- Final version submission: October 30, 2017

=== Introduction ===
Bibliometric techniques are not yet widely used to enhance retrieval processes in search systems, although they offer value-added effects for users. In this workshop series we explore how statistical modelling of scholarship, such as Bradfordizing, network analysis of coauthorship networks, or simple citation graphs, can improve retrieval services for specific communities, as well as for large, cross-domain collections like Mendeley. This workshop series aims to raise awareness of the missing link between Information Retrieval (IR) and bibliometrics/scientometrics and to create a common ground for the incorporation of bibliometric-enhanced services into retrieval at the scholarly search engine interface. See the proceedings of the former BIR workshops at ECIR 2014, ECIR 2015, ECIR 2016, JCDL 2016 and ECIR 2017.

=== Topics ===
To support the previously described goals, the special issue topics include (but are not limited to) the following:
- IR for digital libraries and scientific information portals
- IR for scientific domains, e.g. social sciences, life sciences etc.
- Information Seeking Behaviour
- Bibliometrics, citation analysis and network analysis for IR
- Query expansion and relevance feedback approaches
- Science Modelling (both formal and empirical)
- Task-based user modelling, interaction, and personalisation
- (Long-term) Evaluation methods and test collection design
- Collaborative information handling and information sharing
- Classification, categorisation and clustering approaches
- Information extraction (including topic detection, entity and relation extraction)
- Recommendations based on explicit and implicit user feedback
- Recommendation for scholarly papers, reviewers, citations and publication venues
- (Social) Book Search

=== Submission Details ===
Authors of accepted papers at the workshop are invited to submit extended versions to a Special Issue on "Bibliometric-enhanced IR" to be published in the journal Scientometrics. This call for papers is moreover open to other authors who have not attended the workshop but are working on "Bibliometric-enhanced IR" topics (see the list of topics above). Before submitting, authors need to consult the "Authors instructions" page on the Springer website. If you submit your paper to Scientometrics, please select the following article type "S.I. : IR-2017". For any questions, please contact the workshop organizers (main contact: philipp.mayr(at)gesis(dot)org).

Best regards,
Philipp

--
Dr. Philipp Mayr
Team Leader
GESIS - Leibniz Institute for the Social Sciences
Unter Sachsenhausen 6-8, D-50667 Köln, Germany
Tel: + 49 (0) 221 / 476 94 -533
Email: philipp.mayr at gesis.org
Web: http://www.gesis.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.thelwall at blueyonder.co.uk Tue Jun 20 03:57:52 2017
From: m.thelwall at blueyonder.co.uk (thelwall mike)
Date: Tue, 20 Jun 2017 08:57:52 +0100 (BST)
Subject: [Sigmetrics] Three funded scientometrics/altmetrics/statistics PhDs
Message-ID: <1562275026.1881660.1497945472235.JavaMail.open-xchange@oxbe17.tb.ukmail.iss.as9143.net>

Three funded scientometrics/altmetrics/statistics PhDs are being offered at the University of Wolverhampton, UK:
http://www.jobs.ac.uk/job/BCE955/phd-studentship-in-open-research-data-metrics
http://www.jobs.ac.uk/job/BCE949/phd-studentship
http://www.jobs.ac.uk/job/BCE950/phd-studentship-in-foundational-statistics
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lutz.bornmann at gv.mpg.de Wed Jun 21 08:26:22 2017
From: lutz.bornmann at gv.mpg.de (Bornmann, Lutz)
Date: Wed, 21 Jun 2017 12:26:22 +0000
Subject: [Sigmetrics] New paper
Message-ID: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de>

Dear colleague,

You might be interested in the following paper:

Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data

Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and rewards for future work. The use of JIFs in this way has been heavily criticized, however.
Using a large data set with many thousands of publication profiles of individual researchers, this study tests the ability of the JIF (in its normalized variant) to identify, at the beginning of their careers, those candidates who will be successful in the long run. Instead of bare JIFs and citation counts, the metrics used here are standardized according to Web of Science subject categories and publication years. The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who published papers later on with a citation impact above or below average in a field and publication year - not only in the short term, but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered. Available at: https://arxiv.org/abs/1706.06515 Best, Lutz --------------------------------------- Dr. Dr. habil. Lutz Bornmann Division for Science and Innovation Studies Administrative Headquarters of the Max Planck Society Hofgartenstr. 8 80539 Munich Tel.: +49 89 2108 1265 Mobil: +49 170 9183667 Email: bornmann at gv.mpg.de WWW: www.lutz-bornmann.de ResearcherID: http://www.researcherid.com/rid/A-3926-2008 ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.gunn at gmail.com Wed Jun 21 15:28:03 2017 From: william.gunn at gmail.com (William Gunn) Date: Wed, 21 Jun 2017 12:28:03 -0700 Subject: [Sigmetrics] New paper In-Reply-To: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> Message-ID: Hi Lutz, I've read your paper with interest & I think the analysis is well done, though I have to say pre-registration of your study would have strengthened the findings, given the small effect sizes you report. I had a few questions & would be grateful for any response: The main question I had was if you plan to do any follow-up work to disentangle the correlation between presence at an elite institution, publication in a high IF journal, and higher mean or total normalized citations. It seems to me, not being as familiar with the trends among indicators as you, that you have provided nearly equal support for two different ways of picking early investigators likely to be productive: picking them according to Q1 as you describe or picking the ones which are at elite institutions early in their career (as well as picking according to number of papers). Just wondering if you're planning to try to get at causality in some way among these interrelated factors? Other things that occurred to me during reading: Why do you think profiles manually created by researchers will be better than profiles automatically generated and then edited? Instead of using publication early in the career and publication late in career to define a cohort which presumably published continuously, couldn't you write a query, since you have the data, to actually select only those who have indeed published continuously? Am I correct that the main difference between the three figures is that there's a smaller time window in 2 than 1 and 3 than 2? 
Could you explain the reversion in mean citations of the upper cohorts over time in terms of the divided attention allocated to the increased overall publication output? In other words, could it be that as the overall number of publications grows, attention gets further divided and mean citation rates fall? Would you expect to see the same results using CiteScore? Again, grateful for any response! William Gunn +1 (650) 614-1749 http://synthesis.williamgunn.org/about/ On Wed, Jun 21, 2017 at 5:26 AM, Bornmann, Lutz wrote: > Dear colleague, > > > > You might be interested in the following paper: > > > > Can the Journal Impact Factor Be Used as a Criterion for the Selection of > Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data > > > > Early in researchers' careers, it is difficult to assess how good their > work is or how important or influential the scholars will eventually be. > Hence, funding agencies, academic departments, and others often use the > Journal Impact Factor (JIF) of where the authors have published to assess > their work and provide resources and rewards for future work. The use of > JIFs in this way has been heavily criticized, however. Using a large data > set with many thousands of publication profiles of individual researchers, > this study tests the ability of the JIF (in its normalized variant) to > identify, at the beginning of their careers, those candidates who will be > successful in the long run. Instead of bare JIFs and citation counts, the > metrics used here are standardized according to Web of Science subject > categories and publication years. The results of the study indicate that > the JIF (in its normalized variant) is able to discriminate between > researchers who published papers later on with a citation impact above or > below average in a field and publication year - not only in the short term, > but also in the long term. However, the low to medium effect sizes of the > results also indicate that the JIF (in its normalized variant) should not > be used as the sole criterion for identifying later success: other > criteria, such as the novelty and significance of the specific research, > academic distinctions, and the reputation of previous institutions, should > also be considered. > > > > Available at: https://arxiv.org/abs/1706.06515 > > > > Best, > > > > Lutz > > > > --------------------------------------- > > > > Dr. Dr. habil. Lutz Bornmann > > Division for Science and Innovation Studies > > Administrative Headquarters of the Max Planck Society > > Hofgartenstr. 8 > > 80539 Munich > > Tel.: +49 89 2108 1265 <+49%2089%2021081265> > > Mobil: +49 170 9183667 <+49%20170%209183667> > > Email: bornmann at gv.mpg.de > > WWW: www.lutz-bornmann.de > > ResearcherID: http://www.researcherid.com/rid/A-3926-2008 > > ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann > > > > _______________________________________________ > SIGMETRICS mailing list > SIGMETRICS at mail.asis.org > http://mail.asis.org/mailman/listinfo/sigmetrics > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.bornmann at gv.mpg.de Thu Jun 22 04:32:57 2017 From: lutz.bornmann at gv.mpg.de (Bornmann, Lutz) Date: Thu, 22 Jun 2017 08:32:57 +0000 Subject: [Sigmetrics] New paper In-Reply-To: References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> Message-ID: <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> Dear William, Many thanks for your comments! 
Please find my answers below: From: William Gunn [mailto:william.gunn at gmail.com] Sent: Wednesday, June 21, 2017 9:28 PM To: Bornmann, Lutz Cc: SCISIP at LISTSERV.NSF.GOV; SIGMETRICS (sigmetrics at mail.asis.org) Subject: Re: [Sigmetrics] New paper Hi Lutz, I've read your paper with interest & I think the analysis is well done, though I have to say pre-registration of your study would have strengthened the findings, given the small effect sizes you report. I had a few questions & would be grateful for any response: The main question I had was if you plan to do any follow-up work to disentangle the correlation between presence at an elite institution, publication in a high IF journal, and higher mean or total normalized citations. It seems to me, not being as familiar with the trends among indicators as you, that you have provided nearly equal support for two different ways of picking early investigators likely to be productive: picking them according to Q1 as you describe or picking the ones which are at elite institutions early in their career (as well as picking according to number of papers). Just wondering if you're planning to try to get at causality in some way among these interrelated factors? It would be definitively interesting to undertake follow-up studies (and to consider further variables, such as institutions or disciplines). These can (will) be done by ourselves, but I hope that other people will do this, too. Other things that occurred to me during reading: Why do you think profiles manually created by researchers will be better than profiles automatically generated and then edited? In the paper, we explain this as follows: ?RID provides a possible solution to the author ambiguity problem within the scientific community. The problem of polysemy means, in this context, that multiple authors are merged in a single identifier; the problem of synonymy entails multiple identifiers being available for a single author (Boyack, Klavans, Sorensen, & Ioannidis, 2013). Each researcher is assigned a unique identifier in order to manage his or her publication list. The difference between this and similar services provided by Elsevier within the Scopus database is that Elsevier automatically manages the publication profiles of researchers (authors), with the profiles being able to be manually revised. With RID, researchers themselves take the initiative, create a profile, and manage their publication lists. Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier)?. Instead of using publication early in the career and publication late in career to define a cohort which presumably published continuously, couldn't you write a query, since you have the data, to actually select only those who have indeed published continuously? We will publish further results with additional data. It would be definitively interesting to classify the researchers into different groups (as you recommend). Am I correct that the main difference between the three figures is that there's a smaller time window in 2 than 1 and 3 than 2? Yes, this is correct. Could you explain the reversion in mean citations of the upper cohorts over time in terms of the divided attention allocated to the increased overall publication output? In other words, could it be that as the overall number of publications grows, attention gets further divided and mean citation rates fall? 
An interesting interpretation! Your interpretation might be correct, if the impact of publications is mainly triggered by the authors? names and less by the content of the single papers and/or if the authors have published several similar papers which can be simultaneously cited. Would you expect to see the same results using CiteScore? Yes, both metrics measure journal impact similarly. The most important thing is to use these metrics in normalized variants. Again, grateful for any response! William Gunn +1 (650) 614-1749 http://synthesis.williamgunn.org/about/ On Wed, Jun 21, 2017 at 5:26 AM, Bornmann, Lutz > wrote: Dear colleague, You might be interested in the following paper: Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and rewards for future work. The use of JIFs in this way has been heavily criticized, however. Using a large data set with many thousands of publication profiles of individual researchers, this study tests the ability of the JIF (in its normalized variant) to identify, at the beginning of their careers, those candidates who will be successful in the long run. Instead of bare JIFs and citation counts, the metrics used here are standardized according to Web of Science subject categories and publication years. The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who published papers later on with a citation impact above or below average in a field and publication year - not only in the short term, but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered. Available at: https://arxiv.org/abs/1706.06515 Best, Lutz --------------------------------------- Dr. Dr. habil. Lutz Bornmann Division for Science and Innovation Studies Administrative Headquarters of the Max Planck Society Hofgartenstr. 8 80539 Munich Tel.: +49 89 2108 1265 Mobil: +49 170 9183667 Email: bornmann at gv.mpg.de WWW: www.lutz-bornmann.de ResearcherID: http://www.researcherid.com/rid/A-3926-2008 ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann _______________________________________________ SIGMETRICS mailing list SIGMETRICS at mail.asis.org http://mail.asis.org/mailman/listinfo/sigmetrics -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From loet at leydesdorff.net Thu Jun 22 06:02:48 2017 From: loet at leydesdorff.net (Loet Leydesdorff) Date: Thu, 22 Jun 2017 12:02:48 +0200 Subject: [Sigmetrics] New paper In-Reply-To: <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> Message-ID: <011501d2eb3e$b4c5d750$1e5185f0$@leydesdorff.net> Dear Lutz, The inference from the journal level to the individual remains vulnerable as an ecological fallacy: 1. One cannot conclude from correlations to causality; 2. Should the ANOVA not be Bonferroni-corrected? These weak correlations may be non-signifcant. 3. Are you able to specify the chance that the prediction is wrong in an individual case (like a hiring decision)? Best, Loet _____ Loet Leydesdorff Professor, University of Amsterdam Amsterdam School of Communication Research (ASCoR) loet at leydesdorff.net ; http://www.leydesdorff.net/ Associate Faculty, SPRU, University of Sussex; Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing; Visiting Fellow, Birkbeck, University of London; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en From: SIGMETRICS [mailto:sigmetrics-bounces at asist.org] On Behalf Of Bornmann, Lutz Sent: Thursday, June 22, 2017 10:33 AM To: William Gunn Cc: SCISIP at listserv.nsf.gov; Richard Williams ; SIGMETRICS (sigmetrics at mail.asis.org) Subject: Re: [Sigmetrics] New paper Dear William, Many thanks for your comments! Please find my answers below: From: William Gunn [mailto:william.gunn at gmail.com] Sent: Wednesday, June 21, 2017 9:28 PM To: Bornmann, Lutz Cc: SCISIP at LISTSERV.NSF.GOV ; SIGMETRICS (sigmetrics at mail.asis.org ) Subject: Re: [Sigmetrics] New paper Hi Lutz, I've read your paper with interest & I think the analysis is well done, though I have to say pre-registration of your study would have strengthened the findings, given the small effect sizes you report. I had a few questions & would be grateful for any response: The main question I had was if you plan to do any follow-up work to disentangle the correlation between presence at an elite institution, publication in a high IF journal, and higher mean or total normalized citations. It seems to me, not being as familiar with the trends among indicators as you, that you have provided nearly equal support for two different ways of picking early investigators likely to be productive: picking them according to Q1 as you describe or picking the ones which are at elite institutions early in their career (as well as picking according to number of papers). Just wondering if you're planning to try to get at causality in some way among these interrelated factors? It would be definitively interesting to undertake follow-up studies (and to consider further variables, such as institutions or disciplines). These can (will) be done by ourselves, but I hope that other people will do this, too. Other things that occurred to me during reading: Why do you think profiles manually created by researchers will be better than profiles automatically generated and then edited? In the paper, we explain this as follows: ?RID provides a possible solution to the author ambiguity problem within the scientific community. 
The problem of polysemy means, in this context, that multiple authors are merged in a single identifier; the problem of synonymy entails multiple identifiers being available for a single author (Boyack, Klavans, Sorensen, & Ioannidis, 2013). Each researcher is assigned a unique identifier in order to manage his or her publication list. The difference between this and similar services provided by Elsevier within the Scopus database is that Elsevier automatically manages the publication profiles of researchers (authors), with the profiles being able to be manually revised. With RID, researchers themselves take the initiative, create a profile, and manage their publication lists. Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier)?. Instead of using publication early in the career and publication late in career to define a cohort which presumably published continuously, couldn't you write a query, since you have the data, to actually select only those who have indeed published continuously? We will publish further results with additional data. It would be definitively interesting to classify the researchers into different groups (as you recommend). Am I correct that the main difference between the three figures is that there's a smaller time window in 2 than 1 and 3 than 2? Yes, this is correct. Could you explain the reversion in mean citations of the upper cohorts over time in terms of the divided attention allocated to the increased overall publication output? In other words, could it be that as the overall number of publications grows, attention gets further divided and mean citation rates fall? An interesting interpretation! Your interpretation might be correct, if the impact of publications is mainly triggered by the authors? names and less by the content of the single papers and/or if the authors have published several similar papers which can be simultaneously cited. Would you expect to see the same results using CiteScore? Yes, both metrics measure journal impact similarly. The most important thing is to use these metrics in normalized variants. Again, grateful for any response! William Gunn +1 (650) 614-1749 http://synthesis.williamgunn.org/about/ On Wed, Jun 21, 2017 at 5:26 AM, Bornmann, Lutz > wrote: Dear colleague, You might be interested in the following paper: Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and rewards for future work. The use of JIFs in this way has been heavily criticized, however. Using a large data set with many thousands of publication profiles of individual researchers, this study tests the ability of the JIF (in its normalized variant) to identify, at the beginning of their careers, those candidates who will be successful in the long run. Instead of bare JIFs and citation counts, the metrics used here are standardized according to Web of Science subject categories and publication years. 
The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who published papers later on with a citation impact above or below average in a field and publication year - not only in the short term, but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered. Available at: https://arxiv.org/abs/1706.06515 Best, Lutz --------------------------------------- Dr. Dr. habil. Lutz Bornmann Division for Science and Innovation Studies Administrative Headquarters of the Max Planck Society Hofgartenstr. 8 80539 Munich Tel.: +49 89 2108 1265 Mobil: +49 170 9183667 Email: bornmann at gv.mpg.de WWW: www.lutz-bornmann.de ResearcherID: http://www.researcherid.com/rid/A-3926-2008 ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann _______________________________________________ SIGMETRICS mailing list SIGMETRICS at mail.asis.org http://mail.asis.org/mailman/listinfo/sigmetrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.gunn at gmail.com Thu Jun 22 15:37:36 2017 From: william.gunn at gmail.com (William Gunn) Date: Thu, 22 Jun 2017 12:37:36 -0700 Subject: [Sigmetrics] New paper In-Reply-To: <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> Message-ID: Thanks very much for the responses. One follow-up, if I may. You state: Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier)?. But I don't see any evidence for the assertion that the lists will probably be more reliable. I'm asking because it seems rather counterintuitive that an automatically generated list that can be edited by an author would be better than a list manually created by an author. Indeed, at Mendeley we have author profiles that are manually created & we're moving to automatically adding publications to them, using Scopus, because the lists are often incomplete. William Gunn +1 (650) 614-1749 http://synthesis.williamgunn.org/about/ On Jun 22, 2017 1:34 AM, "Bornmann, Lutz" wrote: Dear William, Many thanks for your comments! Please find my answers below: *From:* William Gunn [mailto:william.gunn at gmail.com] *Sent:* Wednesday, June 21, 2017 9:28 PM *To:* Bornmann, Lutz *Cc:* SCISIP at LISTSERV.NSF.GOV; SIGMETRICS (sigmetrics at mail.asis.org) *Subject:* Re: [Sigmetrics] New paper Hi Lutz, I've read your paper with interest & I think the analysis is well done, though I have to say pre-registration of your study would have strengthened the findings, given the small effect sizes you report. I had a few questions & would be grateful for any response: The main question I had was if you plan to do any follow-up work to disentangle the correlation between presence at an elite institution, publication in a high IF journal, and higher mean or total normalized citations. 
It seems to me, not being as familiar with the trends among indicators as you, that you have provided nearly equal support for two different ways of picking early investigators likely to be productive: picking them according to Q1 as you describe or picking the ones which are at elite institutions early in their career (as well as picking according to number of papers). Just wondering if you're planning to try to get at causality in some way among these interrelated factors? It would be definitively interesting to undertake follow-up studies (and to consider further variables, such as institutions or disciplines). These can (will) be done by ourselves, but I hope that other people will do this, too. Other things that occurred to me during reading: Why do you think profiles manually created by researchers will be better than profiles automatically generated and then edited? In the paper, we explain this as follows: ?RID provides a possible solution to the author ambiguity problem within the scientific community. The problem of polysemy means, in this context, that multiple authors are merged in a single identifier; the problem of synonymy entails multiple identifiers being available for a single author (Boyack, Klavans, Sorensen, & Ioannidis, 2013). Each researcher is assigned a unique identifier in order to manage his or her publication list. The difference between this and similar services provided by Elsevier within the Scopus database is that Elsevier automatically manages the publication profiles of researchers (authors), with the profiles being able to be manually revised. With RID, researchers themselves take the initiative, create a profile, and manage their publication lists. Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier)?. Instead of using publication early in the career and publication late in career to define a cohort which presumably published continuously, couldn't you write a query, since you have the data, to actually select only those who have indeed published continuously? We will publish further results with additional data. It would be definitively interesting to classify the researchers into different groups (as you recommend). Am I correct that the main difference between the three figures is that there's a smaller time window in 2 than 1 and 3 than 2? Yes, this is correct. Could you explain the reversion in mean citations of the upper cohorts over time in terms of the divided attention allocated to the increased overall publication output? In other words, could it be that as the overall number of publications grows, attention gets further divided and mean citation rates fall? An interesting interpretation! Your interpretation might be correct, if the impact of publications is mainly triggered by the authors? names and less by the content of the single papers and/or if the authors have published several similar papers which can be simultaneously cited. Would you expect to see the same results using CiteScore? Yes, both metrics measure journal impact similarly. The most important thing is to use these metrics in normalized variants. Again, grateful for any response! 
William Gunn +1 (650) 614-1749 <(650)%20614-1749> http://synthesis.williamgunn.org/about/ On Wed, Jun 21, 2017 at 5:26 AM, Bornmann, Lutz wrote: Dear colleague, You might be interested in the following paper: Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and rewards for future work. The use of JIFs in this way has been heavily criticized, however. Using a large data set with many thousands of publication profiles of individual researchers, this study tests the ability of the JIF (in its normalized variant) to identify, at the beginning of their careers, those candidates who will be successful in the long run. Instead of bare JIFs and citation counts, the metrics used here are standardized according to Web of Science subject categories and publication years. The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who published papers later on with a citation impact above or below average in a field and publication year - not only in the short term, but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered. Available at: https://arxiv.org/abs/1706.06515 Best, Lutz --------------------------------------- Dr. Dr. habil. Lutz Bornmann Division for Science and Innovation Studies Administrative Headquarters of the Max Planck Society Hofgartenstr. 8 80539 Munich Tel.: +49 89 2108 1265 <+49%2089%2021081265> Mobil: +49 170 9183667 <+49%20170%209183667> Email: bornmann at gv.mpg.de WWW: www.lutz-bornmann.de ResearcherID: http://www.researcherid.com/rid/A-3926-2008 ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann _______________________________________________ SIGMETRICS mailing list SIGMETRICS at mail.asis.org http://mail.asis.org/mailman/listinfo/sigmetrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.bornmann at gv.mpg.de Mon Jun 26 02:37:54 2017 From: lutz.bornmann at gv.mpg.de (Bornmann, Lutz) Date: Mon, 26 Jun 2017 06:37:54 +0000 Subject: [Sigmetrics] New paper In-Reply-To: <011501d2eb3e$b4c5d750$1e5185f0$@leydesdorff.net> References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> <011501d2eb3e$b4c5d750$1e5185f0$@leydesdorff.net> Message-ID: <26D4503C9B0C8B43A20B92EF238B98AEB4BA4EAC@UM-EXCDAG-A01.um.gwdg.de> Dear Loet and William, We still have your open questions/comments to our study ?Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data? (https://arxiv.org/abs/1706.06515). Comment by William: But I don't see any evidence for the assertion that the lists will probably be more reliable. 
I'm asking because it seems rather counterintuitive that an automatically generated list that can be edited by an author would be better than a list manually created by an author. Indeed, at Mendeley we have author profiles that are manually created & we're moving to automatically adding publications to them, using Scopus, because the lists are often incomplete.

Answer: The problem is that many Scopus profiles are not edited by the authors. In my opinion, it would be helpful if Elsevier provided the information whether a publication list had been manually (and continuously) edited or not.

1. Comment by Loet: One cannot conclude from correlations to causality.

Answer: There could be a causal relationship, along the lines of the Matthew effect: those who have early success are given more (resources, grants, good students, whatever), which makes them have even more success later. Or, it could be a spurious relationship: the qualities that make people publish in top journals early on may cause them to publish successfully later (in terms of citations). But, even if a relationship is spurious, it doesn't mean that it can't be used for selection and prediction. (E.g., if your big toe starts hurting and then it rains, that doesn't mean your toe caused it to rain! But the same atmospheric relationships that caused it to rain may have caused your toe to hurt, so your toe can be a good predictor of the weather even if the relationship isn't causal.)

2. Comment by Loet: Should the ANOVA not be Bonferroni-corrected? These weak correlations may be non-significant.

Answer: This correction is, as a rule, necessary for the multiple pairwise comparisons which might follow the ANOVA. However, we abstained from calculating these comparisons. Even if we took the unusual and questionable step of applying Bonferroni, the results would continue to be statistically highly significant.

3. Comment by Loet: Are you able to specify the chance that the prediction is wrong in an individual case (like a hiring decision)?

Answer: We are certainly not saying these relationships are deterministic. While early success is correlated with later success, we do not say it guarantees it, and we caution against only relying on the JIF.

From: loet at leydesdorff.net [mailto:leydesdorff at gmail.com] On Behalf Of Loet Leydesdorff Sent: Thursday, June 22, 2017 12:03 PM To: Bornmann, Lutz; 'William Gunn' Cc: SCISIP at listserv.nsf.gov; 'Richard Williams'; sigmetrics at mail.asis.org Subject: RE: [Sigmetrics] New paper

Dear Lutz,

The inference from the journal level to the individual remains vulnerable as an ecological fallacy:

1. One cannot conclude from correlations to causality;
2. Should the ANOVA not be Bonferroni-corrected? These weak correlations may be non-significant.
3. Are you able to specify the chance that the prediction is wrong in an individual case (like a hiring decision)?
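To make the Bonferroni point from the answers above concrete, here is a minimal sketch of the correction as it would be applied to the p-values of pairwise comparisons following an ANOVA; the p-values are invented for illustration and are not results from the study.

```python
# Minimal sketch (hypothetical p-values): Bonferroni correction for multiple
# pairwise comparisons. Each raw p-value is multiplied by the number of
# comparisons (capped at 1.0) and compared with the significance level.
p_values = [0.0001, 0.0004, 0.0008, 0.03, 0.2]  # invented pairwise p-values
m = len(p_values)                               # number of comparisons
alpha = 0.05

for p in p_values:
    adjusted = min(p * m, 1.0)
    print(f"raw p = {p:.4f}, Bonferroni-adjusted p = {adjusted:.4f}, "
          f"significant at {alpha}: {adjusted < alpha}")
```

With five comparisons, only raw p-values below 0.01 survive the correction at the 0.05 level, which illustrates why results with very small p-values remain significant after a Bonferroni adjustment.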
Best, Loet

________________________________
Loet Leydesdorff
Professor, University of Amsterdam
Amsterdam School of Communication Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
Associate Faculty, SPRU, University of Sussex; Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing; Visiting Fellow, Birkbeck, University of London; http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

From: SIGMETRICS [mailto:sigmetrics-bounces at asist.org] On Behalf Of Bornmann, Lutz Sent: Thursday, June 22, 2017 10:33 AM To: William Gunn Cc: SCISIP at listserv.nsf.gov; Richard Williams; SIGMETRICS (sigmetrics at mail.asis.org) Subject: Re: [Sigmetrics] New paper

Dear William, Many thanks for your comments! Please find my answers below:

From: William Gunn [mailto:william.gunn at gmail.com] Sent: Wednesday, June 21, 2017 9:28 PM To: Bornmann, Lutz Cc: SCISIP at LISTSERV.NSF.GOV; SIGMETRICS (sigmetrics at mail.asis.org) Subject: Re: [Sigmetrics] New paper

Hi Lutz, I've read your paper with interest & I think the analysis is well done, though I have to say pre-registration of your study would have strengthened the findings, given the small effect sizes you report. I had a few questions & would be grateful for any response: The main question I had was if you plan to do any follow-up work to disentangle the correlation between presence at an elite institution, publication in a high IF journal, and higher mean or total normalized citations.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From william.gunn at gmail.com Mon Jun 26 14:37:16 2017
From: william.gunn at gmail.com (William Gunn)
Date: Mon, 26 Jun 2017 11:37:16 -0700
Subject: [Sigmetrics] New paper
In-Reply-To: <26D4503C9B0C8B43A20B92EF238B98AEB4BA4EAC@UM-EXCDAG-A01.um.gwdg.de>
References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> <011501d2eb3e$b4c5d750$1e5185f0$@leydesdorff.net> <26D4503C9B0C8B43A20B92EF238B98AEB4BA4EAC@UM-EXCDAG-A01.um.gwdg.de>
Message-ID:

Please see my comments below.

On Sun, Jun 25, 2017 at 11:37 PM, Bornmann, Lutz wrote:
> Comment by William: But I don't see any evidence for the assertion that the lists will probably be more reliable. I'm asking because it seems rather counterintuitive that an automatically generated list that can be edited by an author would be better than a list manually created by an author. Indeed, at Mendeley we have author profiles that are manually created & we're moving to automatically adding publications to them, using Scopus, because the lists are often incomplete.
>
> Answer: The problem is that many Scopus profiles are not edited by the authors. In my opinion, it would be helpful if Elsevier provided the information whether a publication list had been manually (and continuously) edited or not.

Thanks for the response, but I'm asking what evidence there is that a collection of manually created profiles will be more accurate than an automatically generated one. Errors do exist in automatically generated profiles, but they also exist in manually created ones. The question is which has more errors per profile, and at the level of the entire collection, which are more complete and correct. It seems like you're assuming that manually created ones will be both more complete and correct, whereas at Mendeley we have evidence that that's not a valid assumption. Therefore, any evidence you have to justify your assumption would be appreciated.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From lutz.bornmann at gv.mpg.de Tue Jun 27 00:16:18 2017
From: lutz.bornmann at gv.mpg.de (Bornmann, Lutz)
Date: Tue, 27 Jun 2017 04:16:18 +0000
Subject: [Sigmetrics] New paper
In-Reply-To: References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> <011501d2eb3e$b4c5d750$1e5185f0$@leydesdorff.net> <26D4503C9B0C8B43A20B92EF238B98AEB4BA4EAC@UM-EXCDAG-A01.um.gwdg.de>
Message-ID: <261BBCFC-D542-4865-89A0-0712D008E159@gv.mpg.de>

It would be definitely interesting to study empirically the quality of available publication lists. However, it is best practice in bibliometrics that publication lists of single researchers which are used for research evaluation purposes are validated by the researchers themselves. Thus, I expect higher quality lists from databases for which I know that researchers have produced/controlled their lists.
Sent from my iPad

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From william.gunn at gmail.com Tue Jun 27 00:20:10 2017
From: william.gunn at gmail.com (William Gunn)
Date: Mon, 26 Jun 2017 21:20:10 -0700
Subject: [Sigmetrics] New paper
In-Reply-To: References: <26D4503C9B0C8B43A20B92EF238B98AEB4B9BFB5@UM-EXCDAG-A01.um.gwdg.de> <26D4503C9B0C8B43A20B92EF238B98AEB4B9E1A7@UM-EXCDAG-A01.um.gwdg.de> <011501d2eb3e$b4c5d750$1e5185f0$@leydesdorff.net> <26D4503C9B0C8B43A20B92EF238B98AEB4BA4EAC@UM-EXCDAG-A01.um.gwdg.de> <261BBCFC-D542-4865-89A0-0712D008E159@gv.mpg.de>
Message-ID:

Curious how it became a best practice without empirical evidence to recommend it, but nevertheless, I think you've given me a great idea for a research project.

William Gunn +1 (650) 614-1749 http://synthesis.williamgunn.org/about/

On Jun 26, 2017 9:16 PM, "Bornmann, Lutz" wrote: It would be definitely interesting to study empirically the quality of available publication lists. However, it is best practice in bibliometrics that publication lists of single researchers which are used for research evaluation purposes are validated by the researchers themselves. Thus, I expect higher quality lists from databases for which I know that researchers have produced/controlled their lists.
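The exchange above asks which kind of profile has more errors per profile and which is more complete at the level of the whole collection. As a minimal sketch of how such a comparison could be set up, the following compares a hypothetical database profile against an author's own publication list in terms of correctness and completeness; all identifiers are invented.

```python
# Minimal sketch (invented identifiers): comparing a database profile with an
# author's (assumed complete) personal publication list. Precision captures
# "errors per profile" (wrongly attributed papers), recall captures completeness.
gold_standard = {"pub01", "pub02", "pub03", "pub04", "pub05"}  # author's own list
profile       = {"pub01", "pub02", "pub06"}                    # e.g. an automatically built profile

true_hits = profile & gold_standard
precision = len(true_hits) / len(profile)        # share of profile entries that are correct
recall    = len(true_hits) / len(gold_standard)  # share of the oeuvre the profile covers

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```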
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From edelgado at ugr.es Thu Jun 29 08:56:01 2017
From: edelgado at ugr.es (Emilio Delgado López-Cózar)
Date: Thu, 29 Jun 2017 14:56:01 +0200
Subject: [Sigmetrics] "Classic papers" a step further in the bibliometric exploitation of Google Scholar
Message-ID:

Dear colleagues,

Google Scholar has recently launched a new product called "Classic Papers". This product displays the top 10 most cited English-language articles published in 2006 in 252 subject categories assigned by them. The total number of items shown is 2,515.

After giving a brief overview of Eugene Garfield's contributions to the issue of identifying and studying the most cited scientific articles, manifested in the creation of his Citation Classics, the main characteristics and features of this new service, as well as its main strengths and weaknesses, are addressed. You may access it from: https://doi.org/10.13140/RG.2.2.35729.22880/1

I hope you find it of interest.

Kind regards

Emilio Delgado López-Cózar
Facultad de Comunicación y Documentación
Universidad de Granada
http://scholar.google.com/citations?hl=es&user=kyTHOh0AAAAJ
https://www.researchgate.net/profile/Emilio_Delgado_Lopez-Cozar
http://googlescholardigest.blogspot.com.es

Dubitando ad veritatem pervenimus (Cicerón, De officiis. A. 451...)
Contra facta non argumenta
A fructibus eorum cognoscitis eos (San Mateo 7, 16)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Isabelle.Dorsch at uni-duesseldorf.de Fri Jun 30 05:25:13 2017
From: Isabelle.Dorsch at uni-duesseldorf.de (Isabelle Dorsch)
Date: Fri, 30 Jun 2017 11:25:13 +0200
Subject: [Sigmetrics] SIGMETRICS Digest, Vol 23, Issue 8 - New paper (Bornmann, Lutz)
In-Reply-To: References:
Message-ID: <13d6c246cb23d8f8808e385237e6fa0d@uni-duesseldorf.de>

Dear Lutz Bornmann,

of course, this is an interesting research topic. I examined publication lists in databases (like WoS and Scopus) and personal publication lists by the authors themselves: https://link.springer.com/article/10.1007/s11192-017-2416-9

RELATIVE VISIBILITY OF AUTHORS' PUBLICATIONS IN DIFFERENT INFORMATION SERVICES

Publication hit lists of authors, institutes, scientific disciplines etc. within scientific databases like Web of Science or Scopus are often used as a basis for scientometric analyses and evaluations of these authors, institutes etc. However, such information services do not necessarily cover all publications of an author.
The purpose of this article is to introduce a re-interpreted scientometric indicator called "visibility," which is the share of the number of an author's publications on a certain information service relative to the author's entire œuvre based upon his/her probably complete personal publication list. To demonstrate how the indicator works, scientific publications (from 2001 to 2015) of the information scientists Blaise Cronin (N = 167) and Wolfgang G. Stock (N = 152) were collected and compared with their publication counts in the scientific information services ACM, ECONIS, Google Scholar, IEEE Xplore, Infodata eDepot, LISTA, Scopus, and Web of Science, as well as the social media services Mendeley and ResearchGate. For almost all information services, the visibility amounts to less than 50%. The introduced indicator represents a more realistic view of an author's visibility in databases than the currently applied absolute number of hits in those databases.

> It would be definitely interesting to study empirically the quality of available publication lists. However, it is best practice in bibliometrics that publication lists of single researchers which are used for research evaluation purposes are validated by the researchers themselves. Thus, I expect higher quality lists from databases for which I know that researchers have produced/controlled their lists.

Kind regards,
Isabelle Dorsch
--
Isabelle Dorsch, B.A.
Dept. of Information Science
Heinrich Heine University Düsseldorf
Bldg 24.53, Level 01, Room 87
Universitätsstraße 1
D-40225 Düsseldorf, Germany
Tel. +49 211 81-10803

-------------- next part -------------- An HTML attachment was scrubbed... URL:
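The "visibility" indicator described in the abstract above is simply the share of an author's (assumed complete) personal publication list that a given information service covers. A minimal sketch, with invented publication identifiers and invented coverage figures:

```python
# Minimal sketch of the "visibility" indicator (all identifiers are invented):
# visibility = publications covered by a service / publications in the author's
# complete personal publication list.
personal_list = {"pub01", "pub02", "pub03", "pub04", "pub05", "pub06"}

coverage = {
    "Web of Science": {"pub01", "pub03"},
    "Scopus":         {"pub01", "pub02", "pub03"},
    "Google Scholar": {"pub01", "pub02", "pub03", "pub05", "pub06"},
}

for service, covered in coverage.items():
    visibility = len(covered & personal_list) / len(personal_list)
    print(f"{service}: visibility = {visibility:.0%}")
```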