How to evaluate individual researchers

Loet Leydesdorff loet at LEYDESDORFF.NET
Wed Oct 16 03:52:16 EDT 2013

Dear Lutz and Werner, 


I read your paper with interest. You plead for using a balanced set of indicators as partial perspectives. That all seems very reasonable.


A bit unexpected is your strong plea for using the Category Ranks of Thomson Reuters instead of the JIF. You call these “Normalized Journal Positions”. When a journal belongs to more than one category, you average them. I don’t understand how one can subsequently use these journal-level measures at the level of individual papers. As expected values against which one can test individual scores? Is there empirical evidence that this journal measure is much better than a number of others (e.g., fractional counting, I3, eigenfactor, SJR2, SNIP2, etc.)?
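For concreteness, here is my reading of how such a measure would be constructed; this is a hypothetical sketch of the idea, not the authors’ exact formula, and the function names and scaling are my own assumptions:

```python
def normalized_journal_position(rank, category_size):
    # Hypothetical reading: the journal's rank within its Thomson Reuters
    # category, scaled to (0, 1]; a lower value means a better position.
    return rank / category_size

def averaged_njp(positions):
    # For a journal assigned to more than one category, the per-category
    # positions are averaged (as I understand the paper's proposal).
    return sum(normalized_journal_position(r, n) for r, n in positions) / len(positions)

# A journal ranked 5th of 50 in one category and 20th of 100 in another:
print(averaged_njp([(5, 50), (20, 100)]))  # 0.15
```

My question stands regardless of the exact scaling: such a number characterizes the journal, and it is unclear to me how it can serve as an expected value for an individual paper.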


The title also suggests a number of ethical issues. What are the rights of the evaluated, for example, in terms of transparency of the data and methods? Is this different for public (academic and governmental) research institutions such as your own than in the case of a commercial contract on the market? In the latter case it seems up to the employer how to use the report, but in the case of academic contributions one could expect a code of conduct regarding issues such as data access and transparency. How does one handle errors in the data?


Let me provide an example. A few years ago, our faculty was evaluated by CWTS in Leiden. During this process, each of us was asked to check a list provided by CWTS. I noted that one of my papers was misclassified in the data (WoS): although it was a citable item (anonymously refereed), it was classified as a non-citable editorial. This could not be corrected.


I don’t mind, but what if one is on a tenure track? These issues also arise at the institutional level, but they are more pronounced at the individual level. What are the rights of those who are evaluated? Does one perhaps need a lawyer? Is there an appeal procedure with the database producers and/or the analysts? How does the Max Planck Society handle these issues? There may be real damages; who is accountable? We know the pitfalls of these evaluations.






From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Bornmann, Lutz
Sent: Tuesday, October 15, 2013 9:30 AM
Subject: [SIGMETRICS] How to evaluate individual researchers


How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations

Lutz Bornmann, Werner Marx


Although bibliometrics has been a separate research field for many years, there is still no uniformity in the way bibliometric analyses are applied to individual researchers. This study therefore aims to set out proposals for how to evaluate individual researchers working in the natural and life sciences. 2005 saw the introduction of the h index, which gives information about a researcher's productivity and the impact of his or her publications in a single number (h is the number of publications with at least h citations); however, it is not possible to cover the multidimensional complexity of research performance or to undertake inter-personal comparisons with this number. This study therefore includes recommendations for a set of indicators to be used for evaluating researchers. Our proposals relate to the selection of data on which an evaluation is based, the analysis of the data and the presentation of the results.
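The h-index definition quoted in the abstract can be sketched directly from its wording; this is a minimal illustration of the definition itself, not code from the paper:

```python
def h_index(citations):
    """Return the largest h such that at least h publications
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4 and 3 times:
# four papers have at least 4 citations, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

As the abstract notes, this single number flattens the multidimensional character of research performance, which is what motivates the proposed set of indicators.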


available at:






Dr. Dr. habil. Lutz Bornmann

Division for Science and Innovation Studies

Administrative Headquarters of the Max Planck Society

Hofgartenstr. 8

80539 Munich

Tel.: +49 89 2108 1265

Mobil: +49 170 9183667

Email: bornmann at


