How to evaluate individual researchers

Bornmann, Lutz lutz.bornmann at GV.MPG.DE
Wed Oct 16 07:25:41 EDT 2013


Dear Loet,

Many thanks for your feedback.

I think one could write a book on the topic “how to evaluate a single scientist.” A book would have room for many more aspects than we dealt with in our paper. We have concentrated on the technical aspects of research evaluation. Indeed, a second paper could follow on the many topics you mentioned (ethical, political, etc.).

Our paper focuses on publication output, success (or failure) in publishing in good journals, and citation impact. I think it is reasonable to evaluate on the journal level as well – provided that journal metrics are not used as proxies for the impact of single papers. There exist scientists who publish in good journals even though the impact of their individual papers is not very high. For these cases, it is important to have metrics which reflect the good performance on the journal level.

It is an advantage of the Normalized Journal Position that it can be calculated very simply with data from the JCR. Furthermore, it is one of the few journal metrics which is field-normalized. There are also interesting variants available: for example, the q1 indicator (SCImago). The advantage of q1 is that one can work with an expected value of 25%.
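To make the calculation concrete, here is a minimal sketch of how such a journal-position metric might be computed from JCR category data. The function names and the exact convention (rank divided by category size, averaged over multiple categories, as mentioned later in this thread) are assumptions for illustration, not the paper's definitive formulas:

```python
def normalized_journal_position(rank, n_journals):
    """Position of a journal within one JCR subject category,
    normalized to (0, 1]. Here rank 1 = highest Journal Impact
    Factor, so lower values indicate better-placed journals
    (an assumed convention; the paper may scale it differently)."""
    if not 1 <= rank <= n_journals:
        raise ValueError("rank must lie between 1 and n_journals")
    return rank / n_journals

def njp_multi_category(category_positions):
    """Average the normalized position over several categories,
    for journals listed in more than one JCR category
    (averaging is the approach discussed in this thread)."""
    positions = [normalized_journal_position(r, n)
                 for r, n in category_positions]
    return sum(positions) / len(positions)
```

Under the same convention, the q1 indicator mentioned above would correspond to asking whether a paper's journal has a normalized position of at most 0.25, with 25% as the expected share under random placement.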

Of course, our set of metrics is not “written in stone.” They are justified recommendations. One can use other metrics, especially if one is an expert like you.

Best,

Lutz

From: loet at leydesdorff.net [mailto:leydesdorff at gmail.com] On Behalf Of Loet Leydesdorff
Sent: Wednesday, October 16, 2013 9:52 AM
To: 'ASIS&T Special Interest Group on Metrics'
Cc: 'Werner Marx'; Bornmann, Lutz
Subject: RE: [SIGMETRICS] How to evaluate individual researchers

Dear Lutz and Werner,

I read your paper with interest. You plead for using a balanced set of indicators as partial perspectives. That seems all very reasonable.

A bit unexpected is your strong plea for using the Category Ranks of Thomson Reuters instead of the JIF. You call these “Normalized Journal Positions.” In the case of more than a single category for a journal, you average them. I don’t understand how one can subsequently use these journal-level measures at the level of individual papers. As expected values against which one can test individual scores? Is there empirical evidence that this journal measure is much better than a number of others (e.g., fractional counting, I3, the Eigenfactor, SJR2, SNIP2, etc.)?

The title also suggests a number of ethical issues. What are the rights of the evaluated, for example, in terms of the transparency of the data and methods? Is this different for public (academic and governmental) research institutions such as your own than in the case of a commercial contract on the market? In the latter case it seems up to the employer how to use the report, but in the case of academic contributions one could expect a code of conduct regarding issues such as data and transparency. How does one handle errors in the data?

Let me provide an example. A few years ago, our faculty was evaluated by CWTS in Leiden. During this process, each of us was asked to check a list provided by CWTS. I noted that one of my papers was misclassified in the data (WoS): although it was a citable item (anonymously refereed), it was classified as a non-citable editorial. This could not be corrected.

I don’t mind, but what if one is on tenure track? These issues also come up at institutional levels, but they are more pronounced at the individual level. What are the rights of those who are evaluated? Does one perhaps need a lawyer? Is there an appeal process with the database producers and/or the analysts? How does the Max Planck Society handle these issues? There may be real damages; who is accountable? We know the pitfalls of these evaluations.

Best,
Loet


From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Bornmann, Lutz
Sent: Tuesday, October 15, 2013 9:30 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: [SIGMETRICS] How to evaluate individual researchers

How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations
Lutz Bornmann, Werner Marx

Although bibliometrics has been a separate research field for many years, there is still no uniformity in the way bibliometric analyses are applied to individual researchers. Therefore, this study aims to set out proposals for how to evaluate individual researchers working in the natural and life sciences. 2005 saw the introduction of the h index, which captures a researcher's productivity and the impact of his or her publications in a single number (h is the number of publications with at least h citations); however, this single number cannot cover the multidimensional complexity of research performance or support inter-personal comparisons. This study therefore includes recommendations for a set of indicators to be used for evaluating researchers. Our proposals relate to the selection of the data on which an evaluation is based, the analysis of the data, and the presentation of the results.
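The parenthetical definition of the h index in the abstract can be computed directly from a list of citation counts. A minimal sketch (the function name is mine; the logic follows the abstract's definition):

```python
def h_index(citations):
    """Largest h such that at least h publications have
    at least h citations each (the definition given in
    the abstract)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            h = position  # this paper still clears the threshold
        else:
            break  # counts only decrease from here on
    return h
```

For example, a researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4: four papers each have at least four citations, but not five papers with at least five.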


available at: http://arxiv.org/abs/1302.3697

---------------------------------------

Dr. Dr. habil. Lutz Bornmann
Division for Science and Innovation Studies
Administrative Headquarters of the Max Planck Society
Hofgartenstr. 8
80539 Munich
Tel.: +49 89 2108 1265
Mobil: +49 170 9183667
Email: bornmann at gv.mpg.de
WWW: www.lutz-bornmann.de
ResearcherID: http://www.researcherid.com/rid/A-3926-2008
