[Sigmetrics] Special section of Journal of Informetrics

Jesper Wiborg Schneider jws at ps.au.dk
Tue May 31 13:05:06 EDT 2016


Dear Loet,

As far as I can see, the current Leiden Ranking is not based on WoS journal subject category normalization procedures; instead, it is based on CWTS’ citing-cited clustering of publications, currently into 4000 micro clusters (and the indicators are also based on the so-called core publications in the database rather than on all publications), see http://www.leidenranking.com/information/indicators.
To me it seems that Ludo and his colleagues at CWTS are indeed trying to come up with other, perhaps better, normalization solutions?
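
For what it is worth, the general idea can be illustrated with a small Python sketch (this is not CWTS's actual algorithm; the publication IDs, links, and citation counts below are made up, and a standard community-detection routine from networkx stands in for the clustering): publications are grouped into clusters from the citing-cited network itself, and citations are then normalized within those clusters rather than within journal subject categories.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical citing-cited links between publications.
links = [("p1", "p2"), ("p2", "p3"), ("p1", "p3"),   # one densely linked group
         ("p4", "p5"), ("p5", "p6"), ("p4", "p6")]   # another densely linked group
citations = {"p1": 12, "p2": 3, "p3": 0, "p4": 40, "p5": 25, "p6": 10}

# Cluster the citation network into "fields" (a stand-in for the micro clusters).
G = nx.Graph(links)
clusters = greedy_modularity_communities(G)

# Normalize each publication's citation count by the mean of its own cluster.
for cluster in clusters:
    mean_c = sum(citations[p] for p in cluster) / len(cluster)
    for p in sorted(cluster):
        print(p, round(citations[p] / mean_c, 2))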

Kind regards - Jesper

_____________________

Jesper W. Schneider
Professor

Danish Centre for Studies in Research & Research Policy,
Department of Political Science
Aarhus University
Bartholins Allé 7
building 1331, room 027
DK-8000 Aarhus C
Denmark

T: +45 8716 5241
C: +45 2029 2781
M: jws at ps.au.dk
Skype: jesper.wiborg.schneider
W: http://pure.au.dk/portal/en/jws@ps.au.dk
_____________________


From: SIGMETRICS [mailto:sigmetrics-bounces at asis.org] On Behalf Of Loet Leydesdorff
Sent: 31 May 2016 18:04
To: Waltman, L.R. <waltmanlr at cwts.leidenuniv.nl>
Cc: sigmetrics at mail.asis.org
Subject: Re: [Sigmetrics] Special section of Journal of Informetrics

Dear Ludo and colleagues,

The Mean Normalized Citation Score (MNCS) was proposed by Waltman et al. (2011a and b) in response to a critique of the previous “crown indicator” (CPP/FCSm; Moed et al., 1995) of the Leiden Center for Science and Technology Studies (CWTS). The old “crown indicator” violated the order of operations, which prescribes that one should first multiply and divide and only thereafter add and subtract (Opthof & Leydesdorff, 2010; cf. Gingras & Larivière, 2011). The new “crown indicator” repaired this problem, but did not sufficiently address two other problems with these “crown indicators”: (1) the use of the mean and (2) the problem of field delineation. Field delineation is needed in evaluation practices because one cannot compare citation scores across different disciplines.
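
To see the order-of-operations point concretely, consider a minimal Python sketch with made-up numbers: the old crown indicator (CPP/FCSm) divides the average citation rate by the average field expectation, whereas MNCS first normalizes each publication by its own field expectation and only then averages; the two procedures generally give different results.

# citations[i] = citations received by publication i (hypothetical numbers)
# expected[i]  = mean citation rate of publication i's field (hypothetical numbers)
citations = [30, 2, 1, 0]
expected  = [10.0, 4.0, 2.0, 1.0]
n = len(citations)

# Old "crown indicator" (CPP/FCSm): normalize the averages.
cpp_fcsm = (sum(citations) / n) / (sum(expected) / n)              # = 1.94

# New "crown indicator" (MNCS): average the per-publication ratios.
mncs = sum(c / e for c, e in zip(citations, expected)) / n         # = 1.00

print(cpp_fcsm, mncs)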

1.     In a reaction to the above discussion, Bornmann & Mutz (2011) proposed to use percentile ranks as a non-parametric alternative to using the means of citation distributions for the normalization. Note that the Science and Engineering Indicators of the U.S. National Science Board have used percentile ranks (top-1%, top-10%, etc.) for decades. Citation distributions are often skewed, and the use of the mean is then inadvisable. At the time (2011), we joined forces in a paper entitled “Turning the Tables on Citation Analysis One More Time: Principles for comparing sets of documents,” warning, among other things, against the use of mean-based indicators as proposed by CWTS (Leydesdorff, Bornmann, Mutz, & Opthof, 2011). Indeed, the Leiden Rankings have provided the top-10% as a category since 2012 (Waltman et al., 2012), but most evaluation practices are still based on MNCS. (A sketch of such a percentile-based indicator follows below this list.)

2.     Field delineation is an unresolved problem in evaluative bibliometrics (Leydesdorff, 2008). Like its predecessor, the new “crown indicator” uses the Web-of-Science Subject Categories (WCs) for “solving” this problem. However, these categories are notoriously flawed: some of them overlap more than others, and journals have been categorized incrementally over the decades. The system itself is a remnant of the early days of the Science Citation Index with some patchwork (Pudovkin & Garfield, 2002: 1113n). In other words, the problem is not solved: many journals are misplaced, and WCs can be heterogeneous. Perhaps the problem is not clearly solvable, because journals are organized horizontally in terms of disciplines and vertically in terms of hierarchies. This leads to a complex system that may not be unambiguously decomposable. The consequent uncertainty in the decomposition can be detrimental to the evaluation (Rafols et al., 2012).
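
As an aside, the percentile alternative mentioned under point 1 can be sketched in a few lines of Python (the function name and numbers are made up, and the threshold is a simple nearest-rank percentile; actual indicators such as the Leiden Ranking's top-10% share handle ties and field normalization more carefully): a publication counts if it belongs to the 10% most highly cited publications of its field, so a few extreme values do not dominate the score as they do with a mean.

def top10_share(unit_citations, field_citations):
    # Share of a unit's publications among the top 10% most cited of the field.
    ranked = sorted(field_citations)
    threshold = ranked[int(0.9 * (len(ranked) - 1))]   # simple 90th-percentile cutoff
    return sum(1 for c in unit_citations if c > threshold) / len(unit_citations)

# A skewed field distribution: the mean is pulled up by one highly cited paper.
field = [0, 0, 1, 1, 2, 2, 3, 5, 8, 120]
unit  = [0, 3, 9, 150]
print(top10_share(unit, field))   # 0.5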

Is the current discussion laying the groundwork for the introduction of a next “crown indicator”? We seem to be caught in a reflexive loop: on the assumption that policy makers and R&D managers ask for reliable indicators, CWTS and other centers need to release updated versions whenever too many flaws become visible in the results. In the meantime, the repertoires have become differentiated: one repertoire in the journals covering “advanced scientometrics improving the indicators,” another in the reports legitimating evaluations based on the “state of the art,” and a third issuing STS-style appeals to principles in evaluation practices (e.g., “the Leiden Manifesto”; Hicks et al., 2015).

References
Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics, 5(1), 228-230.

Garfield, E., Pudovkin, A. I., & Istomin, V. S. (2003). Why do we need algorithmic historiography? Journal of the American Society for Information Science and Technology, 54(5), 400-412.

Gingras, Y., & Larivière, V. (2011). There are neither “king” nor “crown” in scientometrics: Comments on a supposed “alternative” method of normalization. Journal of Informetrics, 5(1), 226-227.

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). The Leiden Manifesto for research metrics. Nature, 520, 429-431.

Leydesdorff, L. (2008). Caveats for the Use of Citation Indicators in Research and Journal Evaluation. Journal of the American Society for Information Science and Technology, 59(2), 278-287.

Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables on citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370-1381.

Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381-422.

Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423-430.

Pudovkin, A. I., & Garfield, E. (2002). Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53(13), 1113-1119.

Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262-1282.

Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011a). Towards a new crown indicator: An empirical analysis. Scientometrics, 87, 467–481.

Waltman, L., Van Eck, N. J., Van Leeuwen, T. N., Visser, M. S., & Van Raan, A. F. J. (2011b). Towards a New Crown Indicator: Some Theoretical Considerations. Journal of Informetrics, 5(1), 37-47.

Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E., Tijssen, R. J., van Eck, N. J., . . . Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419-2432.


On Tue, May 31, 2016 at 7:54 AM, Waltman, L.R. <waltmanlr at cwts.leidenuniv.nl> wrote:
Dear colleagues,

I would like to draw your attention to a special section of Journal of Informetrics on the topic of size-independent indicators in citation analysis. The special section is available at http://www.sciencedirect.com/science/journal/17511577/10/2. It presents a debate about the validity of commonly used scientometric indicators for assessing the scientific performance of research groups, institutions, etc.

An introduction to the debate is provided in the following blog post: https://www.cwts.nl/blog?article=n-q2w274.

Best regards,

Ludo Waltman

Editor-in-Chief
Journal of Informetrics



--
Loet Leydesdorff
Professor Emeritus, University of Amsterdam
Amsterdam School of Communications Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/