[Sigmetrics] Special section of Journal of Informetrics

Loet Leydesdorff loet at leydesdorff.net
Tue May 31 12:03:34 EDT 2016


Dear Ludo and colleagues,



The Mean Normalized Citation Score (MNCS) was proposed by Waltman *et al*.
(2011a and b) in response to a critique of the previous “crown indicator”
(CPP/FCSm; Moed *et al*., 1995) of the Leiden Center for Science and
Technology Studies (CWTS). The old “crown indicator” had violated the
order of operations, which prescribes that one should first multiply and
divide and only thereafter add and subtract (Opthof & Leydesdorff, 2010;
cf. Gingras & Larivière, 2011). The new “crown indicator” repaired this
problem, but did not sufficiently address two other problems with these
“crown indicators”: (1) the use of the mean and (2) the problem of field
delineation. Field delineation is needed in evaluation practices because
citation scores cannot be compared directly across disciplines.
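
To make the arithmetic concrete, here is a minimal sketch in Python with
made-up citation counts and field reference values (not CWTS data): the old
indicator divides a sum of citations by a sum of field expectations,
whereas MNCS first normalizes each paper by its field expectation and then
averages the resulting ratios.

# Toy illustration of the two orders of operations (invented numbers).
citations = [0, 1, 2, 50]           # observed citations per paper
expected  = [2.0, 2.0, 10.0, 10.0]  # mean citation rate of each paper's field

# Old "crown indicator" (CPP/FCSm): add first, divide afterwards.
cpp_fcsm = sum(citations) / sum(expected)

# New "crown indicator" (MNCS): divide per paper first, then average.
mncs = sum(c / e for c, e in zip(citations, expected)) / len(citations)

print(cpp_fcsm, mncs)  # about 2.21 versus 1.43 here

The particular numbers are beside the point; they only show that the two
aggregation orders generally yield different values, which is what the
2010 exchange was about.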



1.     In reaction to the above discussion, Bornmann & Mutz (2011)
proposed percentile ranks as a non-parametric alternative to normalizing
by the means of citation distributions. Note that the *Science and
Engineering Indicators* of the U.S. National Science Board have used
percentile ranks (top-1%, top-10%, etc.) for decades. Citation
distributions are often skewed, and the use of the mean is then
inadvisable, as the small sketch below illustrates. At the time (2011), we
joined forces in a paper entitled “Turning the Tables in Citation Analysis
One More Time: Principles for comparing sets of documents,” warning, among
other things, against the use of mean-based indicators as proposed by CWTS
(Leydesdorff, Bornmann, Mutz, & Opthof, 2011). Indeed, the Leiden Ranking
has provided the top-10% as a category since 2012 (Waltman *et al.*, 2012),
but most evaluation practices are still based on MNCS.
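
To illustrate why the skewness matters, a minimal sketch (again with
invented numbers): a single highly cited paper can lift the mean-normalized
score of an entire set, whereas a percentile-based indicator such as the
share of papers in the top-10% of their fields only registers whether a
paper crosses the boundary. The boundary value below is a hypothetical
stand-in for the field-specific top-10% threshold, which in practice is
derived from the citation distribution itself.

# Toy illustration (invented numbers): one outlier drives the mean.
normalized = [0.2, 0.3, 0.4, 0.5, 0.6, 0.4, 0.3, 0.2, 0.5, 40.0]  # c/e per paper
top10_boundary = 3.0  # hypothetical stand-in for the field's top-10% threshold

mncs = sum(normalized) / len(normalized)                                   # 4.34
pp_top10 = sum(v > top10_boundary for v in normalized) / len(normalized)  # 0.10

print(mncs, pp_top10)  # the mean suggests excellence; the top-10% share does not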



2.     Field delineation is an unresolved problem in evaluative
bibliometrics (Leydesdorff, 2008). Like its predecessor, the new “crown
indicator” uses the Web-of-Science Subject Categories (WCs) for “solving”
this problem. However, these categories are notoriously flawed: some of
them overlap more than others, and journals have been categorized
incrementally over the decades. The system itself is a remnant of the
early days of the *Science Citation Index* with some patchwork (Pudovkin &
Garfield, 2002: 1113n). In other words, the problem is not solved: many
journals are misplaced and WCs can be heterogeneous. Perhaps the problem
is not clearly solvable, because the journals are organized horizontally
in terms of disciplines and vertically in terms of hierarchies. This leads
to a complex system that may not be unambiguously decomposable. The
consequent uncertainty in the decomposition can be detrimental to the
evaluation (Rafols *et al*., 2012).



Is the current discussion laying the groundwork for the introduction of a
next “crown indicator”? We seem to be caught in a reflexive loop: on the
assumption that policy makers and R&D managers ask for reliable indicators,
CWTS and other centers need to issue updated versions whenever too many
flaws become visible in the results. In the meantime, the repertoires have
become differentiated: one repertoire in the journals covering “advanced
scientometrics improving the indicators,” another in the reports
legitimating evaluations based on the “state of the art,” and a third
issuing STS-style appeals to principles in evaluation practices (e.g., the
“Leiden Manifesto”; Hicks *et al*., 2015).



*References*

Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of
measuring citation performance: The avoidance of citation (ratio) averages
in field-normalization. *Journal of Informetrics, 5*(1), 228-230.


Garfield, E., Pudovkin, A. I., & Istomin, V. S. (2003). Why do we need
algorithmic historiography? *Journal of the American Society for
Information Science and Technology, 54*(5), 400-412.


Gingras, Y., & Larivière, V. (2011). There are neither “king” nor “crown”
in scientometrics: Comments on a supposed “alternative” method of
normalization. *Journal of Informetrics, 5*(1), 226-227.


Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015).
The Leiden Manifesto for research metrics. *Nature, 520*, 429-431.


Leydesdorff, L. (2008). *Caveats* for the Use of Citation Indicators in
Research and Journal Evaluation. *Journal of the American Society for
Information Science and Technology, 59*(2), 278-287.


Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the
tables in citation analysis one more time: Principles for comparing sets of
documents. *Journal of the American Society for Information Science and
Technology, 62*(7), 1370-1381.


Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric
tools for the assessment of national research performance: Database
description, overview of indicators and first applications.
*Scientometrics, 33*(3), 381-422.


Opthof, T., & Leydesdorff, L. (2010). *Caveats* for the journal and field
normalizations in the CWTS (“Leiden”) evaluations of research performance.
*Journal of Informetrics, 4*(3), 423-430.


Pudovkin, A. I., & Garfield, E. (2002). Algorithmic procedure for finding
semantically related journals. *Journal of the American Society for
Information Science and Technology, 53*(13), 1113-1119.


Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A.
(2012). How journal rankings can suppress interdisciplinary research: A
comparison between innovation studies and business & management. *Research
Policy, 41*(7), 1262-1282.


Waltman, L., Van Eck, N. J., Van Leeuwen, T. N., Visser, M. S., & Van Raan,
A. F. J. (2011a). Towards a new crown indicator: An empirical analysis.
*Scientometrics, 87*, 467-481.


Waltman, L., Van Eck, N. J., Van Leeuwen, T. N., Visser, M. S., & Van Raan,
A. F. J. (2011b). Towards a new crown indicator: Some theoretical
considerations. *Journal of Informetrics, 5*(1), 37-47.


Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E., Tijssen, R. J.,
Van Eck, N. J., . . . Wouters, P. (2012). The Leiden Ranking 2011/2012:
Data collection, indicators, and interpretation. *Journal of the American
Society for Information Science and Technology, 63*(12), 2419-2432.



On Tue, May 31, 2016 at 7:54 AM, Waltman, L.R. <waltmanlr at cwts.leidenuniv.nl>
wrote:

> Dear colleagues,
>
> I would like to draw your attention to a special section of Journal of
> Informetrics on the topic of size-independent indicators in citation
> analysis. The special section is available at
> http://www.sciencedirect.com/science/journal/17511577/10/2. It presents a
> debate about the validity of commonly used scientometric indicators for
> assessing the scientific performance of research groups, institutions, etc.
>
> An introduction into the debate is provided in the following blog post:
> https://www.cwts.nl/blog?article=n-q2w274.
>
> Best regards,
>
> Ludo Waltman
>
> Editor-in-Chief
> Journal of Informetrics



-- 
Loet Leydesdorff
Professor Emeritus, University of Amsterdam
Amsterdam School of Communications Research (ASCoR)
loet at leydesdorff.net;  http://www.leydesdorff.net/