New Letter to the Editor

Bornmann, Lutz lutz.bornmann at GV.MPG.DE
Mon Mar 30 08:27:46 EDT 2015


Hi Loet,

I agree that we have very good alternatives to the MNCS and to the use of WoS categories. However, the alternatives have their own (mostly practical) weaknesses. Furthermore, it seems that the different normalization methods produce similar results (see http://www.sciencedirect.com/science/article/pii/S1751157715000073).

Perhaps other people on this list can report which normalization method they use as their standard; in my opinion, it would be interesting to know this.

Best,

Lutz

From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
Sent: Monday, March 30, 2015 9:17 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] New Letter to the Editor

PS.

Both discussions, the one about using the mean (as in the MNCS) and the one about using WoS Subject Categories for the normalization, now seem to have stagnated.
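For readers following along, the MNCS under discussion averages field-normalized citation ratios: each paper's citation count is divided by the mean citation rate of its reference set (e.g., its WoS category and publication year). A minimal sketch in Python, with invented numbers rather than real WoS data:

    # Minimal sketch of the MNCS (illustrative numbers, not real WoS data):
    # each paper's citation count is divided by the mean citation rate of
    # its reference set (here simply its field), and the ratios are averaged.
    papers = [
        {"cites": 10, "field": "A"},
        {"cites": 2,  "field": "B"},
        {"cites": 0,  "field": "A"},
    ]
    field_means = {"A": 5.0, "B": 4.0}  # assumed world averages per field

    mncs = sum(p["cites"] / field_means[p["field"]] for p in papers) / len(papers)
    print(mncs)  # (2.0 + 0.5 + 0.0) / 3 ~ 0.83

The two points below take issue with, respectively, the mean in this computation and the field delineation behind the expected values.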


1.       Instead of the mean, one should use percentile rank classes. This was a step in a line of thought in 2010-2011 in which we first criticized the “old” crown indicator and then proposed what was later labeled by CWTS as the MNCS (Opthof & Leydesdorff, 2010; cf. Lundberg, 2007; Waltman et al., 2011). We subsequently moved to percentiles and automated the “Integrated Impact Indicator” (I3), which enables users to define their own percentile rank classes, at http://www.leydesdorff.net/software/i3 (Leydesdorff & Bornmann, 2011a); a sketch of the computation follows below.



Another line of thought was source normalization, or fractional counting of the citations (Zitt & Small, 2008; Moed, 2010; Leydesdorff & Bornmann, 2011b). This was elaborated into the SNIP and then into SNIP2. I mentioned Mingers (2014) because this development now seems to have stalled (does the critique no longer matter?). SJR2 (Guerrero-Bote & Moya-Anegón, 2012), of course, provides an alternative, but nobody can use this indicator outside the institute that constructed it.



In my opinion, I3 and source normalization (fractional counting) of the citations are still good ideas if one does not have WoS in-house through a license. Perhaps this is an argument for what you call “amateur-bibliometrics”. Both are better than taking the mean.
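The sketch announced under point 1: a minimal Python rendering of percentile ranks and I3. The six classes and the weights 1-6 below are only one possible choice (similar to the six percentile rank classes used in the US NSF's Science and Engineering Indicators); the point of I3 is precisely that users can define their own classes.

    from bisect import bisect_left, bisect_right

    def percentile_ranks(citations):
        # Percentile rank of each paper within its reference set:
        # the share of papers in the set with fewer citations (0-100).
        ordered = sorted(citations)
        n = len(ordered)
        return [100.0 * bisect_left(ordered, c) / n for c in citations]

    def i3(citations, boundaries=(50, 75, 90, 95, 99), weights=(1, 2, 3, 4, 5, 6)):
        # I3 = sum over percentile rank classes of
        # (class weight x number of papers in that class).
        counts = [0] * len(weights)
        for rank in percentile_ranks(citations):
            counts[bisect_right(boundaries, rank)] += 1
        return sum(w * c for w, c in zip(weights, counts))

    print(i3([0, 0, 1, 2, 3, 5, 8, 13, 21, 40]))  # -> 18

Unlike the mean, this scheme is robust against the skew of citation distributions: a single highly cited paper can no longer dominate the score of a whole set.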



2.       In principle, SNIP and fractional counting creatively solve the determination of reference sets. The issue is not “normalization” per se, but the specification of an expectation (to be used in the denominator). The institutionalization in Scopus, however, may have been premature; or is there room to move to a SNIP3, and so forth (Waltman et al., 2013)? SNIP may be too technical to be reproduced (or controlled) outside the context of its production. (A simplified sketch of fractional counting follows at the end of this point.)



The determination of reference sets in terms of journals may not work or may not be possible (Rafols & Leydesdorff, 2009): the sets are fuzzy and keep changing. In the Leiden Rankings 2014, CWTS moved to direct clustering of the citations, but the 800+ resulting fields can no longer be validated (Ruiz-Castillo & Waltman, 2015). A disadvantage is that nobody can reproduce the results outside the institute that constructed these “fields”. We know that algorithmic constructs do not necessarily match intellectual classifications. Furthermore, because the delineation is paper-based (instead of journal-based), one would have to update it continuously; thus, the “fields” cannot be reproduced at a later moment in time.



If one is not able to specify an expectation reliably, one may be better advised not to specify one at all. In particular, the specification of uncertain (or erroneous) expectations in research evaluations may have detrimental effects (e.g., Rafols et al., 2012).



We know this also from the discussion about using impact factors for the assessment of individual papers or institutional units across fields. One easily generates error without being able to specify the uncertainty, because the error is not only in the measurement (methodological) but also in the conceptualization (theoretical).
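Finally, the sketch of fractional counting announced under point 2 (again a minimal illustration; the reference counts are invented). Each citation is weighted by one over the number of cited references in the citing paper, so that a citation from a reference-rich field counts for less:

    # Minimal sketch of source-normalized ("fractional") citation counting.
    # Each citation is weighted by 1 / (number of cited references in the
    # citing paper); the reference counts below are invented.
    citing_papers = [
        {"refs": 40, "cites_target": True},   # e.g., a biomedical paper
        {"refs": 10, "cites_target": True},   # e.g., a mathematics paper
        {"refs": 25, "cites_target": False},
    ]

    integer_count = sum(1 for p in citing_papers if p["cites_target"])
    fractional = sum(1.0 / p["refs"] for p in citing_papers if p["cites_target"])
    print(integer_count, fractional)  # 2 whole citations vs. 1/40 + 1/10 = 0.125

Because the weighting uses only the citing papers themselves, no prior field classification is needed, which is what makes the approach attractive when one has no WoS license in-house.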

Best,
Loet

References:
Guerrero-Bote, V. P., & Moya-Anegón, F. (2012). A further step forward in measuring journals’ scientific prestige: The SJR2 indicator. Journal of Informetrics, 6(4), 674-688.
Leydesdorff, L., & Bornmann, L. (2011a). Integrated Impact Indicators (I3) compared with Impact Factors (IFs): An alternative design with policy implications. Journal of the American Society for Information Science and Technology, 62(11), 2133-2146. doi: 10.1002/asi.21609.
Leydesdorff, L., & Bornmann, L. (2011b). How fractional counting affects the Impact Factor: Normalization in terms of differences in citation potentials among fields of science. Journal of the American Society for Information Science and Technology, 62(2), 217-229.
Lundberg, J. (2007). Lifting the crown—citation z-score. Journal of Informetrics, 1(2), 145-154.
Mingers, J. (2014). Problems with SNIP. Journal of Informetrics, 8(4), 890-894.
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265-277.
Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423-430.
Rafols, I., & Leydesdorff, L. (2009). Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects. Journal of the American Society for Information Science and Technology, 60(9), 1823-1835.
Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262-1282.
Ruiz-Castillo, J., & Waltman, L. (2015). Field-normalized citation impact indicators using algorithmically constructed classification systems of science. Journal of Informetrics, 9(1), 102-117.
Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37-47.
Waltman, L., van Eck, N. J., van Leeuwen, T. N., & Visser, M. S. (2013). Some modifications to the SNIP journal impact indicator. Journal of Informetrics, 7(2), 272-285.
Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856-1860.



________________________________
Loet Leydesdorff
Emeritus University of Amsterdam
Amsterdam School of Communications Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
Honorary Professor, SPRU (http://www.sussex.ac.uk/spru/), University of Sussex;
Guest Professor, Zhejiang University (http://www.zju.edu.cn/english/), Hangzhou; Visiting Professor, ISTIC (http://www.istic.ac.cn/Eng/brief_en.html), Beijing;
Visiting Professor, Birkbeck (http://www.bbk.ac.uk/), University of London;
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

From: Loet Leydesdorff [mailto:loet at leydesdorff.net]
Sent: Sunday, March 29, 2015 8:27 PM
To: 'ASIS&T Special Interest Group on Metrics'
Subject: RE: [SIGMETRICS] New Letter to the Editor

In my opinion, the standard indicator in a field is defined by its frequency of professional use (and not by the advantages and disadvantages of the relevant indicators). In other words, if professional bibliometricians (and not amateur-bibliometricians) mostly use the MNCS (based on WoS subject categories), then this is the standard.

Perhaps this is an argument for “amateur-bibliometrics” ☺ because, as you claim, the suggestion of normalization in professional bibliometrics is erroneous most of the time (e.g., Mingers, 2014).

Best,
Loet


Reference:
Mingers, J. (2014). Problems with SNIP. Journal of Informetrics, 8(4), 890-894.


