CWTS Leiden Ranking 2015

Eck, N.J.P. van ecknjpvan at CWTS.LEIDENUNIV.NL
Thu May 21 09:10:36 EDT 2015


Dear colleagues,

Thank you all for your suggestions regarding the field normalization issue. Let me respond to some of your comments:


1. Loet's remark on our use of the term 'field': On the Leiden Ranking website, we use the term 'micro-level field' (see www.leidenranking.com/methodology/fields), which is perhaps more appropriate than just 'field'.

2. Loet's remark on the size of the fields in the Leiden Ranking: The fields are indeed quite small, but this is exactly what we want. For instance, consider scientometric research. With how many publications per year can our own publications as scientometricians reasonably be compared in terms of citation counts? Probably a few hundred, and at most about one thousand. In the 2014 edition of the Leiden Ranking, there were 800 fields, and scientometrics was part of a larger field that also included, for instance, library science. This leads to questionable comparisons between publications dealing with quite different research topics. In the 2015 edition of the ranking, one of the 4000 fields is focused entirely on scientometrics (and closely related topics). This field includes somewhat more than 1000 publications per year in the period 2010-2013, making it one of the larger of the 4000 fields. We believe this is approximately the right level of aggregation for citation-based comparisons. It could even be argued that a scientometrics field of about 1000 publications per year is still a bit large (so we may in fact need even more than 4000 fields).

3. Loet's remark on the validity of year-to-year comparisons: This is a good point. The Leiden Ranking micro-level fields cover the period 2000-2014, and the 2015 edition offers a retrospective perspective: it provides statistics not only for the period 2010-2013 but also for the periods 2009-2012, 2008-2011, 2007-2010, and 2006-2009. Statistics for all periods have been calculated in a fully consistent way and, importantly, are based on the same underlying micro-level fields, so year-to-year comparisons can be made in a proper way.

4. Loet's remark on the low validity of algorithmically constructed fields: Please note that we construct fields at the level of individual publications, not at the level of entire journals. The findings of http://doi.org/10.1002/asi.21086, which is a journal-level analysis, therefore need not generalize to our publication-level analysis. In our own experience (http://dx.doi.org/10.1002/asi.22748), algorithmically constructed fields at the level of individual publications have quite high validity.

5. Loet's remark on science policy implications: Indeed, even if the results are relatively insensitive to methodological choices at the aggregate level, for individual universities there may still be significant differences with policy implications. This is exactly why the Leiden Ranking has moved away from using the Web of Science journal subject categories for field normalization. Their accuracy for field normalization purposes is limited, as shown in various studies, such as http://dx.doi.org/10.1371/journal.pone.0062395 and http://doi.org/10.1002/asi.23408.

6. Lutz's remark on using field classifications constructed by experts: This is definitely a sensible approach, but it is not feasible in the context of the Leiden Ranking, which covers all scientific disciplines, many of which do not have an expert-based classification. In analyses focusing on a specific discipline (e.g., chemistry), it may indeed be preferable to use an expert-based classification (e.g., Chemical Abstracts sections), although even then it cannot be assumed a priori that an expert-based classification is always more accurate than an algorithmically constructed one. Expert-based classifications do have the advantage of being openly available and therefore more transparent.

7. Lutz's remark on comparing the current normalization approach implemented in the Leiden Ranking with an approach based on the Web of Science subject categories: Such a comparison is reported in http://dx.doi.org/10.1016/j.joi.2014.11.010.

Thanks again for everyone's comments and suggestions!

Best regards,
Nees Jan


From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Catharina Rehn
Sent: Thursday, May 21, 2015 10:52 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

Dear colleagues,

For some years we have been working with data from both MeSH and the NLM classification system (for journal classes), in addition to the traditional ISI categories, in our analyses. Since our unit is based at a medical university (Karolinska Institutet), our bibliometric system is built on a combination of data from the Web of Science and Medline/NLM.

Please feel free to contact us if you are interested in our experiences or input to specific research projects.

Best regards,
Catharina Rehn

Catharina Rehn
Karolinska Institutet
171 77 | Box 200
+46 (0)8 524 84054
catharina.rehn at ki.se | ki.se
______________________________________
Karolinska Institutet - a medical university



From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
Sent: Thursday, 21 May 2015 10:02
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

Dear Lutz, Nees Jan, and colleagues,

Medical Subject Headings (PubMed/Medline) are available in WoS. One could perhaps test the Leiden clustering against the MeSH tree for the biomedical part of the database.

The three most interesting dimensions of the MeSH classifications (C: Diseases; D: Drugs and Chemicals; E: Analytic, Diagnostic, and Therapeutic Techniques and Equipment) are almost orthogonal (Leydesdorff, Rotolo & Rafols, 2012; http://doi.org/10.1002/asi.22715). Thus, one would obtain three different fits. This would inform us about what is substantively being clustered by the algorithm (Petersen et al., under submission; http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2604702).
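
For instance, each fit could be quantified by comparing the algorithmic partition with the corresponding MeSH-based partition. A minimal sketch in Python, with invented labels:

# Hypothetical example: compare an algorithmic clustering of publications
# with a reference classification (e.g., one MeSH branch) using the
# Adjusted Rand Index; all labels below are invented.
from sklearn.metrics import adjusted_rand_score

# One class label per publication, in the same order under both schemes.
mesh_labels = ["C04", "C04", "D27", "E01", "D27", "C04"]  # reference classes
cluster_labels = [1, 1, 2, 3, 2, 1]                       # algorithmic clusters

print("ARI:", adjusted_rand_score(mesh_labels, cluster_labels))  # 1.0 here

Running this once per MeSH dimension (C, D, E) would yield the three fits mentioned above.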

The LoC classification could be another benchmark, but perhaps more difficult to match.

Best,
Loet


________________________________
Loet Leydesdorff
Emeritus University of Amsterdam
Amsterdam School of Communication Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
Honorary Professor, SPRU (http://www.sussex.ac.uk/spru/), University of Sussex;
Guest Professor, Zhejiang University (http://www.zju.edu.cn/english/), Hangzhou; Visiting Professor, ISTIC (http://www.istic.ac.cn/Eng/brief_en.html), Beijing;
Visiting Professor, Birkbeck (http://www.bbk.ac.uk/), University of London;
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Bornmann, Lutz
Sent: Thursday, May 21, 2015 9:09 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

Hi Nees,

Thank you for the further explanation of your method! I appreciate the new possibility of taking a detailed look at individual institutions. Well done!

I have followed the publications on your clustering methods. The approach is an interesting alternative to the journal sets. However, it has several disadvantages, as pointed out by Loet in his previous emails. Loet mentioned another alternative to journal sets and clustering based on citation relations: field classifications constructed by experts in the field (e.g., the sections of Chemical Abstracts, https://www.cas.org/content/ca-sections). These classifications do not change over time for the same publication (as citation relations do), and the rate of misclassification is rather low. We have already used the sections for field normalization in several studies, and this works well.

I would be delighted if you would publish a Leiden Ranking variant based on the WoS journal sets. Users could then compare the results of the two approaches (journal sets versus citation relations) and, another important point, compare their own results for an institution with those of the Leiden Ranking. Since your clustering algorithm cannot simply be installed in an in-house version of the WoS, Leiden Ranking results can no longer be directly compared with one's own results (based on WoS journal sets).

Best,

Lutz

From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Eck, N.J.P. van
Sent: Thursday, May 21, 2015 7:51 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

Dear David,

The 4000 fields are constructed using a clustering algorithm based on citation relations between publications. A detailed explanation is provided in the following paper: http://dx.doi.org/10.1002/asi.22748.

The clustering methodology for constructing the fields is fully transparent. The methodology is documented in the above-mentioned paper, and the computer software required to implement it is freely available (open source) at www.ludowaltman.nl/slm/.

It is true that the results produced by the clustering methodology are not transparent: the assignment of individual publications to the 4000 fields is not visible. As already mentioned, this is something that hopefully can be improved in the future. Please keep in mind that there is a growing consensus among bibliometricians that the use of the Web of Science subject categories for field normalization of bibliometric indicators is unsatisfactory and does not yield sufficiently accurate results. The normalization approach taken in the Leiden Ranking offers a more accurate alternative, but indeed the transparency of the Web of Science subject categories is lost.
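
To give a rough idea of the kind of clustering involved, the following toy sketch applies a resolution-based local moving heuristic to a small invented citation network. It is not the actual SLM implementation, which is considerably more sophisticated; see the paper and software linked above for the real methodology.

# Toy illustration (not the actual SLM code): repeatedly move each
# publication to the neighboring cluster that most improves a CPM-style
# quality function. The citation edge list and resolution are invented.
from collections import defaultdict

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
nodes = sorted({v for e in edges for v in e})
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

gamma = 0.7                   # resolution: higher values yield more, smaller clusters
comm = {v: v for v in nodes}  # start from singleton clusters
size = defaultdict(int)
for v in nodes:
    size[comm[v]] += 1

def gain(v, c):
    # Gain of adding v to cluster c: links into c minus gamma times its size.
    return sum(1 for u in adj[v] if comm[u] == c) - gamma * size[c]

improved = True
while improved:
    improved = False
    for v in nodes:
        old = comm[v]
        size[old] -= 1        # temporarily take v out of its cluster
        comm[v] = None
        candidates = {comm[u] for u in adj[v]} | {old}
        best = max(candidates, key=lambda c: gain(v, c))
        comm[v] = best
        size[best] += 1
        improved |= best != old

clusters = defaultdict(list)
for v in nodes:
    clusters[comm[v]].append(v)
print(sorted(clusters.values()))  # the two triangles end up as separate clusters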

Best regards,
Nees Jan


From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David Wojick
Sent: Wednesday, May 20, 2015 11:23 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

Dear Nees Jan,

How do you apply 4000 field categories to individual papers? A semantic algorithm? Is this explained on the website? It sounds very difficult.

Also if the categories are not visible how is the methodology transparent?

My best wishes,

David
http://insidepublicaccess.com/

At 04:06 PM 5/20/2015, you wrote:
Dear Loet,

Yes, your understanding is correct. MNCS, TNCS, PP(top 10%), P(top 10%), and the other field-normalized impact indicators all use the 4000 fields for the purpose of normalization. The Web of Science subject categories are not used.
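
As a minimal sketch of how this normalization works (invented numbers; the actual indicators also condition on publication year and document type):

# Hypothetical, simplified example: each publication's citation count is
# divided by the average citation count of its micro-level field, and MNCS
# is the mean of these ratios. All numbers are invented.
from statistics import mean

# (field_id, citations) for all publications worldwide -> field baselines.
world = [("f1", 10), ("f1", 2), ("f1", 0), ("f2", 50), ("f2", 30)]
baseline = {}
for field in {f for f, _ in world}:
    baseline[field] = mean(c for f, c in world if f == field)

# A university's publications, again as (field_id, citations).
univ = [("f1", 8), ("f2", 20)]
mncs = mean(c / baseline[f] for f, c in univ)
print(f"MNCS = {mncs:.2f}")  # (8/4 + 20/40) / 2 = 1.25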

Unfortunately, the 4000 fields are not visible. Because these fields are defined at the level of individual publications rather than at the journal level, there is no easy way to make the fields visible. This is something that hopefully can be improved in the future.

We have decided to move from 800 to 4000 fields because our analyses indicate that with 800 fields there is still too much heterogeneity in citation density within fields. A detailed analysis of the effect of performing field normalization at different levels of aggregation is reported in the following paper by Javier Ruiz-Castillo and Ludo Waltman: http://dx.doi.org/10.1016/j.joi.2014.11.010. This paper also shows that, at the level of entire universities, field-normalized impact indicators are quite insensitive to the choice of aggregation level.

Best regards,
Nees Jan


From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
Sent: Wednesday, May 20, 2015 9:28 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

Dear Nees Jan,

As always impressive! Thank you.

Are the approximately 4,000 fields also visible in one way or another? Do I understand correctly that MNCS is defined in relation to these 4,000 fields and not to the 251 WCs? Is there a concordance table between the fields and the WCs, as there is between the WCs and the five broad fields in the Excel sheet?

I think I understand from your and Ludo's previous publications how the 4,000 fields are generated. But why are there 4,000 such fields in 2015 and 800+ in 2014? Isn't it amazing that trends can be smooth despite these discontinuities? Or are the indicators robust across these scales?

Best wishes,
Loet


Loet Leydesdorff
Emeritus University of Amsterdam
Amsterdam School of Communication Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
Honorary Professor, SPRU (http://www.sussex.ac.uk/spru/), University of Sussex;
Guest Professor, Zhejiang University (http://www.zju.edu.cn/english/), Hangzhou; Visiting Professor, ISTIC (http://www.istic.ac.cn/Eng/brief_en.html), Beijing;
Visiting Professor, Birkbeck (http://www.bbk.ac.uk/), University of London;
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Eck, N.J.P. van
Sent: Wednesday, May 20, 2015 8:27 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: [SIGMETRICS] CWTS Leiden Ranking 2015

Release of the CWTS Leiden Ranking 2015
Today CWTS has released the 2015 edition of the Leiden Ranking. The CWTS Leiden Ranking 2015 offers key insights into the scientific performance of 750 major universities worldwide. A sophisticated set of bibliometric indicators provides statistics on the scientific impact of universities and on universities' involvement in scientific collaboration. The CWTS Leiden Ranking 2015 is based on Web of Science indexed publications from the period 2010-2013.

Improvements and new features in the 2015 edition
Compared with the 2014 edition of the Leiden Ranking, the 2015 edition includes a number of enhancements. First of all, the 2015 edition offers the possibility of performing trend analyses: bibliometric statistics are available not only for the period 2010-2013 but also for earlier periods. Second, the 2015 edition provides new impact indicators based on counting publications that belong to the top 1% or top 50% of their field. Third, improvements have been made to the presentation of the ranking: size-dependent indicators are presented more prominently, and a convenient one-page overview of all bibliometric statistics for a particular university can be obtained.
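
As a rough illustration of how a top-x% indicator can be computed (invented data; the real indicators also deal with ties, publication years, and counting methods):

# Hypothetical, simplified example of a top-x% indicator: the share of a
# university's publications among the 10% most cited of their field.
import numpy as np

rng = np.random.default_rng(0)
field_citations = rng.poisson(5, size=1000)     # all publications in one field
threshold = np.percentile(field_citations, 90)  # top-10% citation boundary

univ_citations = np.array([1, 4, 9, 15, 30])    # one university's publications
pp_top10 = np.mean(univ_citations > threshold)  # share above the boundary
print(f"PP(top 10%) = {pp_top10:.0%}")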

Differences from other university rankings
Compared with other university rankings, the Leiden Ranking offers more advanced indicators of scientific impact and collaboration and uses a more transparent methodology. The Leiden Ranking does not rely on highly subjective data obtained from reputational surveys or on data provided by universities themselves. Also, the Leiden Ranking refrains from aggregating different dimensions of university performance into a single overall indicator.

Website
The Leiden Ranking is available at www.leidenranking.com.


========================================================
Nees Jan van Eck PhD
Researcher
Head of ICT

Centre for Science and Technology Studies
Leiden University
P.O. Box 905
2300 AX Leiden
The Netherlands

Willem Einthoven Building, Room B5-35
Tel: +31 (0)71 527 6445
Fax: +31 (0)71 527 3911
E-mail: ecknjpvan at cwts.leidenuniv.nl
Homepage: www.neesjanvaneck.nl
VOSviewer: www.vosviewer.com
CitNetExplorer: www.citnetexplorer.nl
========================================================

