CWTS Leiden Ranking 2015

Loet Leydesdorff loet at LEYDESDORFF.NET
Thu May 21 02:25:37 EDT 2015


Dear Nees Jan,

 

Yes, I know the paper by Javier Ruiz-Castillo and Ludo Waltman in JoI
(http://dx.doi.org/10.1016/j.joi.2014.11.010). Let me provide the
highlights:

 

- We study algorithmically constructed classification systems for field
normalization purposes.

- It is argued that working with a few thousand fields may be an optimal
choice.

- University citation indicators are relatively insensitive to the
classification system choice.

- For individual universities, this choice may have a substantial effect.

 

My points/questions:

 

1.      The algorithmic definition of a "field": 

 

Let's assume that the number of papers (articles + reviews) is of the order
of 10^6 each year. 4000 clusters then have an average size of 250 papers. Are
these fields? Perhaps specialties? An individual paper can often be classified
into more than one specialty, and even into more than one discipline. Note
that with the 800+ clusters of the 2014 Ranking, the clusters were of a very
different average size (~1,250).
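
A back-of-the-envelope check of this arithmetic (the 10^6 figure is my own
rough assumption about the annual volume of articles and reviews, not a CWTS
figure):

# Rough check: average cluster size for a given number of clusters,
# assuming ~10^6 articles + reviews per year (my assumption).
papers_per_year = 1_000_000

for n_clusters in (800, 4000):
    print(n_clusters, "clusters -> average size ~",
          round(papers_per_year / n_clusters), "papers")

# 800 clusters -> average size ~ 1250 papers
# 4000 clusters -> average size ~ 250 papers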

 

2.      How can one validate 4000 clusters? Perhaps one can enumerate them.
Any decomposition (clustering) algorithm produces, among other things, a tail
of small clusters. Decomposition algorithms are not deterministic; of course,
one can run them a hundred or more times to mitigate this effect.
Nevertheless, the size distribution is probably heavily skewed: do you end up
with many small clusters?
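
To make the concern concrete, here is a minimal sketch (a toy random graph, not
the WoS citation network; it assumes NetworkX >= 2.8 for its Louvain
implementation) of running the algorithm with different random seeds and
inspecting the skew of the cluster-size distribution:

# Sketch: run Louvain with different random seeds on a toy network and inspect
# how skewed the cluster-size distribution is (largest cluster vs. median, and
# the tail of very small clusters). A real analysis would use the citation graph.
import networkx as nx
from statistics import median

G = nx.gnm_random_graph(5000, 20000, seed=1)  # stand-in for a citation network

for seed in range(5):
    communities = nx.community.louvain_communities(G, seed=seed)
    sizes = sorted((len(c) for c in communities), reverse=True)
    small = sum(1 for s in sizes if s < 10)
    print(f"seed {seed}: {len(sizes)} clusters; largest {sizes[0]}, "
          f"median {median(sizes)}, {small} clusters with fewer than 10 papers")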

 

I once tried to run the Blondel et al. (2008) algorithm on different years of
JCR data (in collaboration with Renaud Lambiotte), but the random effects made
comparisons among years prohibitive. We could not solve this at the time, and
it is not obvious that you have solved it. Or did you develop a deterministic
algorithm that allows comparisons over time? Even then, you would still have
to smooth the curves (making decisions by choosing parameters).

 

Without solving this problem, one cannot say whether the rank of a university
improved or worsened between the rankings for different years. Are comparisons
between years nevertheless legitimate, and if so, why?
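
One way to at least quantify the problem would be to match clusters across two
solutions (two years, or two runs on the same year) by their overlap and see
how much of the decomposition is stable. A rough sketch of such a matching,
using the Jaccard index (my own illustration, not the procedure used for the
Leiden Ranking):

# Sketch: match each cluster from one solution (e.g., year t) to its
# best-overlapping cluster in another solution (e.g., year t+1) via the
# Jaccard index, as a crude measure of stability. Illustration only.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def best_matches(partition_t, partition_t1):
    # For each cluster in the first partition, find its best-overlapping
    # cluster in the second partition.
    return [(c, max(partition_t1, key=lambda d: jaccard(c, d))) for c in partition_t]

# Toy example: the same items clustered slightly differently in two solutions.
year_t = [{1, 2, 3, 4}, {5, 6, 7}, {8, 9}]
year_t1 = [{1, 2, 3}, {4, 5, 6, 7}, {8, 9}]
for c, d in best_matches(year_t, year_t1):
    print(sorted(c), "->", sorted(d), "Jaccard =", round(jaccard(c, d), 2))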

 

3.      In a study of content-based versus algorithmic classifications of
journals (!), Ismael Rafols and I found a mismatch (Rafols & Leydesdorff,
2009; http://doi.org/10.1002/asi.21086). Thus, in my opinion, the expected
validity of the clusters is low. I assume that you are testing this. Are you
planning to fine-tune the parameter choices so that 4000 "valid" clusters are
obtained?

 

4.      The argument that indicators are robust across scales may hold at the
aggregate level (for example, r > 0.9), but a 10% error in the ranks is not
trivial for universities or for the science-policy implications. For example,
among the 13 Dutch universities, one would then probably be misclassified. :-)
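
A small simulation with invented scores illustrates how a rank correlation
above 0.9 can coexist with sizeable shifts for individual universities (the
numbers are made up; only the qualitative point matters):

# Sketch: two sets of invented impact scores for 750 universities that differ
# only by small noise still produce non-trivial rank shifts for individual
# universities, despite a very high rank correlation. Illustration only.
import random
from scipy.stats import spearmanr

random.seed(0)
n = 750
scores_a = [random.gauss(1.0, 0.3) for _ in range(n)]      # e.g., scores under 800 fields
scores_b = [s + random.gauss(0, 0.05) for s in scores_a]   # e.g., scores under 4000 fields

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    return {i: r for r, i in enumerate(order)}

ra, rb = ranks(scores_a), ranks(scores_b)
rho, _ = spearmanr(scores_a, scores_b)
big_shifts = sum(1 for i in range(n) if abs(ra[i] - rb[i]) > n // 10)
print(f"Spearman rho = {rho:.3f}; {big_shifts} of {n} universities "
      f"shift by more than {n // 10} positions")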

 

5.      "Construct-normalized" indicators instead of "field-normalized"? In
terms of the citation distribution, the normalization would remain valid,
but the metaphor of "field-normalized" may have to be left behind.

 

Best,

Loet

 

  _____  

Loet Leydesdorff 

Emeritus University of Amsterdam
Amsterdam School of Communication Research (ASCoR)

loet at leydesdorff.net ; http://www.leydesdorff.net/

Honorary Professor, SPRU, University of Sussex;
Guest Professor, Zhejiang Univ., Hangzhou;
Visiting Professor, ISTIC, Beijing;
Visiting Professor, Birkbeck, University of London;

http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

 

From: Eck, N.J.P. van [mailto:ecknjpvan at cwts.leidenuniv.nl] 
Sent: Wednesday, May 20, 2015 10:07 PM
To: 'loet at leydesdorff.net'; SIGMETRICS at LISTSERV.UTK.EDU
Subject: RE: [SIGMETRICS] CWTS Leiden Ranking 2015

 

Dear Loet,

 

Yes, your understanding is correct. MNCS, TNCS, PP(top 10%), P(top 10%), and
the other field-normalized impact indicators all use the 4000 fields for the
purpose of normalization. The Web of Science subject categories are not
used.

 

Unfortunately, the 4000 fields are not visible. Because these fields are
defined at the level of individual publications rather than at the journal
level, there is no easy way to make the fields visible. This is something
that hopefully can be improved in the future.

 

We have decided to move from 800 to 4000 fields because our analyses indicate
that with 800 fields there is still too much heterogeneity in citation density
within fields. A detailed analysis of the effect of performing field
normalization at different levels of aggregation is reported in the following
paper by Javier Ruiz-Castillo and Ludo Waltman:
http://dx.doi.org/10.1016/j.joi.2014.11.010. The paper also shows that, at the
level of entire universities, field-normalized impact indicators are quite
insensitive to the choice of aggregation level.
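
In schematic terms, the field normalization works as in the following
simplified sketch (the publication data and field labels are invented; the
actual calculation involves further refinements, e.g., normalization per
publication year and fractional counting of collaborative publications):

# Simplified sketch of MNCS-style field normalization: each publication's
# citation count is divided by the mean citation count of its field, and the
# normalized scores are averaged per university. Data are invented.
from collections import defaultdict
from statistics import mean

# (university, field, citations) -- one toy row per publication
publications = [
    ("Univ A", "field_17", 12), ("Univ A", "field_17", 3), ("Univ A", "field_2041", 0),
    ("Univ B", "field_17", 5), ("Univ B", "field_2041", 9), ("Univ B", "field_2041", 1),
]

# Mean citations per field, estimated over all publications assigned to that field.
field_mean = {f: mean(c for _, g, c in publications if g == f)
              for f in {g for _, g, _ in publications}}

per_univ = defaultdict(list)
for univ, field, cites in publications:
    per_univ[univ].append(cites / field_mean[field])  # field-normalized citation score

for univ, scores in sorted(per_univ.items()):
    print(univ, "MNCS =", round(mean(scores), 2))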

 

Best regards,

Nees Jan

 

 

From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
Sent: Wednesday, May 20, 2015 9:28 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015

 


Dear Nees Jan, 

 

As always impressive! Thank you.

 

Are the approximately 4,000 fields also visible in one way or another? Do I
correctly understand that MNCS is defined in relation to these 4,000 fields
and not to the 251 WCs? Is there a concordance table between the fields and
WCs as there is between WCs and five broad fields in the Excel sheet? 

 

I think that I understand from your and Ludo's previous publications how the
4,000 fields are generated. Why are there 4,000 such fields in 2015, but 800+
in 2014? Isn't it amazing that the trends can nevertheless be smooth despite
the discontinuities? Or are the indicators robust across these scales?

 

Best wishes, 

Loet

 

  _____  

Loet Leydesdorff 

Emeritus University of Amsterdam
Amsterdam School of Communication Research (ASCoR)

loet at leydesdorff.net ; http://www.leydesdorff.net/

Honorary Professor, SPRU, University of Sussex;
Guest Professor, Zhejiang Univ., Hangzhou;
Visiting Professor, ISTIC, Beijing;
Visiting Professor, Birkbeck, University of London;

http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

 

From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Eck, N.J.P. van
Sent: Wednesday, May 20, 2015 8:27 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: [SIGMETRICS] CWTS Leiden Ranking 2015

 


Release of the CWTS Leiden Ranking 2015

Today CWTS has released the 2015 edition of the Leiden Ranking. The CWTS
Leiden Ranking 2015 offers key insights into the scientific performance of
750 major universities worldwide. A sophisticated set of bibliometric
indicators provides statistics on the scientific impact of universities and
on universities' involvement in scientific collaboration. The CWTS Leiden
Ranking 2015 is based on Web of Science indexed publications from the period
2010-2013.

 

Improvements and new features in the 2015 edition

Compared with the 2014 edition of the Leiden Ranking, the 2015 edition
includes a number of enhancements. First of all, the 2015 edition offers the
possibility to perform trend analyses. Bibliometric statistics are available
not only for the period 2010-2013 but also for earlier periods. Second, the
2015 edition of the Leiden Ranking provides new impact indicators based on
counting publications that belong to the top 1% or top 50% of their field.
And third, improvements have been made to the presentation of the ranking.
Size-dependent indicators are presented in a more prominent way, and it is
possible to obtain a convenient one-page overview of all bibliometric
statistics for a particular university.

 

Differences from other university rankings

Compared with other university rankings, the Leiden Ranking offers more
advanced indicators of scientific impact and collaboration and uses a more
transparent methodology. The Leiden Ranking does not rely on highly
subjective data obtained from reputational surveys or on data provided by
universities themselves. Also, the Leiden Ranking refrains from aggregating
different dimensions of university performance into a single overall
indicator.

 

Website

The Leiden Ranking is available at www.leidenranking.com.

 

 

========================================================

Nees Jan van Eck PhD

Researcher

Head of ICT

 

Centre for Science and Technology Studies

Leiden University

P.O. Box 905

2300 AX Leiden

The Netherlands

 

Willem Einthoven Building, Room B5-35

Tel: +31 (0)71 527 6445

Fax: +31 (0)71 527 3911

E-mail: ecknjpvan at cwts.leidenuniv.nl

Homepage: www.neesjanvaneck.nl

VOSviewer: www.vosviewer.com

CitNetExplorer: www.citnetexplorer.nl

========================================================

 

 
