CWTS Leiden Ranking 2015

David Wojick dwojick at CRAIGELLACHIE.US
Fri May 22 17:24:05 EDT 2015


Dear Nees Jan,

One of the list members kindly sent me your paper this morning, and I have read it with great interest.
The work is very impressive, and your WVE method seems clearly superior to the WoS subject categories. 

When I proposed the sensitivity test, I did not know that WVE scales to different numbers of micro-fields, or levels as you call them.
This in itself is impressive.
I think your 8th level is the one with 4000 fields, while your 7th, by coincidence, has the 2000 I suggested. 
However, running the same clustering algorithm at different levels is not the test I was talking about.
What I have in mind is running different clustering algorithms, to see how sensitive the rankings are 
to the assumptions built into yours. As far as I can tell, this has not been done. 
There are, after all, many ways to cluster a network.
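
To make this concrete, here is a small sketch comparing two off-the-shelf community detection methods on the same toy network (purely illustrative; it uses standard networkx routines and Zachary's karate club graph, not your citation data or software). Different methods can partition the same graph differently, and that is exactly the sensitivity I would like to see quantified for the rankings.

    import networkx as nx
    from networkx.algorithms.community import (
        greedy_modularity_communities,
        label_propagation_communities,
    )

    # A small benchmark graph as a stand-in for a citation network.
    G = nx.karate_club_graph()

    part_modularity = list(greedy_modularity_communities(G))
    part_labelprop = list(label_propagation_communities(G))

    print(len(part_modularity), "clusters via greedy modularity maximization")
    print(len(part_labelprop), "clusters via label propagation")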

My best wishes,

David

On May 22, 2015, at 4:13 PM, "Eck, N.J.P. van" <ecknjpvan at CWTS.LEIDENUNIV.NL> wrote:

> Dear colleagues,
> 
>  
> 
> Thank you again for the additional thoughts and suggestions. This is much appreciated!
> 
>  
> 
> I will give a response to a number of the points raised in the discussion:
> 
>  
> 
> 1.       A preprint of the paper by Ruiz-Castillo and Waltman is freely available at http://hdl.handle.net/10016/18385. All other papers that I mentioned in my earlier message are also freely available online, most of them on arXiv.
> 
>  
> 
> 2.       The test suggested by David is performed in the above-mentioned paper.
> 
>  
> 
> 3.       Loet’s stability analysis shows that the ranking of Dutch universities relative to each other changes quite significantly over time, at least when the size-independent PP(top 10%) indicator is considered, that is, when the focus is on the proportion of a university’s publications belonging to the top 10% most cited publications in their field. Please keep in mind that almost all Dutch universities perform quite similarly in terms of the PP(top 10%) indicator, so even small changes in the indicator may change their ranking relative to each other. It might therefore be more informative to look at the time trend in the PP(top 10%) values themselves rather than at the time trend in the ranking derived from them.
> 
>  
> 
> 4.       Loet is right that size-dependent indicators, such as the number (rather than the proportion) of top 10% publications of a university, yield a more stable outcome. I don’t think this should be seen as an argument for using size-dependent instead of size-independent indicators. The two types of indicators answer different questions, so, depending on what you are interested in, you should use one or the other. If you want to know which Dutch universities contribute most to the top-cited publications worldwide, then use the size-dependent P(top 10%) or P(top 1%) indicator. If you want to know which Dutch universities have the largest share of high-impact work within their overall publication output, then use the size-independent PP(top 10%) or PP(top 1%) indicator. A small numerical illustration of this distinction follows below, after point 5.
> 
>  
> 
> 5.       It is clear from the different contributions to the discussion that the community needs more detailed insight into the consequences of performing field normalization at the level of 4000 algorithmically constructed fields. At CWTS, we are currently thinking about the best way to offer more detailed information to the community. It is a busy time for us at the moment, with many people responding to our ranking, but as soon as I have more information, I will share it on the mailing list.
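> 
> As a toy illustration of the distinction in point 4 between size-dependent and size-independent indicators, here is a small sketch with made-up numbers (an illustration only, not CWTS code): P(top 10%) is a count, while PP(top 10%) is that count divided by the university's total output, so a small university can rank high on the proportion while ranking low on the count, and vice versa.
> 
>     # Each university maps to a list of flags: True if the publication belongs to
>     # the top 10% most cited publications of its field (hypothetical data).
>     pubs = {
>         "Small Univ": [True, True] + [False] * 8,      # 10 publications, 2 in the top 10%
>         "Large Univ": [True] * 30 + [False] * 270,     # 300 publications, 30 in the top 10%
>     }
> 
>     for uni, flags in pubs.items():
>         p_top10 = sum(flags)                 # size-dependent: P(top 10%)
>         pp_top10 = p_top10 / len(flags)      # size-independent: PP(top 10%)
>         print(f"{uni}: P(top 10%) = {p_top10}, PP(top 10%) = {pp_top10:.2f}")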
> 
>  
> 
> Best regards,
> 
> Nees Jan
> 
>  
> 
>  
> 
> From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
> Sent: Friday, May 22, 2015 8:53 AM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015
> 
>  
> 
> 
> PS. The size-dependent ranking is rather stable. It, too, is normalized at the micro-cluster level!
> 
>  
> 
> <image001.png>
> 
> These results also accord with my intuition.
> 
> The problem must thus be that the denominators fluctuate. Is this correct, Nees Jan?
> 
>  
> 
> Would this be an argument for using P(10%) rather than PP(10%)? Or is this field-specific? (This is SSH).
> 
>  
> 
> Best,
> 
> Loet
> 
>  
> 
> size dependent               2006-2009  2007-2010  2008-2011  2009-2012  2010-2013
> Delft Univ Technol               13         12         11         11         12
> Eindhoven Univ Technol           11         11         13         13         13
> Erasmus Univ Rotterdam            2          4          4          4          4
> Leiden Univ                       8          9          9          8          8
> Maastricht Univ                   7          7          8          9          9
> Radboud Univ Nijmegen             6          6          6          6          6
> Tilburg Univ                      9          8          7          7          7
> Univ Amsterdam                    1          1          1          1          1
> Univ Groningen                    5          5          5          5          5
> Univ Twente                      12         13         12         12         10
> Utrecht Univ                      3          3          3          3          2
> VU Univ Amsterdam                 4          2          2          2          3
> Wageningen Univ & Res Ctr        10         10         10         10         11
> 
>  
> 
> Loet Leydesdorff
> 
> Emeritus University of Amsterdam
> Amsterdam School of Communication Research (ASCoR)
> 
> loet at leydesdorff.net ; http://www.leydesdorff.net/ 
> Honorary Professor, SPRU, University of Sussex;
> 
> Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing;
> 
> Visiting Professor, Birkbeck, University of London;
> 
> http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
> 
>  
> 
> From: Loet Leydesdorff [mailto:loet at leydesdorff.net] 
> Sent: Friday, May 22, 2015 7:10 AM
> To: 'ASIS&T Special Interest Group on Metrics'
> Subject: RE: [SIGMETRICS] CWTS Leiden Ranking 2015
> 
>  
> 
> Dear David and colleagues,
> 
>  
> 
> The time series provides an impression of the stability.
> 
>  
> 
> Since I happen to be most familiar with this system, I chose “Social Sciences and Humanities” for the 13 Dutch universities, with PP(top 10%), fractional counting, etc. (default values), and the size-independent ranking; that is, normalized over the 4,000 clusters or micro-fields.
> 
>  
> 
> SSH                          2006-2009  2007-2010  2008-2011  2009-2012  2010-2013
> Delft Univ Technol               13         12         13         11         12
> Eindhoven Univ Technol            2          3         11         12         13
> Erasmus Univ Rotterdam            1          2          5          6          8
> Leiden Univ                       7         10          7          1          3
> Maastricht Univ                   9          8          6          8          5
> Radboud Univ Nijmegen            10          7          4          5          6
> Tilburg Univ                     11         11          8          7          7
> Univ Amsterdam                    3          4          2          4          1
> Univ Groningen                    8          6          9         10          9
> Univ Twente                      12         13         12         13         10
> Utrecht Univ                      5          5          3          2          4
> VU Univ Amsterdam                 4          1          1          3          2
> Wageningen Univ & Res Ctr         6          9         10          9         11
> 
>  
> 
>  
> 
>  
> 
> <image002.png>
> 
>  
> 
>  
> 
>  
> 
> Correlations (Kendall's tau_b; N = 13 in all cells)
> 
> Correlation coefficient:
>             v06t09   v07t10   v08t11   v09t12   v10t13
> v06t09       1.000   .718**    .359     .205     .282
> v07t10      .718**    1.000    .538*    .282     .256
> v08t11       .359     .538*    1.000   .692**   .718**
> v09t12       .205     .282    .692**    1.000   .667**
> v10t13       .282     .256    .718**   .667**    1.000
> 
> Sig. (2-tailed):
>             v06t09   v07t10   v08t11   v09t12   v10t13
> v06t09         .      .001     .088     .329     .180
> v07t10       .001       .      .010     .180     .222
> v08t11       .088     .010       .      .001     .001
> v09t12       .329     .180     .001       .      .002
> v10t13       .180     .222     .001     .002       .
> 
> **. Correlation is significant at the 0.01 level (2-tailed).
> *. Correlation is significant at the 0.05 level (2-tailed).
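> 
> For anyone who wants to reproduce this kind of stability check outside SPSS, a minimal sketch in Python (my illustration, not the software used above) applies scipy.stats.kendalltau to the first two SSH ranking columns from the table; this should reproduce the .718 in the first row, although the p-value may differ slightly from the SPSS output.
> 
>     from scipy.stats import kendalltau
> 
>     # SSH rankings of the 13 Dutch universities, in the same order as the table above.
>     rank_2006_2009 = [13, 2, 1, 7, 9, 10, 11, 3, 8, 12, 5, 4, 6]
>     rank_2007_2010 = [12, 3, 2, 10, 8, 7, 11, 4, 6, 13, 5, 1, 9]
> 
>     tau, p_value = kendalltau(rank_2006_2009, rank_2007_2010)
>     print(f"Kendall's tau_b = {tau:.3f}, two-sided p = {p_value:.3f}")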
>  
> 
> Thus, the system is very dynamic. :-)
> 
>  
> 
> As can be expected, the size-dependent ranking with the same parameters is much more stable (Univ. Amsterdam first, with Utrecht and VU following in second and third place, except that EUR is second in 2006-2009).
> 
>  
> 
> Best,
> 
> Loet
> 
>  
> 
> Loet Leydesdorff
> 
> Emeritus University of Amsterdam
> Amsterdam School of Communication Research (ASCoR)
> 
> loet at leydesdorff.net ; http://www.leydesdorff.net/ 
> Honorary Professor, SPRU, University of Sussex;
> 
> Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing;
> 
> Visiting Professor, Birkbeck, University of London;
> 
> http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
> 
>  
> 
> From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David Wojick
> Sent: Friday, May 22, 2015 4:02 AM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015
> 
>  
> 
> 
> Yes, Christina, there obviously are distant parts of the graph of science that are not connected to other parts. But we are talking about drawing local lines. That is, saying that some local citation connections are part of a cluster while others are not. 
> 
>  
> 
> My point is merely that there are lots of ways of drawing these local lines. All clusters are in that sense artificial constructs, based on algorithmic assumptions. My question is how sensitive are the university rankings to this particular construct? I have even suggested an alternative construct as a sensitivity test.
> 
>  
> 
> Mind you, I am assuming that the 4000 fields are a construct, not an empirical claim. If it is being claimed that it has been discovered that science actually consists of precisely 4000 micro-fields, then we have a very different discussion.
> 
>  
> 
> My best wishes,
> 
>  
> 
> David
> 
> Sent from my iPad
> 
> 
> On May 21, 2015, at 7:08 PM, "Pikas, Christina K." <Christina.Pikas at JHUAPL.EDU> wrote:
> 
> “The citation network of science is seamless, or mostly so”
> 
> There are definitely components: parts of the graph that are not connected to other parts.
> 
>  
> 
> “Thus any segmentation of this seamless network into clusters must require algorithmic assumptions that are more or less arbitrary, in the sense that alternative assumptions are available. “
> 
> There are well-known and accepted community detection techniques and clustering techniques that work on networks with only one component. In community detection, you typically maximize modularity, a measure of the extent to which nodes connect more to each other than to nodes outside the group. There are metrics for the other techniques, too. This is not new science at all. (FWIW, my paper using community detection for science blogs is OA, archived here: http://terpconnect.umd.edu/~cpikas/ScienceBlogging/PikasEScience08.pdf; the rest of the site is horribly out of date, so I don't recommend reading it!)
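> 
> As a small, generic illustration of what modularity-based community detection does, here is a toy example using networkx (my sketch; the graph and groups are made up, and this is not the specific software or method used for the Leiden Ranking fields): two tightly knit groups joined by a single bridging edge are recovered as two communities, and the modularity score measures how much denser the within-group connections are than expected.
> 
>     import networkx as nx
>     from networkx.algorithms.community import greedy_modularity_communities, modularity
> 
>     # Toy citation-like graph: two triangles joined by one bridging edge.
>     G = nx.Graph()
>     G.add_edges_from([
>         ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # group A
>         ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # group B
>         ("a3", "b1"),                               # bridge between the groups
>     ])
> 
>     communities = greedy_modularity_communities(G)
>     print([sorted(c) for c in communities])                  # expect the two groups
>     print("modularity:", round(modularity(G, communities), 3))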
> 
>  
> 
> The paper discusses how they used 12 (I think?) different levels of clustering before they decided on the one with 4000. I was at a meeting today (hi Chris, hi Nancy), so I didn't see these messages as they were sent; maybe someone has already sent the paper.
> 
>  
> 
> Christina
> 
>  
> 
> ------
> 
> Christina K. Pikas
> 
> Librarian
> 
> The Johns Hopkins University Applied Physics Laboratory
> 
> Baltimore: 443.778.4812
> 
> D.C.: 240.228.4812
> 
> Christina.Pikas at jhuapl.edu
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
> From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of David Wojick
> Sent: Thursday, May 21, 2015 6:56 PM
> To: SIGMETRICS at listserv.utk.edu
> Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015
> 
>  
> 
> 
> Thank you, Nees Jan, but the paper is behind a paywall. Perhaps you could send me a copy, to dwojick at craigellachie.us.
> 
>  
> 
> In any case, my thinking is this. The citation network of science is seamless, or mostly so. That is, there are few, if any, clusters in which the authors cite each other and no one else. Thus any segmentation of this seamless network into clusters must rely on algorithmic assumptions that are more or less arbitrary, in the sense that alternative assumptions are available. 
> 
>  
> 
> One wonders, therefore, to what extent the university rankings are sensitive to the specific assumptions made in clustering the citation network to create the fields. Have you tested this sensitivity?
> 
>  
> 
> I suggest a test. Change the algorithm so that it creates 2000 micro-fields instead of 4000. Then rank the universities and see what difference it makes. Mind you, this is just a crude first thought, and better tests may be possible.
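> 
> As a sketch of what such a test could look like in code (purely illustrative: the data structures, field labels, and the simple percentile definition of the top 10% are my assumptions, not the Leiden Ranking's actual fractional-counting procedure), one could compute the PP(top 10%) ranking under two alternative field assignments and compare them:
> 
>     import numpy as np
>     from scipy.stats import kendalltau
> 
>     def pp_top10_ranking(pubs, field_key):
>         """Rank universities by their share of top-10% publications, given one
>         particular field assignment (e.g. 2000 or 4000 micro-fields)."""
>         by_field = {}
>         for p in pubs:
>             by_field.setdefault(p[field_key], []).append(p["citations"])
>         threshold = {f: np.percentile(c, 90) for f, c in by_field.items()}
>         tally = {}  # university -> [top-10% count, total count]
>         for p in pubs:
>             t = tally.setdefault(p["univ"], [0, 0])
>             t[0] += p["citations"] >= threshold[p[field_key]]
>             t[1] += 1
>         order = sorted(tally, key=lambda u: tally[u][0] / tally[u][1], reverse=True)
>         return {u: rank + 1 for rank, u in enumerate(order)}
> 
>     # 'pubs' would be a list of dicts with keys "univ", "citations", and two
>     # alternative field assignments, e.g. "field_2000" and "field_4000":
>     # r_2000 = pp_top10_ranking(pubs, "field_2000")
>     # r_4000 = pp_top10_ranking(pubs, "field_4000")
>     # unis = sorted(r_4000)
>     # tau, p = kendalltau([r_2000[u] for u in unis], [r_4000[u] for u in unis])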
> 
>  
> 
> I do not question the value of the work, but as you know using metrics in this way is itself a sensitive issue.
> 
>  
> 
> My best regards,
> 
>  
> 
> David
> 
> 
> On May 21, 2015, at 1:51 AM, "Eck, N.J.P. van" <ecknjpvan at CWTS.LEIDENUNIV.NL> wrote:
> 
> 
> Dear David,
> 
>  
> 
> The 4000 fields are constructed using a clustering algorithm based on citation relations between publications. A detailed explanation is provided in the following paper: http://dx.doi.org/10.1002/asi.22748.
> 
>  
> 
> The clustering methodology for constructing the fields is fully transparent. The methodology is documented in the above-mentioned paper, and the computer software that is required to implement the methodology is freely available (open source) at www.ludowaltman.nl/slm/. It is true that the results produced by the clustering methodology are not transparent. The assignment of individual publications to the 4000 fields is not visible. As already mentioned, this is something that hopefully can be improved in the future. Please keep in mind that there is a growing consensus among bibliometricians that the use of the Web of Science subject categories for field normalization of bibliometric indicators is unsatisfactory and does not yield sufficiently accurate results. The normalization approach that is taken in the Leiden Ranking offers a more accurate alternative, but indeed the transparency of the Web of Science subject categories is lost.
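> 
> As an aside, to give a feel for how a resolution-type parameter controls the granularity of a clustering, here is a small sketch using the Louvain implementation in networkx. This is only an illustration under assumed inputs: it is not the SLM software linked above, and the random graph merely stands in for a real publication-level citation network.
> 
>     import networkx as nx
>     from networkx.algorithms.community import louvain_communities
> 
>     # Stand-in for a citation network (random graph, for illustration only).
>     G = nx.gnp_random_graph(500, 0.02, seed=42)
> 
>     # A higher resolution yields more, smaller clusters (finer "micro-fields").
>     for resolution in (0.5, 1.0, 2.0):
>         communities = louvain_communities(G, resolution=resolution, seed=42)
>         print(f"resolution={resolution}: {len(communities)} clusters")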
> 
>  
> 
> Best regards,
> 
> Nees Jan
> 
>  
> 
>  
> 
> From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of David Wojick
> Sent: Wednesday, May 20, 2015 11:23 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015
> 
>  
> 
> 
> Dear Nees Jan,
> 
> How do you apply 4000 field categories to individual papers? A semantic algorithm? Is this explained on the website? It sounds very difficult.
> 
> Also, if the categories are not visible, how is the methodology transparent?
> 
> My best wishes,
> 
> David
> http://insidepublicaccess.com/
> 
> At 04:06 PM 5/20/2015, you wrote:
> 
> Dear Loet,
>  
> Yes, your understanding is correct. MNCS, TNCS, PP(top 10%), P(top 10%), and the other field-normalized impact indicators all use the 4000 fields for the purpose of normalization. The Web of Science subject categories are not used.
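> 
> As a toy illustration of what normalization against publication-level fields amounts to for an indicator like MNCS (my sketch with made-up numbers, not CWTS code; the real indicator also corrects for publication year and document type): each publication's citation count is divided by the mean citation count of its micro-field, and these normalized scores are averaged per university.
> 
>     from statistics import mean
> 
>     # Hypothetical publications: (university, micro-field id, citation count).
>     pubs = [
>         ("Univ A", 17, 4), ("Univ A", 17, 0), ("Univ A", 2301, 12),
>         ("Univ B", 17, 8), ("Univ B", 2301, 3), ("Univ B", 2301, 9),
>     ]
> 
>     # Mean citation count per micro-field (the normalization reference).
>     field_cites = {}
>     for _, field, cites in pubs:
>         field_cites.setdefault(field, []).append(cites)
>     field_mean = {f: mean(c) for f, c in field_cites.items()}
> 
>     # MNCS: average of field-normalized citation scores over a university's publications.
>     for univ in ("Univ A", "Univ B"):
>         scores = [c / field_mean[f] for u, f, c in pubs if u == univ]
>         print(univ, "MNCS =", round(mean(scores), 2))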
>  
> Unfortunately, the 4000 fields are not visible. Because these fields are defined at the level of individual publications rather than at the journal level, there is no easy way to make the fields visible. This is something that hopefully can be improved in the future.
>  
> We have decided to move from 800 to 4000 fields because our analyses indicate that with 800 fields there still is too much heterogeneity in citation density within fields. A detailed analysis of the effect of performing field normalization at different levels of aggregation is reported in the following paper by Javier Ruiz-Castillo and Ludo Waltman: http://dx.doi.org/10.1016/j.joi.2014.11.010. In this paper, it is also shown that at the level of entire universities field-normalized impact indicators are quite insensitive to the choice of an aggregation level.
>  
> Best regards,
> Nees Jan
>  
>  
> From: ASIS&T Special Interest Group on Metrics [ mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
> Sent: Wednesday, May 20, 2015 9:28 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] CWTS Leiden Ranking 2015
>  
> Dear Nees Jan, 
>  
> As always impressive! Thank you.
>  
> Are the approximately 4,000 fields also visible in one way or another? Do I correctly understand that MNCS is defined in relation to these 4,000 fields and not to the 251 WCs? Is there a concordance table between the fields and WCs as there is between WCs and five broad fields in the Excel sheet? 
>  
> I think that I understand from your and Ludo’s previous publications how the 4,000 fields are generated. Why are there 4,000 such fields in 2015, and 800+ in 2014? Isn’t it amazing that trends can be smooth despite the discontinuities? Or are the indicators robust across these scales?
>  
> Best wishes, 
> Loet
>  
> 
>  
> 
> Loet Leydesdorff 
> Emeritus University of Amsterdam
> Amsterdam School of Communication Research (ASCoR)
> loet at leydesdorff.net ; http://www.leydesdorff.net/ 
> Honorary Professor, SPRU, University of Sussex; 
> Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing;
> Visiting Professor, Birkbeck, University of London; 
> http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
>  
> From: ASIS&T Special Interest Group on Metrics [ mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Eck, N.J.P. van
> Sent: Wednesday, May 20, 2015 8:27 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: [SIGMETRICS] CWTS Leiden Ranking 2015
>  
> Release of the CWTS Leiden Ranking 2015
> Today CWTS has released the 2015 edition of the Leiden Ranking. The CWTS Leiden Ranking 2015 offers key insights into the scientific performance of 750 major universities worldwide. A sophisticated set of bibliometric indicators provides statistics on the scientific impact of universities and on universities’ involvement in scientific collaboration. The CWTS Leiden Ranking 2015 is based on Web of Science indexed publications from the period 2010–2013.
>  
> Improvements and new features in the 2015 edition
> Compared with the 2014 edition of the Leiden Ranking, the 2015 edition includes a number of enhancements. First of all, the 2015 edition offers the possibility to perform trend analyses. Bibliometric statistics are available not only for the period 2010–2013 but also for earlier periods. Second, the 2015 edition of the Leiden Ranking provides new impact indicators based on counting publications that belong to the top 1% or top 50% of their field. And third, improvements have been made to the presentation of the ranking. Size-dependent indicators are presented in a more prominent way, and it is possible to obtain a convenient one-page overview of all bibliometric statistics for a particular university.
>  
> Differences with other university rankings
> Compared with other university rankings, the Leiden Ranking offers more advanced indicators of scientific impact and collaboration and uses a more transparent methodology. The Leiden Ranking does not rely on highly subjective data obtained from reputational surveys or on data provided by universities themselves. Also, the Leiden Ranking refrains from aggregating different dimensions of university performance into a single overall indicator.
>  
> Website
> The Leiden Ranking is available at www.leidenranking.com.
>  
>  
> ========================================================
> Nees Jan van Eck PhD
> Researcher
> Head of ICT
>  
> Centre for Science and Technology Studies
> Leiden University
> P.O. Box 905
> 2300 AX Leiden
> The Netherlands
>  
> Willem Einthoven Building, Room B5-35
> Tel: +31 (0)71 527 6445
> Fax: +31 (0)71 527 3911
> E-mail: ecknjpvan at cwts.leidenuniv.nl
> Homepage: www.neesjanvaneck.nl
> VOSviewer: www.vosviewer.com
> CitNetExplorer: www.citnetexplorer.nl
> ========================================================
>  
>  