Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance

Loet Leydesdorff loet at LEYDESDORFF.NET
Tue Mar 23 02:41:16 EDT 2010



Normalization, CWTS indicators, and the Leiden Rankings: 
Differences in citation behavior at the level of fields

Authors: Loet Leydesdorff
<http://arxiv.org/find/physics/1/au:+Leydesdorff_L/0/1/0/all/0/1>, Tobias
Opthof <http://arxiv.org/find/physics/1/au:+Opthof_T/0/1/0/all/0/1>
(Submitted on 21 Mar 2010)

Abstract: Van Raan et al. (2010; arXiv:1003.2113
<http://arxiv.org/abs/1003.2113>) have proposed a new indicator (MNCS) for
field normalization. Since field normalization is also used in the Leiden
Rankings of universities, we elaborate our critique of journal normalization
in Opthof & Leydesdorff (2010; arXiv:1002.2769
<http://arxiv.org/abs/1002.2769>) in this rejoinder concerning field
normalization. Fractional citation counting thoroughly solves the issue of
normalization for differences in citation behavior among fields. This
indicator can also be used to obtain a normalized impact factor.

Subjects: Physics and Society (physics.soc-ph)
Cite as: arXiv:1003.3977v1 <http://arxiv.org/abs/1003.3977v1> [physics.soc-ph]
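
Concretely, fractional citation counting here means (roughly) that each
citation received is weighted by the inverse of the number of cited
references in the citing paper, so that differences in citation behavior
are corrected on the citing side rather than via a field classification.
In symbols (notation not taken from the paper itself):

  c_{frac}(p) = \sum_{d \in C(p)} \frac{1}{R_d}

where C(p) is the set of documents citing paper p and R_d is the number of
cited references in the citing document d.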
 
  _____  

Loet Leydesdorff 
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
loet at leydesdorff.net ; http://www.leydesdorff.net/

 

  _____  

From: Linda Butler [mailto:linda.butler at anu.edu.au] 
Sent: Saturday, March 20, 2010 6:01 AM
To: Loet Leydesdorff; Ludo Waltman
Cc: SIGMETRICS at listserv.utk.edu
Subject: Re: [SIGMETRICS] Caveats for the journal and field normalizations
in the CWTS ("Leiden") evaluations of research performance


Loet, Ludo 

There is another reason why bibliometricians are starting to do calculations
at the individual level, which doesn't seem to have been mentioned in any of
the papers referred to in this discussion.

That reason is to make it easier to re-aggregate publication sets in
different ways.

Let me give a "live" example.  The Australian Research Council (ARC)
trialled a new assessment system late last year.  In physics, chemistry and
earth sciences (the only sciences assessed in the trial - the other
disciplines were all in the humanities and creative arts) bibliometrics
played a key role.  Several indicators were used, of which one was Relative
Citation Impact (RCI).  It was calculated by determining the RCI for each
individual paper and then computing the average of those RCIs to give an
institutional RCI, i.e. the method Loet is championing and the one that Ludo
reveals CWTS is now also working on.
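
For concreteness, here is a minimal sketch of that article-by-article
calculation in Python; the numbers and field baselines are invented
placeholders, not ARC data:

# Relative Citation Impact per paper: citations received divided by the
# world-average citation rate for the paper's field and publication year.
# The institutional RCI is then the plain mean of the per-paper RCIs.

world_baseline = {("Geology", 2007): 4.2, ("Astronomy", 2008): 6.8}  # invented

papers = [
    {"field": "Geology",   "year": 2007, "citations": 9},
    {"field": "Geology",   "year": 2007, "citations": 1},
    {"field": "Astronomy", "year": 2008, "citations": 12},
]

rcis = [p["citations"] / world_baseline[(p["field"], p["year"])] for p in papers]
institutional_rci = sum(rcis) / len(rcis)
print(round(institutional_rci, 2))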

In addition to the statistical considerations, there was a further important
advantage to the ARC in using this methodology.  The Australian Government
was only interested in assessing the performance of fields within
institutions - who had the strongest/weakest geology, astronomy, organic
chemistry, etc.  They were not concerned about which academic units these
publications came from.  But Vice-Chancellors and research managers were!
VERY interested!!  From the very first discussions on the development of the
methodology (late 2007) it became clear that an essential consideration was
the ability of universities to re-aggregate the data to groups, faculties,
research themes, or however else they wanted to group them.  I am not
necessarily supporting what they want to do - but they undoubtedly want to
do it.  An article-by-article methodology makes this a simple computational
task.
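
Once every article carries its own score, regrouping to faculties, schools,
or themes is just another mean over a different grouping; a minimal,
self-contained Python sketch with invented unit labels and RCI values:

per_paper = [
    ("School of Earth Sciences", 2.14),       # (unit, per-paper RCI) -- toy values
    ("School of Earth Sciences", 0.24),
    ("Research School of Astronomy", 1.76),
]

by_unit = {}
for unit, rci in per_paper:
    by_unit.setdefault(unit, []).append(rci)

unit_rci = {unit: sum(vals) / len(vals) for unit, vals in by_unit.items()}
print(unit_rci)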

So the methodology has already been used in a national research assessment
system, and it will be used again when all fields that are subject to
bibliometric analysis are assessed in the second half of this year.

Can I just add that I'm a little uncomfortable that CWTS seems to have
been singled out for such pointed criticism.  It's not just a "CWTS"
methodology.  Many other groups have used or are using this method, and not
just because they copied CWTS - my own unit, REPP, is a case in point.  Many
of us arrived at the same point through our own development work, and having
arrived there, are now looking to move on and improve the calculations.

It's an important issue, so let's keep the discussion going, but keep it
collegial.

regards
Linda Butler



On 20/03/2010, at 9:22 AM, Loet Leydesdorff wrote:


Dear colleagues,
 
Please find below the CWTS indicators for four papers, of which two are
cited in the same journal i or the same field i:

[formula images not preserved in the list archive]

In our opinion, one should normalize as follows:

[formula image not preserved in the list archive]
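
(Judging from the papers cited above, the contrast at issue is presumably
between the current CWTS normalization, the ratio of the sums,

  \text{CPP/FCSm} = \frac{\sum_i c_i}{\sum_i e_i},

and the mean of the paper-level ratios (the MNCS-type indicator),

  \text{MNCS} = \frac{1}{n} \sum_i \frac{c_i}{e_i},

where c_i is the number of citations of paper i and e_i is its expected
citation rate, i.e. the mean citation rate of the journal or the field of
paper i.)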


The issue is more general than CWTS, because other centers also normalize
using MOCR/MECR, that is: Mean Observed Citation Rates divided by Mean
Expected Citation Rates.  The quotient of two means is no longer a statistic
with a distribution, whereas the paper-level ratios of observed to expected
citation rates do provide a distribution (and hence a variance, standard
deviation, median, etc.).
(For example, the mean of 3/2 and 2/3 is approximately 1.08, which is not
the same as (3+2)/(2+3) = 1.)
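
A quick check of that example in Python (toy numbers from the sentence
above, nothing more):

import statistics

observed = [3, 2]   # citations received by two papers
expected = [2, 3]   # expected (journal or field average) citation rates

ratios = [o / e for o, e in zip(observed, expected)]
print(statistics.mean(ratios))        # ~1.083, the mean of the paper-level ratios
print(sum(observed) / sum(expected))  # 1.0, the ratio of the sums (MOCR/MECR)
print(statistics.stdev(ratios))       # a spread exists only for the ratios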
 
We understand from the papers indicated that in the meantime CWTS is
changing its procedures.
 
Best wishes, 
 
Loet
________________________________

Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/



> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Ludo Waltman
> Sent: Friday, March 19, 2010 10:34 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] Caveats for the journal and field
> normalizations in the CWTS ("Leiden") evaluations of research
> performance
>
> Adminstrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> Dear colleagues,
>
> Recently, Tobias Opthof and Loet Leydesdorff have written a critical paper
> (see below) about the way in which bibliometric research performance
> assessment studies are conducted by the Centre for Science and Technology
> Studies (CWTS) of Leiden University. There are a number of important
> inaccuracies in the paper by Opthof and Leydesdorff. CWTS also strongly
> disagrees with many of their comments. In the following paper CWTS replies
> to the criticism of Opthof and Leydesdorff:
>
> Anthony F.J. van Raan, Thed N. van Leeuwen, Martijn S. Visser, Nees Jan
> van Eck, and Ludo Waltman. Rivals for the crown: Reply to Opthof and
> Leydesdorff. Available at http://arxiv.org/abs/1003.2113.
>
> CWTS has also prepared a related paper on the same topic:
>
> Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser,
> and Anthony F.J. van Raan. Towards a new crown indicator: Some theoretical
> considerations. Available at http://arxiv.org/abs/1003.2167.
>
> Best regards,
>
> Ludo Waltman
>
>
> On 16/02/2010 07:46, Loet Leydesdorff wrote:
> > Adminstrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > Caveats for the journal and field normalizations in the CWTS ("Leiden")
> > evaluations of research performance
> > Journal of Informetrics (forthcoming).
> > http://arxiv.org/abs/1002.2769
> >
> > Abstract: The Center for Science and Technology Studies at Leiden
> > University advocates the use of specific normalizations for assessing
> > research performance with reference to a world average. The Journal
> > Citation Score (JCS) and Field Citation Score (FCS) are averaged for the
> > research group or individual researcher under study, and then these
> > values are used as denominators of the (mean) Citations per publication
> > (CPP). Thus, this normalization is based on dividing two averages. This
> > procedure only generates a legitimate indicator in the case of
> > underlying normal distributions. Given the skewed distributions under
> > study, one should average the observed versus expected values, which
> > are to be divided first for each publication. We show the effects of
> > the Leiden normalization for a recent evaluation where we happened to
> > have access to the underlying data.
> >
> > Tobias Opthof [1,2], Loet Leydesdorff [3]
> >
> > [1] Experimental Cardiology Group, Heart Failure Research Center,
> > Academic Medical Center AMC, Meibergdreef 9, 1105 AZ Amsterdam, The
> > Netherlands.
> >
> > [2] Department of Medical Physiology, University Medical Center
> > Utrecht, Utrecht, The Netherlands.
> >
> > [3] Amsterdam School of Communications Research (ASCoR), University of
> > Amsterdam, Kloveniersburgwal 48, 1012 CX Amsterdam, The Netherlands.
> >
> > ** apologies for cross-postings
> >
>
>
> ========================================================
> Ludo Waltman MSc
> Researcher
>
> Centre for Science and Technology Studies
> Leiden University
> P.O. Box 905
> 2300 AX Leiden
> The Netherlands
>
> Willem Einthoven Building, Room B5-35
> Tel:      +31 (0)71 527 5806
> Fax:      +31 (0)71 527 3911
> E-mail:   waltmanlr at cwts.leidenuniv.nl
> Homepage: www.ludowaltman.nl
> ========================================================
> 


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Linda Butler
linda.butler52 at gmail.com

landline: +61 (0)2 4982 7994
mobile: 0428 598 482
url: http://members.optuszoo.com.au/linda.butler52
ABN:  83 884 783 826


