Citation statistics

Loet Leydesdorff loet at LEYDESDORFF.NET
Wed Jun 18 05:45:21 EDT 2008


Yes, "granularity of the normalization" is the right concept. What are the
units of analysis? A department can be a mixed bag. If one wishes to predict
the RAE ratings, one has to use departments as units of analysis. However,
if one wishes to predict research performance, one had better define research
groups that are driven by developments at (cognitive) research fronts. 

The use of departments as units of analysis is an administrative artifact. 

With best wishes, 


Loet

________________________________

Loet Leydesdorff 
Amsterdam School of Communications Research (ASCoR), 
Kloveniersburgwal 48, 1012 CX Amsterdam. 
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 
loet at leydesdorff.net ; http://www.leydesdorff.net/ 

 

> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics 
> [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Jonathan Adams
> Sent: Wednesday, June 18, 2008 11:23 AM
> To: SIGMETRICS at listserv.utk.edu
> Subject: Re: [SIGMETRICS] Citation statistics
> 
> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
> 
> There is also an analysis of the UK Psychology data in our recent paper
> in Scientometrics [J Adams, K Gurney and L Jackson. Calibrating the
> zoom - a test of Zitt's hypothesis. Scientometrics, Vol. 75, No. 1
> (2008) 81-95, DOI: 10.1007/s11192-007-1832-7].
> The paper is about the extent to which the correlation (between average
> citation impact and peer-reviewed RAE grade) is affected by the
> granularity of normalisation. For Psychology, Figure 1 shows the spread
> of impact for individual institutions awarded grade 4, 5 or 5* at
> RAE2001. Spearman gives a more significant correlation for the
> Psychology data than for Biology or Physics (Table 4), which may suggest
> some particular characteristics (perhaps greater homogeneity) of the
> Thomson Reuters recorded publication and citation data for Psychology.
> However, although the correlation is 'significant', the residual variance
> evident in Figure 1 would pose an interesting challenge to using the
> impact data as the sole determinant of funding.
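As a toy illustration of the statistic being discussed, the following sketch computes a Spearman rank correlation over department-level data. All numbers here are invented for illustration; they are not the data from the Scientometrics paper.

```python
# Toy illustration (invented numbers, NOT the data from the paper):
# Spearman rank correlation between departmental mean citation impact
# and an RAE-style grade, with ties given average ranks.

def ranks(xs):
    """1-based ranks, with tied values assigned the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical departments: mean normalised citation impact vs. grade
# (grade 5* coded as 6).
impact = [0.9, 1.2, 0.8, 1.5, 1.1, 1.8, 1.6, 2.0]
grade = [4, 4, 4, 5, 5, 5, 6, 6]

rho = spearman(impact, grade)
print(rho)  # strongly positive, yet well short of 1: residual variance
```

A rho well below 1 despite "significance" is precisely the residual variance that makes impact data questionable as a sole funding determinant.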
> 
> 
> Jonathan Adams
>  
> Director, Evidence Ltd
> + 44 113 384 5680
>  
> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
> Sent: 18 June 2008 07:35
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] Citation statistics
> 
> 
> Dear Stevan, 
> 
> I have now read the article on which you base your argument, entitled "The
> correlation between RAE ratings and citation counts in psychology" (Andy
> Smith and Mike Eysenck, Department of Psychology, Royal Holloway,
> University of London, June 2002; at
> http://cogprints.org/2749/1/citations.pdf). 
> 
> "The above correlations are remarkably high, especially when account is
> taken of the fact that there is nothing like a perfect overlap between
> the individuals included in our analysis of citation counts and those
> included in the two RAEs."
> 
> "Despite very high correlations between citations and RAE grades, there
> are a few individual departments that depart from the trend. These appear
> as 'outliers' in Fig. 2. Herein lies a danger."
> 
> "The difference between the correlations for the two RAE years is small
> and statistically nonsignificant. However, the fact that citation counts
> are historical, in the sense of referring to research reported some time
> earlier, means that it is reasonable to expect them to predict previous
> RAE ratings better than future ratings."
> 
> It made me remember that the chairperson of the Nobel Prize committee
> once told John Irvine that he always checked citation rates for the top
> candidates. Yes, there is a correlation between the indicators. However,
> in my opinion, the issue is eventually not to predict future RAE ratings
> --as you argue-- but to predict future performance. 
> 
> Best wishes, 
> 
> 
> Loet
> 
> ________________________________
> 
> Loet Leydesdorff 
> Amsterdam School of Communications Research (ASCoR), 
> Kloveniersburgwal 48, 1012 CX Amsterdam. 
> Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 
> loet at leydesdorff.net ; http://www.leydesdorff.net/ 
> 
>  
> 
> > -----Original Message-----
> > From: Loet Leydesdorff [mailto:loet at leydesdorff.net] 
> > Sent: Monday, June 16, 2008 7:40 PM
> > To: 'ASIS&T Special Interest Group on Metrics'
> > Subject: RE: [SIGMETRICS] Citation statistics
> > 
> > Dear Stevan, 
> > 
> > If I understand correctly, your claim is that ranking 
> > results based on peer review at the departmental level 
> > correlate highly with MEAN departmental citation rates. This 
> > would be the case for psychology, wouldn't it? 
> > 
> > It is an amazing result because one does not expect citation 
> > rates to be normally distributed. (The means of the citation 
> > rates, of course, are approximately normally distributed.) In my own 
> > department, for example (in communication studies), we have 
> > various communities (social psychologists, information 
> > scientists, political scientists) with very different citation 
> > patterns. But perhaps British psychology departments are 
> > exceptionally homogeneous, both internally and comparatively. 
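The distributional point can be illustrated with a short simulation over synthetic data. The lognormal shape chosen for individual citation counts is purely an assumption for illustration; only the qualitative contrast (skewed counts, near-symmetric means) is the point.

```python
# Synthetic illustration: individual citation counts drawn from a
# skewed (lognormal-like) distribution, versus means over departments
# of 100 such papers. The lognormal shape is an assumption made only
# for this sketch.
import random
import statistics

random.seed(42)

def cites():
    # Most papers are cited little, a few heavily.
    return int(random.lognormvariate(mu=1.0, sigma=1.0))

papers = [cites() for _ in range(5000)]
dept_means = [statistics.mean(cites() for _ in range(100))
              for _ in range(200)]

def skewness(xs):
    """Standardized third moment: 0 for a symmetric distribution."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return statistics.mean(((x - m) / s) ** 3 for x in xs)

# Per-paper counts are strongly right-skewed; department means are far
# closer to symmetric, as the central limit theorem predicts.
print(skewness(papers), skewness(dept_means))
```

This is why correlating MEAN departmental rates with ratings can behave well statistically even when the underlying per-paper distributions are anything but normal.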
> > 
> > Then, you wish to strengthen this correlation by adding more 
> > indicators. The other indicators may correlate better or 
> > worse with the ratings. The former can strengthen the 
> > correlations, while the latter would weaken them. Or do you 
> > wish only to add indicators which improve the correlations 
> > with the ratings? 
> > 
> > I remember from a previous conversation on this subject that 
> > you have a kind of multivariate regression model in mind in 
> > which the RAE ratings would be the dependent variable. One 
> > can make the model fit the rankings by estimating the 
> > parameters. One can also refine this per discipline. Would 
> > one expect any predictive power in such a model in a new 
> > situation (after 4 years)? Why?
> > 
> > With best wishes, 
> > 
> > 
> > Loet
> > 
> > ________________________________
> > 
> > Loet Leydesdorff 
> > Amsterdam School of Communications Research (ASCoR), 
> > Kloveniersburgwal 48, 1012 CX Amsterdam. 
> > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 
> > loet at leydesdorff.net ; http://www.leydesdorff.net/ 
> > 
> >  
> > 
> > > -----Original Message-----
> > > From: ASIS&T Special Interest Group on Metrics 
> > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stevan Harnad
> > > Sent: Monday, June 16, 2008 3:20 PM
> > > To: SIGMETRICS at LISTSERV.UTK.EDU
> > > Subject: Re: [SIGMETRICS] Citation statistics
> > > 
> > > 
> > > On Sun, 15 Jun 2008, Loet Leydesdorff wrote:
> > > 
> > > >> SH: But what all this valuable, valid cautionary discussion
> > > >> overlooks is not only the possibility but the *empirically
> > > >> demonstrated fact* that there exist metrics that are highly
> > > >> correlated with human expert rankings.
> > > >
> > > > It seems to me that it is difficult to generalize from one setting
> > > > in which human experts and certain ranks coincided to the
> > > > *existence* of such correlations across the board. Much may depend
> > > > on how the experts are selected. I did some research in which
> > > > referee reports did not correlate with citation and publication
> > > > measures.
> > > 
> > > Much may depend on how the experts are selected, but that was just as
> > > true during the 20 years in which rankings by experts were the sole
> > > criterion for the rankings in the UK Research Assessment Exercise
> > > (RAE). (In validating predictive metrics one must not endeavor to be
> > > holier than the Pope: your predictor can at best hope to be as good
> > > as, but not better than, your criterion.)
> > > 
> > > That said: all correlations to date between total departmental author
> > > citation counts (not journal impact factors!) and RAE peer rankings
> > > have been positive, sizable, and statistically significant for the
> > > RAE, in all disciplines and all years tested. Variance there will be,
> > > always, but a good-sized component from citations alone seems to be
> > > well established. Please see the studies of Professor Oppenheim and
> > > others, for example as cited in:
> > > 
> > >     Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated
> > >     online RAE CVs Linked to University Eprint Archives: Improving the
> > >     UK Research Assessment Exercise whilst making it cheaper and
> > >     easier. Ariadne 35.
> > >     http://www.ariadne.ac.uk/issue35/harnad/
> > > 
> > > > Human experts are necessarily selected from a population of
> > > > experts, and it is often difficult to delineate between fields of
> > > > expertise.
> > > 
> > > Correct. And the RAE rankings are done separately, discipline by
> > > discipline; the validation of the metrics should be done that way too.
> > > 
> > > Perhaps there is sometimes a case for separate rankings even at the
> > > sub-disciplinary level. I expect the departments will be able to sort
> > > that out. (And note that the RAE correlations do not constitute a
> > > validation of metrics for evaluating individuals: I am confident that
> > > that too will be possible, but it will require many more metrics and
> > > much more validation.)
> > > 
> > > > Similarly, we know from quite some research that citation and
> > > > publication practices are field-specific and that fields are not so
> > > > easy to delineate. Results may be very sensitive to choices made,
> > > > for example in terms of citation windows.
> > > 
> > > As noted, some of the variance in peer judgments will depend on the
> > > sample of peers chosen; that is unavoidable. That is also why "light
> > > touch" peer re-validation, spot-checks, updates and optimizations on
> > > the initialized metric weights are also a good idea, across the years.
> > > 
> > > As to the need to evaluate sub-disciplines independently: that
> > > question exceeds the scope of metrics and metric validation.
> > > 
> > > > Thus, I am a bit doubtful about your claims of an "empirically
> > > > demonstrated fact."
> > > 
> > > Within the scope mentioned -- the RAE peer rankings, for disciplines
> > > such as they have been partitioned for the past two decades -- there
> > > are ample grounds for confidence in the empirical results to date.
> > > 
> > > (And please note that this has nothing to do with journal impact
> > > factors, journal field classification, or journal rankings. It is
> > > about the RAE and the ranking of university departments by peer
> > > panels, as correlated with citation counts.)
> > > 
> > > Stevan Harnad
> > > AMERICAN SCIENTIST OPEN ACCESS FORUM:
> > > http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
> > >      http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
> > > 
> > > UNIVERSITIES and RESEARCH FUNDERS:
> > > If you have adopted or plan to adopt a policy of providing Open Access
> > > to your own research article output, please describe your policy at:
> > >      http://www.eprints.org/signup/sign.php
> > >      http://openaccess.eprints.org/index.php?/archives/71-guid.html
> > >      http://openaccess.eprints.org/index.php?/archives/136-guid.html
> > > 
> > > OPEN-ACCESS-PROVISION POLICY:
> > >      BOAI-1 ("Green"): Publish your article in a suitable toll-access
> > >      journal
> > >      http://romeo.eprints.org/
> > > OR
> > >      BOAI-2 ("Gold"): Publish your article in an open-access journal
> > >      if/when a suitable one exists.
> > >      http://www.doaj.org/
> > > AND
> > >      in BOTH cases self-archive a supplementary version of your
> > >      article in your own institutional repository.
> > >      http://www.eprints.org/self-faq/
> > >      http://archives.eprints.org/
> > >      http://openaccess.eprints.org/
> > >      http://openaccess.eprints.org/
> > > 
> > 
> 


