Peer review & impact factor correlations, was Citation statistics

dwojick@hughes.net dwojick at HUGHES.NET
Mon Jun 16 14:08:29 EDT 2008



Dear Stevan,

The correlation here may be due to the fact that both methods (peer review and citation statistics) are measuring popularity, or how well one is known. I am studying the logic of citation. Most journal citations are not to direct predecessors; they are used to lay out the context of the research, so the leading figures are typically cited. (This is even likely to create bandwagon effects.) So it may well be the case that citation statistics are as good as peer review, and a lot more efficient.

David Wojick

----Original Message----
From: harnad at ECS.SOTON.AC.UK
Date: 06/16/2008 11:24 AM
To: 
Subj: Re: [SIGMETRICS] Citation statistics


Past RAEs (for 20+ years, about every 6-7 years or so) ranked research performance, department by department, for the preceding research interval. They did so using a large submitted dossier (not including citations or the journal impact factor, which was explicitly forbidden), consisting mainly of each researcher's 4 best papers, as evaluated by a peer panel for each discipline. The rankings turned out to be highly correlated with total departmental citation counts anyway.


The proposal now is to replace peer rankings with metrics. My proposal is to replace them with a battery of metrics, validated against the peer rankings in this last RAE 2008 parallel metric/panel exercise.


Chrs, Stevan



On 16-Jun-08, at 10:32 AM, David E. Wojick wrote:


Dear Stevan,

What is being peer reviewed and ranked in the RAE? Since the impact factor measures past performance, I presume it is also a ranking of what has happened, defined in some way.
Cheers,
David

Steve Harnad writes:

I have now read the IMU report too, and agree with Charles that it makes many valid points but it misunderstands the one fundamental point concerning the question at hand: Can and should metrics be used in place of peer-panel based rankings in the UK Research Assessment Exercise (RAE) and its successors and homologues elsewhere? And there the answer is a definite Yes.

The IMU critique points out that research metrics in particular and statistics in general are often misused, and this is certainly true. It also points out that metrics are often used without validation. This too is correct. There is also a simplistic tendency to rely on one single metric, rather than multiple metrics that can complement and correct one another. There too, a practical and methodological error is correctly pointed out. It is also true that the "journal impact factor" has many flaws, and should on no account be used to rank individual papers of researchers, and especially not alone, as a single metric.

But what all this valuable, valid cautionary discussion overlooks is not only the possibility but the empirically demonstrated fact that there exist metrics that are highly correlated with human expert rankings. It follows that, to the degree that such metrics account for the same variance, they can substitute for the human rankings. The substitution is desirable, because expert rankings are extremely costly in terms of expert time and resources. Moreover, a metric that can be shown to be highly correlated with an already validated predictor variable (such as expert rankings) thereby itself becomes a validated predictor variable. And this is why the answer to the basic question of whether the RAE's decision to convert to metrics was a sound one is: Yes.
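(As an illustrative aside: the validation step described above amounts to checking how strongly a candidate metric correlates with the already validated peer rankings. A minimal sketch in Python, using entirely hypothetical departmental data, might look like this; the numbers and variable names are assumptions for illustration only.)

# Minimal sketch: correlating a candidate metric with validated peer rankings.
# All data here are hypothetical; in practice the peer ranks would come from
# the RAE panel outcomes and the metric from departmental citation counts.
from scipy.stats import spearmanr

# Hypothetical departments: peer-panel rank (1 = best) and total citations.
peer_rank = [1, 2, 3, 4, 5, 6, 7, 8]
citations = [5200, 4800, 3100, 2900, 2500, 1800, 1200, 900]

# Spearman's rho is suitable because the panel outcome is an ordinal ranking.
rho, p_value = spearmanr(peer_rank, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strongly negative rho (rank 1 = best, citations high) would indicate the
# metric captures much of the same variance as the peer rankings.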

Nevertheless, the IMU's cautions are welcome: Metrics do need to be validated; they do need to be multiple, rather than a single, unidimensional index; they do have to be separately validated for each discipline, and the weights on the multiple metrics need to be calibrated and adjusted both for the discipline being assessed and for the properties on which it is being ranked. The RAE 2008 database provides the ideal opportunity to do all this discipline-specific validation and calibration, because it is providing parallel data from both peer panel rankings and metrics. The metrics, however, should be as rich and diverse as possible, to capitalize on this unique opportunity for joint validation.
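(Again purely as an illustrative sketch, not a description of the actual RAE procedure: calibrating the weights on a battery of metrics against panel rankings, separately for each discipline, could be framed as a simple per-discipline regression. All column names and data below are assumptions made up for the example.)

# Sketch: per-discipline calibration of weights on a battery of metrics
# against peer-panel scores, via ordinary least squares. Hypothetical data.
import numpy as np

# Each row: one department's metrics [citations, downloads, h-index-like score].
metrics_by_discipline = {
    "physics":   (np.array([[5200, 900, 45], [3100, 700, 30], [1200, 300, 12],
                            [2500, 500, 22], [4800, 850, 41]], dtype=float),
                  np.array([6.8, 5.1, 2.0, 4.2, 6.5])),   # panel scores
    "sociology": (np.array([[800, 400, 15], [600, 350, 11], [300, 150, 6],
                            [900, 500, 17], [450, 200, 8]], dtype=float),
                  np.array([6.0, 4.9, 2.5, 6.4, 3.3])),
}

for discipline, (X, panel_score) in metrics_by_discipline.items():
    # Add an intercept column, then fit discipline-specific weights.
    X1 = np.column_stack([np.ones(len(X)), X])
    weights, *_ = np.linalg.lstsq(X1, panel_score, rcond=None)
    predicted = X1 @ weights
    r = np.corrcoef(predicted, panel_score)[0, 1]
    print(f"{discipline}: weights = {np.round(weights, 4)}, "
          f"correlation with panel = {r:.2f}")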
snip




