Citation statistics

Stevan Harnad harnad at ECS.SOTON.AC.UK
Mon Jun 16 11:24:38 EDT 2008


Dear David:

Past RAEs (running for 20+ years, roughly every 6-7 years) ranked
research performance, department by department, for the preceding
research interval. They did so using a large submitted dossier
(explicitly excluding citation counts and the journal impact factor,
which was forbidden), consisting mainly of each researcher's 4 best
papers, as evaluated by a peer panel for each discipline. The rankings
nevertheless turned out to be highly correlated with total
departmental citation counts.

The proposal now is to replace the peer rankings with metrics. My
proposal is to replace them with a battery of metrics, validated
against the peer rankings in this last, parallel metric/panel
exercise, RAE 2008.
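
To make the proposed validation concrete, here is a minimal sketch in
Python of what validating a single candidate metric against the panel
rankings could look like. All numbers are invented for illustration;
nothing here is actual RAE data:

    # Minimal validation sketch: does a candidate metric (total
    # departmental citations) track the peer-panel ranking?
    # All figures below are hypothetical, not RAE data.
    from scipy.stats import spearmanr

    # Peer-panel quality scores for 8 departments (higher = better)
    panel_score = [7, 6, 6, 5, 4, 3, 2, 1]
    # Total citation counts for the same 8 departments
    citations = [950, 870, 700, 640, 410, 380, 290, 150]

    rho, p = spearmanr(panel_score, citations)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    # A high rho would mean the metric captures much of what the
    # panel ranking measures, and could stand in for it.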

Chrs, Stevan

On 16-Jun-08, at 10:32 AM, David E. Wojick wrote:

> Dear Stevan,
>
> What is being peer-reviewed and ranked in the RAE? Since the impact
> factor measures past performance, I presume it is also a ranking of
> what has happened, defined in some way.
> Cheers,
> David
>
> Stevan Harnad writes:
>> I have now read the IMU report too, and agree with Charles that it
>> makes many valid points but misunderstands the one fundamental
>> point concerning the question at hand: Can and should metrics be
>> used in place of peer-panel-based rankings in the UK Research
>> Assessment Exercise (RAE) and its successors and homologues
>> elsewhere? And there the answer is a definite Yes.
>
> The IMU critique points out that research metrics in particular and
> statistics in general are often misused, and this is certainly true.
> It also points out that metrics are often used without validation.
> This too is correct. There is also a simplistic tendency to rely on
> one single metric, rather than on multiple metrics that can
> complement and correct one another. Here too, a practical and
> methodological error is correctly pointed out. It is also true that
> the "journal impact factor" has many flaws, and should on no account
> be used to rank individual papers or researchers, and especially
> not alone, as a single metric.
>
> But what all this valuable, valid cautionary discussion overlooks is  
> not only the possibility but the empirically demonstrated fact that  
> there exist metrics that are highly correlated with human expert  
> rankings. It follows that to the degree that such metrics account  
> for the same variance, they can substitute for the human rankings.  
> The substitution is desirable, because expert rankings are extremely  
> costly in terms of expert time and resources. Moreover, a metric  
> that can be shown to be highly correlated with an already validated
> predictor variable (such as expert rankings) thereby itself becomes
> a validated predictor variable. And this is why the answer to the
> basic question of whether the RAE's decision to convert to metrics
> was a sound one is: Yes.
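>
> (To put an illustrative number on the "same variance" point: if a
> metric correlates with the expert rankings at, say, r = 0.9, it
> accounts for r^2 = 0.81, i.e. 81% of the variance in those
> rankings; at r = 0.7 it accounts for only 49%. The higher the
> validated correlation, the safer the substitution.)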
>
> Nevertheless, the IMU's cautions are welcome: Metrics do need to be
> validated; they do need to be multiple, rather than a single,
> unidimensional index; they do have to be separately validated for
> each discipline; and the weights on the multiple metrics need to be
> calibrated and adjusted both for the discipline being assessed and
> for the properties on which it is being ranked. The RAE 2008
> database provides the ideal opportunity to do all this
> discipline-specific validation and calibration, because it provides
> parallel data from both peer-panel rankings and metrics. The
> metrics, however, should be as rich and diverse as possible, to
> capitalize on this unique opportunity for joint validation.
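>
> As a sketch of what such a joint calibration could look like (the
> metric names and all numbers below are purely hypothetical, not
> from the RAE database), one could regress the panel scores on the
> metric battery within each discipline; the fitted weights are the
> discipline-specific calibration:
>
>     import numpy as np
>
>     # Hypothetical: rows = departments in one discipline; columns
>     # = candidate metrics (citations, h-index, downloads).
>     metrics = np.array([[950, 32, 12000],
>                         [870, 28,  9500],
>                         [700, 30,  8800],
>                         [640, 25,  8700],
>                         [410, 18,  4300],
>                         [290, 12,  3100]], dtype=float)
>     panel_score = np.array([4.0, 3.5, 3.2, 3.0, 2.0, 1.5])
>
>     # Least-squares fit: per-discipline weights on the battery
>     # (with an intercept column prepended).
>     X = np.column_stack([np.ones(len(panel_score)), metrics])
>     weights, *_ = np.linalg.lstsq(X, panel_score, rcond=None)
>
>     # Correlation between the weighted battery and panel scores.
>     R = np.corrcoef(X @ weights, panel_score)[0, 1]
>     print(f"R = {R:.2f}; weights = {weights.round(4)}")
>     # Repeating this fit discipline by discipline yields the
>     # discipline-specific weights the calibration calls for.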
> snip
