British Classification Soc post-RAE talk/discussion - 6 July (fwd)
Loet Leydesdorff
loet at LEYDESDORFF.NET
Wed Jun 6 14:23:30 EDT 2007
I look forward to your multivariate regression model for explaining the RAE rankings.
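
For concreteness, here is a minimal sketch (Python with numpy and
scikit-learn; the metric names, weights, and data are invented
placeholders, not a real analysis) of what such a calibration might
look like:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_depts = 150

# Candidate metrics per department; names and values are invented.
metrics = ["citations", "h_index", "impact_factor",
           "downloads", "coauthorship"]
X = rng.normal(size=(n_depts, len(metrics)))

# Synthetic RAE panel rankings as the dependent variable. The added
# noise stands in for the panels' own Type I / Type II errors; it
# perturbs the calibrated weights and caps the attainable R^2.
true_weights = np.array([0.5, 0.3, 0.1, 0.2, 0.05])
panel_rank = X @ true_weights + rng.normal(scale=0.6, size=n_depts)

# Calibrate the metric weights to maximize the joint (multiple)
# correlation with the panel rankings.
model = LinearRegression().fit(X, panel_rank)
for name, w in zip(metrics, model.coef_):
    print(f"{name:>14s}: weight {w:+.2f}")
print(f"multiple R^2 with panel rankings: {model.score(X, panel_rank):.2f}")

As argued in my message below, if the rankings themselves contain
errors of the order of 30%, weights calibrated against them reproduce
those errors along with the signal.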
Best wishes, Loet
________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/
> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stevan Harnad
> Sent: Wednesday, June 06, 2007 7:07 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] British Classification Soc post-RAE
> talk/discussion - 6 July (fwd)
>
> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> On Tue, 5 Jun 2007, Loet Leydesdorff wrote:
>
> >> SH:
> >> "Publications, journal impact factors, citations, co-citations,
> >> citation chronometrics (age, growth, latency to peak, decay rate),
> >> hub/authority scores, h-index, prior funding, student counts,
> >> co-authorship scores, endogamy/exogamy, textual proximity,
> >> download/co-downloads and their chronometrics, etc. can all be
> >> tested and validated jointly, discipline by discipline, against
> >> their RAE panel rankings in the forthcoming parallel panel-based
> >> and metric RAE in 2008. The weights of each predictor can be
> >> calibrated to maximize the joint correlation with the rankings."
> >
> > Dear Stevan,
> >
> > I took this from:
> > Harnad, S. (2007) Open Access Scientometrics and the UK Research
> > Assessment Exercise. In Proceedings of the 11th Annual Meeting of
> > the International Society for Scientometrics and Informetrics (in
> > press), Madrid, Spain; http://eprints.ecs.soton.ac.uk/13804/
> >
> > It is now very clear: your aim is to explain the RAE rankings (as
> > the dependent variable). I remain puzzled as to why one would wish
> > to do so. One can expect Type I and Type II errors in these
> > rankings; I would expect both to be on the order of 30% (given the
> > literature). If you were able to reproduce ("calibrate") these
> > rankings using multivariate regression, you would also reproduce
> > the error terms.
>
> Dear Loet,
>
> You are quite right that the RAE panel rankings are themselves merely
> predictive measures, not face-valid criteria, and will hence have
> errors, noise and bias to varying degrees.
>
> But the RAE panel rankings are the only thing the RAE outcome has been
> based on for nearly two decades now! The objective is first to replace
> the expensive and time-consuming panel reviews with metrics that give
> roughly the same rankings. Then we can work on making the metrics even
> more valid and predictive.
>
> First things first: If the panel rankings have been good enough for
> the RAE, then metrics that give the same outcome should be at least
> good enough too. Being far less costly and labor-intensive, and far
> more transparent, they are vastly to be preferred (with a much reduced
> panel role in validity checking and calibration).
>
> Then we can work on optimizing them.
>
> Stevan
>
> PS Of course there are additional ways of validating metrics, apart
> from the RAE; moreover, only the UK has the RAE. But that also makes
> the UK an ideal test-bed for prima facie validation of the metrics,
> systematically, across fields and institutions.
>