Open Access Metrics: Use REF2014 to Validate Metrics for REF2020
harnad at ECS.SOTON.AC.UK
Fri Dec 19 10:46:39 EST 2014
> On Dec 19, 2014, at 3:07 AM, Loet Leydesdorff <loet at leydesdorff.net> wrote:
> What is being proposed is to validate a metric battery so that if it proves to predict the peer rankings (such as they are, warts and all) sufficiently well, then it can replace (or at least supplement) them.
> The multiple regression analysis is static: you can fine-tune your parameters for REF2014; but in 2020, not only the parameters, but also the latent dimensions of the system will have changed. Thus, your previous estimate will not match (unless the system would be very conservative; quod non).
> Of course, it remains interesting to compare the observed with the expected values, but the difference will not inform you about the validity of your model or the “error” in the peer review (in 2020).
> Nevertheless: please, do the exercise! It may be a bit frustrating to have to wait until 2020 for the observations.
> Fitting the parameters for REF 2014 may not be sufficiently interesting in itself.
The REF2014 regression analysis has two purposes:
(1) To test how well the 2014 rankings could have been predicted by the joint multiple metric equation
(2) To initialize the weights on the metrics (by discipline).
A validation sample of the size of the UK REF occurs only once every six years, but once the metric weights are initialized against REF2014, the weights can be continuously fine-tuned and optimized on smaller sub-samples.
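A minimal sketch of the two-step procedure, using ordinary least squares as a stand-in for the proposed multiple-regression battery. All data, metric counts, and the damped-update rule are illustrative assumptions, not the actual REF methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 units of assessment in one discipline, scored on
# 5 metrics (citations, downloads, etc. -- names purely illustrative).
n_units, n_metrics = 200, 5
X = rng.normal(size=(n_units, n_metrics))
true_w = np.array([0.6, 0.3, 0.0, 0.1, -0.2])
# Stand-in for the REF2014 peer-review scores (signal plus noise).
y = X @ true_w + rng.normal(scale=0.5, size=n_units)

# (2) Initialize the metric weights by regressing on the REF2014 rankings.
w0, *_ = np.linalg.lstsq(X, y, rcond=None)

# (1) Test how well the jointly weighted metrics predict the peer ranking.
pred = X @ w0
r = np.corrcoef(pred, y)[0, 1]

# Later: fine-tune on a smaller sub-sample with a damped update
# (simple shrinkage toward the sub-sample fit; one of many possible rules).
Xs, ys = X[:40], y[:40]
ws, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
alpha = 0.2  # how much weight to give the new sub-sample
w1 = (1 - alpha) * w0 + alpha * ws
```

The point of the damped update is that the large REF2014 sample anchors the weights, while smaller interim samples nudge them, so the battery need not wait six years to adapt.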
Yes, most (though not all) of the metrics are static in time. Continuous updating of the initial weights, along with Path Analysis, will help correct this. And of course the chronometrics (latency to peak, longevity, half-life, etc.) are intrinsically more dynamic.
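For concreteness, two of the chronometrics mentioned can be read straight off a yearly citation series. The counts below are invented for illustration; the definitions (peak latency as years to the maximum annual count, half-life as the first year by which half the cumulative total has accrued) are common operationalizations, not necessarily the ones any given battery would use:

```python
from itertools import accumulate

# Hypothetical yearly citation counts for one article, years 0..9 post-publication.
citations = [1, 4, 9, 12, 8, 5, 3, 2, 1, 1]

# Latency to peak: years until the maximum annual citation count.
peak_latency = citations.index(max(citations))

# Half-life: first year by which half of all citations to date have accrued.
cum = list(accumulate(citations))
half_life = next(i for i, c in enumerate(cum) if c >= cum[-1] / 2)
```

Because these quantities change as new citations arrive, re-computing them at each update cycle is what makes the chronometric components dynamic rather than static.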