Open Access Metrics: Use REF2014 to Validate Metrics for REF2020
Stevan Harnad
harnad at ECS.SOTON.AC.UK
Thu Dec 18 10:58:34 EST 2014
Continuous assessment — rather than 6-yearly assessment on 4 outputs — yes.
And, yes, the REF panels did a hard job, admirably.
But you want them to be doing it on all outputs, all the time?
No. The potential of multiple, weighted Open Access metrics is precisely to allow continuous assessment without the need for continuous peer assessment.
But for this to work, the metrics and their weights have to be validated and initialized in the first place: and this is what can be done against the REF2014 rankings.
That done, the initial weights can be updated across the years as needed, based on the growing evidence base and perhaps on evolving needs and criteria.
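A minimal sketch of that initialization step, assuming one has a table of candidate metrics per submitting unit together with its REF2014 panel ranking (the column names, and pandas/scikit-learn as tools, are my assumptions rather than part of the proposal):

    # Sketch: initialize metric weights against REF2014, discipline by discipline.
    # Assumes a hypothetical DataFrame with a "discipline" column, one column per
    # candidate metric, and "ref2014_rank" (the panel ranking to be predicted).
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def initialize_weights(df, metric_cols, target_col="ref2014_rank"):
        weights = {}
        for discipline, group in df.groupby("discipline"):
            model = LinearRegression().fit(group[metric_cols], group[target_col])
            weights[discipline] = dict(zip(metric_cols, model.coef_))
            # R^2 of the joint fit: how closely the weighted metrics track the panel ranking
            weights[discipline]["r_squared"] = model.score(group[metric_cols], group[target_col])
        return weights

Re-running the same fit in later years on the updated evidence base recalibrates the weights rather than freezing them at their REF2014 values.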
Ceterum censeo (adapting Bradley <https://www.goodreads.com/quotes/1369088-the-man-who-is-ready-to-prove-that-metaphysical-knowledge>):
"The man who is ready to prove that metric knowledge is wholly impossible… is a brother metrician with rival metrics…”
> On Dec 18, 2014, at 9:29 AM, Harold Thimbleby <harold at THIMBLEBY.NET> wrote:
>
> It's obviously a big area for discussion/debate/argument, but the REF and research quality assessment are very different beasts.
>
> The window of REF assessment is very brief compared to the period of research waves - I've just read an EPSRC JeS abstract that anticipates impact in 50 years; Treasury goals are very different to the research community's.
>
> Jon is absolutely right: the REF misses a lot of data, and there was a lot of game-playing in it. Anybody with more than 4 publications had to *guess* which would be of most use - now ask yourself why a research evaluation process involves game-playing and guessing. An answer is that it did not set out to measure research excellence. (Although it's more complex: people who were measured in the RAE as doing well then will have got more funding, so they will tend to do even better; so the correlations are [more likely] causally related to resourcing, not quality.)
>
> It is not just outputs that didn't get entered into the REF - there is a lot of good research that "fails" before it is even eligible for REF measurement - rejected EPSRC proposals, for instance: they might represent stunning work but happen to hit referees/panels on a bad day -- their failure is chance rather than a measure of quality. And there is a lot of published CS research that is irreproducible even when it is accepted for publication.
>
> In short, combining REF results, OA and .... to end up with metric predictors might be great fun, but is circular and it misses the point (unless you are the Treasury trying to spend less money).
>
> IMHO the best thing to do would be, now that everybody has computer support for REF assessment, to do it continuously and to stop changing the rules. Unfortunately, the point of changing the rules every assessment cycle is to keep us, the researchers, playing victims in a bigger game we are not able to control, but in which we (or our VCs) have been conditioned to play willingly. To paraphrase Freud: identification with a power figure is one way the ego defends itself - when a victim accepts the aggressor's values, they cease to see them as a threat; and when those values are expressed in cash, the identification is all the easier.
>
> I'd say the REF panels have done a wonderful job, and I am full of admiration for all involved and their commitment, but I have a deep-seated reservation, namely that the REF as a "research excellence framework" is and remains a triumph of misdirection.
>
> Harold
> ---
> Prof. Harold Thimbleby CEng FIET FRCPE FLSW HonFRSA HonFRCP
> See http://harold.thimbleby.net or http://mitpress.com/presson
> On 17 December 2014 at 14:35, Jon Crowcroft <jon.crowcroft at cl.cam.ac.uk> wrote:
> if you wanted to do this properly, you would have to take a lot of outputs that were NOT submitted and run any metric scheme on them as well as on those submitted.
> too late:)
>
> On Wed, Dec 17, 2014 at 2:26 PM, Stevan Harnad <harnad at ecs.soton.ac.uk> wrote:
> Steven Hill of HEFCE has posted “an overview of the work HEFCE are currently commissioning which they are hoping will build a robust evidence base for research assessment” in LSE Impact Blog 12(17) 2014 entitled Time for REFlection: HEFCE look ahead to provide rounded evaluation of the REF <http://blogs.lse.ac.uk/impactofsocialsciences/2014/12/17/time-for-reflection/>
>
> Let me add a suggestion, updated for REF2014, that I have made before (unheeded):
>
> Scientometric predictors of research performance need to be validated by showing that they have a high correlation with the external criterion they are trying to predict. The UK Research Excellence Framework (REF) -- together with the growing movement toward making the full-texts of research articles freely available on the web -- offers a unique opportunity to test and validate a wealth of old and new scientometric predictors, through multiple regression analysis: Publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, downloads/co-downloads and their chronometrics, tweets, tags, etc. can all be tested and validated jointly, discipline by discipline, against their REF panel rankings in REF2014. The weights of each predictor can be calibrated to maximize the joint correlation with the rankings. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.
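A minimal sketch of that joint validation step, with the predictor matrix X and the REF2014 panel scores y as assumed inputs (scikit-learn and SciPy are my tool choices, not part of the proposal); using held-out predictions keeps the reported correlation from being inflated by refitting the very rankings it is meant to predict:

    # Sketch: joint validation of many candidate predictors against REF2014 panel
    # scores for one discipline. X: rows = submitting units, columns = candidate
    # metrics; y: the REF2014 panel score for each unit (both hypothetical inputs).
    from scipy.stats import spearmanr
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_predict

    def joint_validation(X, y):
        model = LinearRegression()
        predicted = cross_val_predict(model, X, y, cv=5)  # out-of-sample predictions
        rho, p_value = spearmanr(predicted, y)            # joint rank correlation with the panel
        model.fit(X, y)                                   # calibrated predictor weights, full sample
        return rho, p_value, model.coef_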
>
> Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise <http://eprints.ecs.soton.ac.uk/17142/>. Scientometrics 79(1). Also in: Torres-Salinas, D. and Moed, H. F. (Eds.) (2007) Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain.
>
> See also:
> The Only Substitute for Metrics is Better Metrics <http://openaccess.eprints.org/index.php?/archives/1136-The-Only-Substitute-for-Metrics-is-Better-Metrics.html> (2014)
> and
> On Metrics and Metaphysics <http://openaccess.eprints.org/index.php?/archives/479-On-Metrics-and-Metaphysics.html> (2008)