Open Access Metrics: Use REF2014 to Validate Metrics for REF2020

Stevan Harnad harnad at ECS.SOTON.AC.UK
Wed Dec 17 10:38:37 EST 2014



> On Dec 17, 2014, at 9:54 AM, Alan Burns <alan.burns at YORK.AC.UK> wrote:
> 
> Those that advocate metrics have never, to at least my satisfaction, answered the
> argument that accuracy in the past does not mean effectiveness in the future,
> once the game has changed.

I recommend Bradley on metaphysics and Hume on induction <http://plato.stanford.edu/entries/induction-problem/>:

"The man who is ready to prove that metaphysical knowledge is wholly impossible… is a brother metaphysician with a rival theory <https://www.goodreads.com/quotes/1369088-the-man-who-is-ready-to-prove-that-metaphysical-knowledge>” Bradley, F. H. (1893) Appearance and Reality

One could have asked the same question about apples continuing to fall down in future, rather than up.

Yes, single metrics can be abused, but not only can abuses be named and shamed when detected; it also becomes harder to abuse metrics when they are part of a multiple, inter-correlated vector, with disciplinary profiles of their normal interactions: someone dispatching a robot to download his papers would quickly be caught out when the usual correlation between downloads and later citations fails to appear. Add more variables and it gets even harder.
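To make that concrete, here is a minimal sketch (in Python, with purely hypothetical download and citation counts standing in for real data) of how a discipline's typical downloads-to-citations relation could be fitted, and a paper whose later citations fall far short of what its downloads would predict could be flagged:

    # Minimal sketch (hypothetical data): fit a discipline's usual
    # downloads -> citations relation and flag papers whose citations
    # fall far below what their download counts would predict.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-paper metrics: downloads and later citations.
    downloads = rng.poisson(200, size=500).astype(float)
    citations = 0.05 * downloads + rng.normal(0, 2, size=500)

    # One paper's downloads are inflated by a robot; citations do not follow.
    downloads[0] = 5000.0

    # Fit the discipline's typical relation (here estimated on the other
    # papers; in practice a robust fit over the whole discipline would do).
    slope, intercept = np.polyfit(downloads[1:], citations[1:], 1)
    residuals = citations - (slope * downloads + intercept)

    # Large negative residuals: downloads unmatched by later citations.
    threshold = residuals.mean() - 3 * residuals.std()
    print("suspect papers:", np.where(residuals < threshold)[0])

Any such flag would of course only be a prompt for closer scrutiny, not proof of abuse.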

> Even if one were able to define a set of metrics that perfectly matched REF2014,
> the announcement that these metrics would be used in REF2020 would
> immediately invalidate their use.

In a weighted vector of multiple metrics like the sample I had listed, it is of no use to a researcher to be told that for REF2020 the metric equation will be the following, with the following weights for their particular discipline:

w1(pubcount) + w2(JIF) + w3(cites) + w4(art-age) + w5(art-growth) + w6(hits) + w7(cite-peak-latency) + w8(hit-peak-latency) + w9(cite-decay) + w10(hit-decay) + w11(hub-score) + w12(authority-score) + w13(h-index) + w14(prior-funding) + w15(book-cites) + w16(student-counts) + w17(co-cites) + w18(co-hits) + w19(co-authors) + w20(endogamy) + w21(exogamy) + w22(co-text) + w23(tweets) + w24(tags) + w25(comments) + w26(acad-likes) etc. etc.

The potential list could be much longer; the weights can be positive or negative, and can vary by discipline.
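For concreteness, here is a minimal sketch of how such weights might be calibrated, discipline by discipline, by regressing a metric vector on the REF2014 panel rankings (Python; the data are hypothetical placeholders, and only a truncated subset of the metric names above is used):

    # Minimal sketch (hypothetical data): calibrate per-discipline weights
    # for a vector of metrics against REF2014 panel rankings by multiple
    # regression, then report the joint correlation achieved.
    import numpy as np

    rng = np.random.default_rng(1)
    metric_names = ["pubcount", "JIF", "cites", "hits", "h-index",
                    "prior-funding", "tweets", "tags"]  # truncated subset

    n_units, n_metrics = 150, len(metric_names)
    X = rng.normal(size=(n_units, n_metrics))   # standardised metric scores
    ref_rank = X @ rng.normal(size=n_metrics) + rng.normal(0, 0.5, n_units)

    # Least-squares estimates of the weights w1..wn for this discipline.
    w, *_ = np.linalg.lstsq(X, ref_rank, rcond=None)

    # Joint correlation of the weighted metric vector with the panel ranking.
    r = np.corrcoef(X @ w, ref_rank)[0, 1]
    for name, weight in zip(metric_names, w):
        print(f"{name:>13}: {weight:+.2f}")
    print(f"joint correlation with REF2014 ranking: {r:.2f}")

The fitted weights, and the joint correlation of the weighted vector with the panel rankings, would of course differ from discipline to discipline.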

"The man who is ready to prove that metric knowledge is wholly impossible… is a brother metrician with rival m <https://www.goodreads.com/quotes/1369088-the-man-who-is-ready-to-prove-that-metaphysical-knowledge>etrics…”


> On 17 Dec 2014, at 14:35, Jon Crowcroft <jon.crowcroft at CL.CAM.AC.UK <mailto:jon.crowcroft at CL.CAM.AC.UK>> wrote:
> 
>> if you wanted to do this properly, you should have to take a lot of outputs that were NOT submitted and run any metric scheme on them as well as those submitted. 
>> too late:)
>> 
>> On Wed, Dec 17, 2014 at 2:26 PM, Stevan Harnad <harnad at ecs.soton.ac.uk <mailto:harnad at ecs.soton.ac.uk>> wrote:
>> Steven Hill of HEFCE has posted “an overview of the work HEFCE are currently commissioning which they are hoping will build a robust evidence base for research assessment” in LSE Impact Blog 12(17) 2014 entitled Time for REFlection: HEFCE look ahead to provide rounded evaluation of the REF <http://blogs.lse.ac.uk/impactofsocialsciences/2014/12/17/time-for-reflection/>
>> 
>> Let me add a suggestion, updated for REF2014, that I have made before (unheeded):
>> 
>> Scientometric predictors of research performance need to be validated by showing that they have a high correlation with the external criterion they are trying to predict. The UK Research Excellence Framework (REF) -- together with the growing movement toward making the full-texts of research articles freely available on the web -- offers a unique opportunity to test and validate a wealth of old and new scientometric predictors, through multiple regression analysis: Publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, download/co-downloads and their chronometrics, tweets, tags, etc. can all be tested and validated jointly, discipline by discipline, against their REF panel rankings in REF2014. The weights of each predictor can be calibrated to maximize the joint correlation with the rankings. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.
>> 
>> Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise <http://eprints.ecs.soton.ac.uk/17142/>. Scientometrics 79 (1) Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.  (2007) 
>> 
>> See also:
>> The Only Substitute for Metrics is Better Metrics <http://openaccess.eprints.org/index.php?/archives/1136-The-Only-Substitute-for-Metrics-is-Better-Metrics.html> (2014)
>> and
>> On Metrics and Metaphysics <http://openaccess.eprints.org/index.php?/archives/479-On-Metrics-and-Metaphysics.html> (2008)
> 
