Hefce backs off citations in favour of peer review in REF
Stevan Harnad
harnad at ECS.SOTON.AC.UK
Wed Jun 24 04:26:05 EDT 2009
Hefce backs off citations in favour of peer review in REF
18 June 2009
By Zoë Corbyn
Research assessments in hard sciences will now be 'informed' by
bibliometrics. Zoë Corbyn writes
The use of citations to determine the quality of academic work in the
hard sciences is to be abandoned in favour of peer review in the new
system being designed to replace the research assessment exercise.
However, information about the number of citations a scholar's work
accrues could be provided to assessment panels to help "inform" their
judgments in a range of subjects....
http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=407041&c=1
Richard Hull 20 June, 2009
So finally common sense prevails. But I would now like to know exactly
which stupid, thoughtless person, blinded by the New Labour mantra of
"evidence-based this that and the other", first proposed the hair-
brained idea to use citations?? Time for some journalistic digging, I
think. This person must be exposed, as they have effectively wasted a
huge amount of the time and energy of HEFCE and indeed the academics
who actively opposed the idea.
Stevan Harnad 22 June, 2009
It's probably alright that instead of scrapping panel rankings
altogether and hard-wiring the outcome to metrics, the new REF will
continue doing rankings and metrics in parallel, using the metrics
as advisory rather than binding.
That's fine; it will give the metrics a better chance to be cross-
validated against peer judgment (though the hybrid metric-influenced
rankings of the new REF will not be as independent a criterion against
which to validate metrics as the RAE rankings were, when they were not
influenced by metrics).
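To make "cross-validated against peer judgment" concrete, here is a toy
sketch (department names, panel scores and metric values all invented):
correlate a candidate metric with the peers' independent verdicts on the
same departments and see how closely the two agree.

# Hypothetical illustration of cross-validating a metric against peer
# judgment; every number here is invented.
from scipy.stats import spearmanr

panel_score  = {"DeptA": 3.1, "DeptB": 2.9, "DeptC": 2.8,
                "DeptD": 2.2, "DeptE": 1.9}    # peer verdict (higher = better)
metric_score = {"DeptA": 41.2, "DeptB": 35.7, "DeptC": 38.1,
                "DeptD": 22.4, "DeptE": 18.9}  # e.g. citations per paper

depts = sorted(panel_score)
rho, p = spearmanr([panel_score[d] for d in depts],
                   [metric_score[d] for d in depts])
# A high rho means the metric ranks the departments much as the peers did.
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")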
The important thing is to make the battery of candidate metrics as
broad and rich as possible. It is true that metrics today are still
relatively sparse, but with the growth of open access and a rich
variety of web-based metrics emerging therefrom, the power and scope
of metrics will now grow and grow.
About the possibility of abuse: Yes, one can abuse individual metrics.
Downloads are the easiest to abuse. But genuine downloads generate
genuine citations, and the correlation is there and can be measured.
There are other intercorrelations in multiple metric profiles too.
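By way of illustration only (invented per-paper counts, using numpy):
fit downloads against citations, flag the paper whose downloads most
exceed what its citations would predict, and note that the remaining
papers carry the expected correlation.

import numpy as np

# Invented per-paper counts, purely for illustration.
downloads = np.array([120, 340,  95, 4100, 210, 560], dtype=float)
citations = np.array([  4,  11,   3,    2,   7,  15], dtype=float)

# Work on log counts so one huge value does not dominate the fit.
ld, lc = np.log1p(downloads), np.log1p(citations)

# Flag the paper whose downloads most exceed what its citations predict.
slope, intercept = np.polyfit(lc, ld, 1)
residuals = ld - (slope * lc + intercept)
suspect = int(np.argmax(residuals))
print(f"most download-heavy paper relative to its citations: #{suspect}")

# The genuine papers show the expected download-citation correlation;
# the flagged paper is what breaks it.
keep = np.arange(len(ld)) != suspect
print(f"r (all papers)      = {np.corrcoef(lc, ld)[0, 1]:+.2f}")
print(f"r (without suspect) = {np.corrcoef(lc[keep], ld[keep])[0, 1]:+.2f}")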
There are endogamy/exogamy metrics: Self-citations, co-author
citations, author-circle citations, same-institution citations, same-
journal citations. With these, anomalies and abuses can be detected,
named and shamed.
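A toy sketch of one such endogamy measure (authors, affiliations and
citations all invented): the fraction of a paper's incoming citations
that are self-, co-author or same-institution citations.

# Hypothetical endogamy check; names and affiliations are invented.
paper_authors = {"smith", "jones"}
paper_institution = "Univ X"

incoming_citations = [                    # (citing authors, citing institution)
    ({"smith", "lee"}, "Univ Y"),         # self-citation (smith)
    ({"patel"}, "Univ X"),                # same-institution citation
    ({"garcia"}, "Univ Z"),               # fully external
    ({"jones", "wu"}, "Univ W"),          # self-citation (jones)
]

def is_endogamous(citing_authors, citing_institution):
    return bool(citing_authors & paper_authors) \
        or citing_institution == paper_institution

endo = sum(is_endogamous(a, i) for a, i in incoming_citations)
print(f"endogamy rate: {endo}/{len(incoming_citations)} "
      f"= {endo / len(incoming_citations):.0%}")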
Multiple metrics create a pattern, a profile. If you artificially
manipulate one of them (say, downloads, or citing others in your
institution), it will be detectable as a deviation from the normal
profile. Once a few of these abuses are prominently exposed and
shamed, that will create a strong deterrent against trying such
tricks, since the objective is the exact opposite: to increase one's
prestige, not to tarnish it.
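A rough sketch of that kind of profile check (metrics, values and
threshold all invented): standardise each metric across authors and
flag anyone whose downloads sit far above the rest of their own
profile.

import numpy as np

# Rows: authors; columns: downloads, citations, co-author-circle
# citations. All numbers are invented for illustration.
metrics = np.array([
    [ 800, 42, 12],
    [ 650, 36, 10],
    [ 900, 47, 13],
    [ 700, 38, 11],
    [ 850, 45, 12],
    [9500, 40, 11],   # downloads inflated; the rest of the profile ordinary
], dtype=float)

# z-score each metric across authors.
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)

# Downloads far above what the same author's other metrics would suggest
# show up as a large gap between the download z-score and the rest.
gap = z[:, 0] - z[:, 1:].mean(axis=1)
for i, g in enumerate(gap):
    flag = "  <-- downloads out of line with the rest" if g > 2.0 else ""
    print(f"author {i}: download-vs-profile gap = {g:+.2f}{flag}")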
And unlike (some) individual metrics, multiple metric profiles are
almost impossible to manipulate jointly: Try writing software to
generate bogus downloads of your work looking as if they all come from
different IPs the world over, and then try to generate the non-
institutional citations that would normally be the correlate of such
high downloads. Even that 2-metric trick is not easy to accomplish!
Stevan Harnad, University of Southampton
REPLY TO RICHARD HULL: ON EXPOSING THE CULPRIT -- Harnad, S. (2001)
Research access, impact and assessment. Times Higher Education
Supplement 1487: p. 16. http://cogprints.org/1683/