UK Research Assessment Exercise (RAE) review
Stevan Harnad
harnad at ECS.SOTON.AC.UK
Mon Nov 25 07:49:34 EST 2002
On Mon, 25 Nov 2002, Jan Velterop wrote:
> If one assesses an institute's productivity by the papers from its
> researchers, and one rates those papers with the help of journal
> impact factors, is it not the case that one should expect the results
> to be in line with the citation counts for those papers? Is it me
> or is there a circular argument here?
You're quite right, Jan, and that was precisely the point of my
recommendation that the RAE should be transformed into continuous
online submission and assessment of online CVs linked to the
online full-texts of each researcher's peer-reviewed research
articles, self-archived in their university's Eprint Archive:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2373.html
Because most of the variance in the RAE rankings is determined by citation
impact already anyway!
Hence this simple, simplifying transformation would make the RAE cheaper,
faster, easier, far less time-wasting for both researchers and assessors,
and more accurate (by adding richer online measures of impact, e.g.,
direct paper/author impact instead of indirect journal impact, plus
many other new online scientometric measures such as online usage
["hits"], time-series analyses, co-citation analyses and full-text-based
semantic co-analyses, all placed in a weighted multiple regression
equation instead of just a univariate correlation).
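To make the multivariate point concrete, here is a minimal sketch in
Python/NumPy (all numbers invented for illustration; none of this is
actual RAE data) contrasting a univariate correlation on journal impact
factor alone with an ordinary-least-squares fit over a fuller battery
of online measures:

    # Illustrative sketch only: predicting a department-level rating
    # from several online impact measures at once, rather than from
    # journal impact factor alone.  All figures are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 40  # hypothetical departments

    # Hypothetical per-department scientometric measures:
    journal_if = rng.gamma(2.0, 1.5, n)    # mean journal impact factor
    paper_cites = rng.gamma(2.0, 10.0, n)  # direct citations per paper
    downloads = rng.gamma(2.0, 100.0, n)   # online usage ("hits")

    # Invented "true" rating depending on all three measures plus noise:
    rating = (0.5 * journal_if + 0.04 * paper_cites
              + 0.004 * downloads + rng.normal(0, 0.5, n))

    # (a) Univariate: rating vs. journal impact factor alone.
    r_uni = np.corrcoef(journal_if, rating)[0, 1]

    # (b) Multivariate: ordinary least squares over the full battery.
    X = np.column_stack([np.ones(n), journal_if, paper_cites, downloads])
    beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
    r_multi = np.corrcoef(X @ beta, rating)[0, 1]

    print(f"univariate r (journal IF only):  {r_uni:.2f}")
    print(f"multiple-regression r (battery): {r_multi:.2f}")

(In-sample, the multivariate fit can only match or exceed the
univariate one, since journal impact factor is among its regressors;
the substantive question is how much the extra measures add.)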
Plus, as a bonus, over and above making the RAE cheaper, faster,
easier, less time-wasting, and more accurate, this change would also
help hasten open access -- in the UK as well as world-wide.
The sequence was:
(i) I conjectured that the RAE might as well go ahead and downsize and
streamline itself in this way, dropping all the needless extra baggage
of the on-paper returns, because the outcome is already determined mostly
by impact ranking anyway:
"(5) If someone did a statistical correlation on the numerical outcome
of the RAE, using the weighted impact factors of the publications of
each department and institution, they would be able to predict the
outcome ratings quite closely. (No one has done this exact statistic,
because the data are implicit rather than explicit in the returns,
but it could be done, and it would be a good idea to do it, just
to get a clear indication of where the RAE stands right now, before
the simple reforms I am recommending.)"
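(A sketch of how such a statistic could be computed appears after
point (viii) below.)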
(ii) Then commentators began to respond, among them Charles Oppenheim,
gently pointing out that I was under-informed and had no need to
speculate: the post-hoc analyses HAVE been done, and there is indeed
a strong positive correlation between citation impact and RAE outcome!
(iii) Peter Suber (and others) cited further confirmatory studies.
(iv) So there is nothing circular here. The point was not to
RECOMMEND using citation impact by circularly demonstrating that
citation impact was being used already.
(v) The point was to downsize, streamline, and at the same time
strengthen the RAE by making its (existing) dependence on impact
ranking more direct, explicit, and efficient,
(vi) while enriching its battery of potential impact measures
scientometrically, thereby increasing its predictive power,
(vii) saving time and money along the way,
(viii) and leading the planet toward the long-overdue objective
of open access to all of its peer-reviewed research output.
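The statistic conjectured in (i) is straightforward to compute once the
impact data are extracted from the returns. Here is a minimal sketch in
Python/NumPy of the rank correlation involved, again with invented
placeholder figures rather than real RAE grades:

    # Sketch of the conjectured statistic: Spearman rank correlation
    # between departments' weighted journal impact factors and their
    # RAE grades.  All figures below are invented placeholders.
    import numpy as np

    def spearman(x, y):
        """Spearman rho: Pearson correlation of the ranks.
        (Ties are broken arbitrarily here, for simplicity.)"""
        rx = np.argsort(np.argsort(x))
        ry = np.argsort(np.argsort(y))
        return np.corrcoef(rx, ry)[0, 1]

    # Hypothetical departments: mean impact factor of submitted
    # publications, and the RAE grade awarded (the 1, 2, 3b, 3a,
    # 4, 5, 5* scale coded here as 1..7).
    weighted_if = np.array([1.2, 3.4, 0.8, 2.9, 4.1, 1.7, 3.8, 2.2])
    rae_grade   = np.array([2,   5,   1,   4,   7,   3,   6,   4  ])

    print(f"Spearman rho = {spearman(weighted_if, rae_grade):.2f}")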
(The only recompense I ask for all this ritual repetition, recasting,
and clarification that I have to keep doing at every juncture is that
the day should come, and soon!)
[I am braced for the predictable next round of attacks on scientometric
impact analysis: "Citation impact is crude, misleading, circular,
biased: we must assess research in a better way!" And I am ready to
encourage these critics (as I do the would-be reformers of peer review)
to go ahead and do research on alternative, nonscientometric ways of
assessing and ranking large bodies of research output, and to let us
all know what they are -- once they have found them, tested them, and
shown them to predict at least as well as scientometric impact
analysis. But in the meanwhile, I will invite these critics (as I do
the would-be reformers of peer review) to allow these substantial
optimizations of the existing system to proceed apace, rather than
holding them back in favor of better (but untested, indeed unknown)
alternatives. For in arguing against these optimizations of the
existing system, they are not supporting a better way: they are merely
arguing for doing what we are doing already, in a much more wasteful
way.]
Amen,
Stevan Harnad
NOTE: A complete archive of the ongoing discussion of providing open
access to the peer-reviewed research literature online is available at
the American Scientist September Forum (98 & 99 & 00 & 01 & 02):
http://amsci-forum.amsci.org/archives/september98-forum.html
or
http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Amsci/index.html
Discussion can be posted to: september98-forum at amsci-forum.amsci.org
See also the Budapest Open Access Initiative:
http://www.soros.org/openaccess
the Free Online Scholarship Movement:
http://www.earlham.edu/~peters/fos/timeline.htm
the SPARC position paper on institutional repositories:
http://www.unites.uqam.ca/src/sante.htm
the OAI site:
http://www.openarchives.org
and the free OAI institutional archiving software site:
http://www.eprints.org/