Future UK RAEs: Peer review or Metrics-Based

Jonathan Levitt jonathan at LEVITT.NET
Mon Apr 10 14:30:01 EDT 2006


Stevan asked for the rationale for evaluating the referee reports from the journals in which the articles were published. I suggest that it could provide additional guidance on the quality of the paper. This is akin to judging university entry not solely on quantitative data such as grades, but also on qualitative items such as references.

Best regards,
Jonathan.


  ----- Original Message ----- 
  From: Stevan Harnad 
  To: SIGMETRICS at LISTSERV.UTK.EDU 
  Sent: Sunday, April 09, 2006 3:56 PM
  Subject: Re: [SIGMETRICS] Future UK RAEs: Peer review or Metrics-Based




  On Sun, 9 Apr 2006, Jonathan Levitt wrote:


  > In principle, the sequel to the RAE could take into account the
  > actual reviews from the journal that published the article
  > (rather than conduct their own reviews), but in my view there
  > would need to be stringent checks on their authenticity.


  Submit the referee reports from the journals in which the articles were published? No harm in that, I suppose, but what on earth for? The journals did the peer review, and the attestation to that fact is the published article, the journal name, and the journal's established track record for quality. The rest is down to metrics (journal impact factors, author/article citation counts, downloads, co-citation fan-in/out quality, recursively weighted authority CiteRank, latency/longevity, co-text, etc.).

  After 25 years of editing a very high impact peer-reviewed journal, Behavioral and Brain Sciences (BBS) -- http://www.bbsonline.org/ -- I cannot see any added benefit (though no harm) from forwarding the referee reports -- and, presumably, the editorial disposition letter -- to a tenure/promotion committee or RAE panel. Peer review has already performed its function in getting the article suitably revised, accepted, and tagged with the journal's established quality standard. The referee reports are informative to the editor, but not to others.

  On the other hand, the next phase -- which my own journal, BBS, pursued very actively and explicitly, namely open peer commentary -- would be extremely informative for those with the time, patience and expertise to read and weigh it. Otherwise, there too, commentary metrics, including commentary-content metrics (+/-/=), will without the slightest doubt emerge from an Open Access full-text database.
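
  The "recursively weighted authority" idea among those metrics can be illustrated with a small sketch: a PageRank-style power iteration over a citation graph, in which a citation from a highly cited article counts for more than one from an uncited article. The toy graph and function below are hypothetical, not any particular CiteRank implementation; they show only the general recursive-weighting mechanism.

# A minimal, hypothetical sketch of a "recursively weighted authority"
# metric: plain PageRank-style power iteration over a toy citation
# graph. Not any specific CiteRank formula; all data are invented.

def authority_scores(cites, damping=0.85, iterations=50):
    """cites: dict mapping each article to the articles it cites."""
    articles = set(cites) | {a for refs in cites.values() for a in refs}
    n = len(articles)
    score = {a: 1.0 / n for a in articles}
    for _ in range(iterations):
        nxt = {a: (1.0 - damping) / n for a in articles}
        for src in articles:
            refs = cites.get(src, [])
            if refs:
                # an article passes its authority to the works it cites
                share = damping * score[src] / len(refs)
                for dst in refs:
                    nxt[dst] += share
            else:
                # articles that cite nothing spread their weight uniformly
                for a in articles:
                    nxt[a] += damping * score[src] / n
        score = nxt
    return score

# Toy graph: A cites B and C, etc. C is cited by everyone, including
# the well-cited B, so it ends up with the highest authority score.
toy = {"A": ["B", "C"], "B": ["C"], "C": [], "D": ["B", "C"]}
for art, s in sorted(authority_scores(toy).items(), key=lambda kv: -kv[1]):
    print(f"{art}: {s:.3f}")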


  http://www.ecs.soton.ac.uk/%7Eharnad/Temp/Kata/bbs.editorial.html
  http://www.ecs.soton.ac.uk/%7Eharnad/Temp/bbs.valedict.html


  Stevan Harnad




  On 9-Apr-06, at 10:02 AM, Jonathan Levitt wrote:


    Stevan wrote: "1) peer review has already been done for published articles,
    so the issue is not (i) peer review vs. metrics but (ii) peer review plus
    metrics vs. peer review plus metrics plus 'peer re-review' (by the RAE
    panels)." To me the issue is not so much peer review vs. metrics, but
    finding a combination of peer review and citation/usage metrics that seems
    particularly likely to be effective at measuring research quality.


    In principle, the sequel to the RAE could take into account the actual
    reviews from the journal that published the article (rather than conduct
    their own reviews), but in my view there would need to be stringent checks
    on their authenticity.


    Best regards,
    Jonathan.




    ----- Original Message -----
    From: "Stevan Harnad" <harnad at ECS.SOTON.AC.UK>
    To: <SIGMETRICS at LISTSERV.UTK.EDU>
    Sent: Thursday, April 06, 2006 10:56 PM
    Subject: Re: [SIGMETRICS] Future UK RAEs to be Metrics-Based






      The following article is excellent and accurate overall.


     Cliffe, Rebecca (2006) Research Assessment Exercise: Bowing
         out in Favour of Metrics. EPS Insights: 3 April 2006
         http://technology.guardian.co.uk/weekly/story/0,,1747334,00.html


      One can hardly quarrel with the following face-valid summary from
      this article:


          "The move to a new metrics based system [for RAE] will no doubt please
          those who see a role for institutional repositories in monitoring
          research quality. The online environment has thrown up new metrics,
          which could be used alongside traditional measures such as citations.
          Usage can be measured at the point of consumption -- the number
          of "hits" on a particular article can indicate the uptake of the
          research. Web usage would be expected to be an early indicator of
          how often the article is later cited. Some believe that institutional
          repositories should be used as the basis for ongoing assessment of all
          UK peer-reviewed research output by mandating that researchers should
          place material in repositories. They argue that this would allow
          usage to be measured earlier, through downloads of both pre-prints
          and post-prints. Of course, this course of action would also advance
          the cause of open access by making this research available free."


      But there are a few points of detail on which this otherwise accurate
      report could be made even more useful:


         (1) peer review has already been done for published articles, so
         the issue is not (i) peer review vs. metrics but (ii)  peer review
         plus metrics vs. peer review plus metrics plus "peer re-review"
         (by the RAE panels). It is the re-review of already peer-reviewed
         publications that is the wasteful practice that needs to be scrapped,
         given that peer review has already been done, and that metrics are
         already highly correlated with the RAE ranking outcome anyway.


         (2) For the fields in which the current RAE outcome is not already
         highly correlated with metrics, further work is needed; obviously
         works other than peer-reviewed articles or books (e.g., artwork,
         multimedia) will have to be evaluated in other ways, but for
         science, engineering, biomedicine, social science, and most fields
         of the humanities, books and articles are the form that research
         output takes, and they will be amenable to the increasingly powerful
         and diverse forms of metrics that are being devised and tested.
         (Many will be tested in parallel with the 2008 RAE, which will still
         be conducted the old, wasteful way; some of the metrics may also be
         testable retrospectively against prior RAE outcomes.)


          "Proponents of a metrics based system point to studies that show
          how average citation frequencies of articles can closely predict
          the scores given by the RAE for departmental quality, even though
          the RAE does not currently count these."


      True, but the highest metric correlate of the present RAE outcome
      is reportedly prior research funding (0.98). Yet it would be a big
      mistake to scrap all other metrics and base the RAE rank on just
      prior funding. That would just generate a massive Matthew Effect and
      essentially make top-sliced RAE funding redundant with direct competitive
      research project funding (thereby essentially "bowing out" of the dual
      RAE/RCUK funding system altogether, reducing it to just research project
      funding). What is remarkable about the high correlation between citation
      counts and RAE ranks (0.7 - 0.9), even though the correlation is not quite
      as high as with prior funding (0.98), is that citations are not presently
      counted in the RAE (whereas prior funding is)! Not only are citations a
      more independent metric of research performance than prior funding, but
      counting them directly -- along with the many other candidate metrics --
      can enrich and diversify the RAE evaluation, rather than just make it
      into a self-fulfilling prophecy.
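
      The sort of validation cited here can be sketched in a few lines:
      compute the rank correlation between a candidate metric and the RAE
      outcome. The figures below are invented solely to show the mechanics;
      the 0.7-0.9 and 0.98 correlations above come from the actual studies.

# Invented-data sketch of validating a candidate metric against RAE
# ranks via Spearman rank correlation (scipy).
from scipy.stats import spearmanr

# hypothetical departments: total citation counts and RAE rank (1 = best)
citations = [1200, 950, 870, 430, 390, 210, 150, 90]
rae_rank = [1, 2, 4, 3, 5, 6, 8, 7]

rho, p = spearmanr(citations, rae_rank)
print(f"Spearman rho = {rho:.2f} (p = {p:.4f})")
# Because rank 1 is best, strong agreement appears as rho near -1;
# its magnitude is the analogue of the reported 0.7-0.9 correlations.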


          "However, metrics tend to work better for the sciences than the
          humanities. Whereas the citation of science research is seen as an
          indicator of the quality and impact of the research, in the humanities
          this is not the case.  Humanities research is based around critical
          discourse and an author may be citing an article simply to disagree
          with its argument."


      I don't think this is quite accurate. It might be true that humanities
      research makes less explicit use of citation counts today than science
      research does. It might even be true that the correlation between
      citation counts and research productivity and importance is lower in
      the humanities than in the sciences (though I am not aware of studies
      to that effect). And it may also be true that citation counts in the
      humanities are less correlated with RAE rankings than they are in the
      sciences. But the familiar canard about articles being cited, not
      because they are valid and important, but in order to disagree with
      them, has too much the flavour of the a priori dismissiveness of
      citation analysis that we hear in *all* disciplines from those who have
      not really investigated it, or the evidence for or against it, but are
      simply expressing their own personal prejudices on the subject.


      Let's see the citation counts for humanities articles and books, and
      their correlation with other performance indicators as well as RAE
      rankings, rather than dismissing them a priori on the basis of
      anecdotes.


          "Also, an analysis of RAE 2001 submissions revealed that while some
          90% of research outputs listed by British researchers in the fields
          of Physics and Chemistry were mapped by ISI data, in Law the figure
          was below 10%, according to Ian Diamond of the Economic and Social
          Research Council (ESRC) (Oxford Workshop on the use of Metrics in
          Research Assessment)."


      I am not sure what "mapped by ISI data" means, but if it means that ISI
      does not cover enough of the pertinent journals in Law, then the
      empirical question is: what are the pertinent journals? Can citation
      counts be derived from their online versions, using the publishers'
      websites and/or subscribing institutions' online versions? How well
      does this augmented citation count correlate with the ISI subsample
      (<10%)? And how well do both correlate with RAE ranking? (Surely ISI
      coverage should not be the determinant of whether or not a metric is
      valid.)
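
      As a sketch of the empirical exercise just proposed (with invented
      data and hypothetical output identifiers): measure ISI's coverage of
      a set of outputs, then check how well an "augmented" count harvested
      from publishers' or institutions' online versions agrees with the
      ISI subsample.

# Hypothetical sketch: coverage of ISI vs. an augmented citation source
# for one department's outputs, and agreement on the overlapping subset.
from scipy.stats import spearmanr

# output id -> citation count; None means the source does not cover it
isi = {"p1": 40, "p2": 12, "p3": None, "p4": None, "p5": 3}
augmented = {"p1": 55, "p2": 15, "p3": 9, "p4": 2, "p5": 4}

covered = [k for k, v in isi.items() if v is not None]
print(f"ISI coverage: {len(covered)}/{len(isi)} outputs")

# agreement between the two sources where both supply counts
both = [(isi[k], augmented[k]) for k in covered]
rho, _ = spearmanr(*zip(*both))
print(f"Rank agreement on ISI-covered subset: rho = {rho:.2f}")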


          "Ultimately, a combination of qualitative and quantitative indicators
          would seem to be the best approach."


      What is a "qualitative indicator"? A peer judgment of quality? But
      that quality judgment has already been made by the peer-reviewers of
      the journal in which the article was published -- and every field has a
      hierarchy of quality among journals that is known (and may even sometimes
      be correlated with the journal's impact factor, if one compares like
      with like in terms of subject matter). What is the point of repeating
      the peer review exercise? And especially if here too it turns out to be
      correlated with metrics? Is it?


          "While metrics are likely to be used to simplify the research
          assessment process, the merits of a qualitative element would be to
          ensure that over-reliance on quantitative factors does not unfairly
          discriminate against research which is of good quality but has not
          been cited as highly as other research due to factors such as its
          local impact."


      Why not ask the panels first to make quality judgments on the journals
      in which the papers were published, and then see whether those rankings
      correlate with the author/article citation metrics? And whether they
      correlate with the RAE rankings based on the present time-consuming
      qualitative re-evaluations? If the correlations prove lower than in the
      other fields (even when augmented by prior funding and other metrics)
      *then* there may be a case for special treatment of the humanities.
      Otherwise, the special pleading on behalf of uncited research sounds as
      anecdotal, arbitrary and ad hoc as the claim that high citations in
      humanities betoken disagreement rather than usage and importance, as in
      other fields.
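
      The two-step check proposed here is easy to sketch with invented
      numbers: correlate panel journal-quality ranks with citation counts
      field by field, and see whether the humanities correlation really is
      the outlier. All field names and figures below are hypothetical.

# Invented-data sketch of the field-by-field check described above:
# correlate panel journal-quality ranks (1 = best) with citation
# counts, then flag any field whose agreement is far below the rest.
from scipy.stats import spearmanr

fields = {
    "physics":    ([1, 2, 3, 4, 5], [300, 210, 150, 80, 20]),
    "sociology":  ([1, 2, 3, 4, 5], [120, 130, 60, 40, 30]),
    "humanities": ([1, 2, 3, 4, 5], [20, 25, 18, 30, 22]),
}

# strong agreement shows up as rho near -1 (best rank, most citations)
rhos = {f: spearmanr(r, c).correlation for f, (r, c) in fields.items()}
for field, rho in rhos.items():
    print(f"{field:11s} rho = {rho:+.2f}")

baseline = max(abs(r) for f, r in rhos.items() if f != "humanities")
if abs(rhos["humanities"]) < 0.5 * baseline:
    print("humanities correlation much lower: a possible case for special treatment")
else:
    print("no evidence here that the humanities need different metrics")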


          "Unless more appropriate metrics can be developed for the humanities,
          it would seem that an element of expert peer review must remain in
          whatever metrics based system emerges from the ashes of the RAE."


      It has not yet been shown whether the same metrics that correlate highly
      with RAE outcome in other fields (funding, citations) truly fail to
      do so in the humanities. If they do fail to correlate sufficiently,
      there are still many candidate metrics to try out (co-citations,
      downloads, book citations, CiteRank, latency/longevity, exogamy/endogamy,
      hubs/authorities, co-text, etc.) before having to fall back on
      repeating, badly, the peer evaluation that should already have been
      done, properly, by the journals in which the research first appeared.


      Stevan Harnad


        Research Assessment Exercise:  http://www.rae.co.uk
        Economic and Social Research Council:  http://www.eserc.ac.uk


        Open access: practical matters become the key focus, EPS Insights, 10
        March 2005
        http://www.epsltd.com/accessArticles.asp?articleType=1&updateNoteID=1538


        Citation Analysis in the Open Access World, imi, September 2004
        http://www.epsltd.com/accessArticles.asp?articleType=2&articleID=236&imiID=294





