Australia's RQF

Stevan Harnad harnad at ECS.SOTON.AC.UK
Fri Nov 17 20:32:17 EST 2006


Linda Butler's follow-up posting is very sensible: metrics, but under
the scrutiny of panels for the time being. Australia is indeed being
both independent and innovative here. (And of course everything Linda
describes is perfectly congruent with what Arthur Sale described. The
niggles were about the weasel-words "quality" and "impact" -- both
vague and slippery, with nothing much hanging on the distinction
and examples, as formulated!)

Stevan Harnad

On Sat, 18 Nov 2006, Linda Butler wrote:

> I think it is important to take account of which "phase", to use
> Stevan's term, each country is in.
> 
> The UK is coming off several cycles of a traditional peer-review RAE,  
> and is rightly questioning the cost of continuing with such an  
> intensive process.  It is hard to see the justification for  
> continuing with the same system.
> 
> HOWEVER, the UK does have the experience of several iterations of the  
> RAE, and if the move to metrics throws up any anomalies, they will be  
> relatively easy to detect.
> 
> In contrast, Australia has had around 12 years of funding on the  
> basis of a blunt formula that has little (I'm being generous here) to  
> do with the quality of research.  We don't have the UK's extensive  
> knowledge set on the relative strengths and weaknesses of our  
> departments.
> 
> Some metrics may correlate well in some disciplines (maybe even  
> many), but the correlation is rarely perfect.  'Quality' is a complex  
> notion, and I don't share others' confidence that we can do away with  
> the peer review element entirely.  However, that does not mean the  
> time-consuming assessment of a huge bundle of research outputs.  What  
> it does mean is that you still need a panel of experts to examine and  
> interpret the metrics.
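> 
> To make that concrete, here is a minimal sketch (in Python, with
> invented grades and citation counts) of the kind of check a panel
> could run: overall rank agreement between a metric and panel grades
> can look respectable while individual departments still sit far from
> their metric rank - and those outliers are exactly where expert
> interpretation is needed.
> 
>     # Spearman rank correlation between hypothetical panel grades and
>     # a citation metric, then flag the departments where the two
>     # rankings diverge most. All data are invented for illustration.
>     panel  = {"A": 5.0, "B": 4.5, "C": 4.0, "D": 3.0, "E": 2.0}
>     metric = {"A": 310, "B": 150, "C": 90, "D": 250, "E": 30}
> 
>     def ranks(scores):
>         order = sorted(scores, key=scores.get, reverse=True)
>         return {dept: r for r, dept in enumerate(order, 1)}
> 
>     pr, mr = ranks(panel), ranks(metric)
>     n = len(panel)
>     d2 = sum((pr[d] - mr[d]) ** 2 for d in panel)
>     rho = 1 - 6 * d2 / (n * (n ** 2 - 1))  # Spearman's rho, no ties
>     print("rho = %.2f" % rho)              # 0.70 here
> 
>     for d in panel:
>         if abs(pr[d] - mr[d]) > 1:          # anomalies for the panel
>             print(d, "panel rank", pr[d], "vs metric rank", mr[d])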
> 
> For some panels, it might only take 5 minutes - for example, it's  
> hard to see a bibliometric analysis of astronomy throwing up any  
> surprises.
> 
> But for others, considerably more effort may be needed.  I'm
> currently working on an analysis of the 2002 RAE results in Political  
> Science.  If anyone can come up with any metric that would give War
> Studies at King's College London a 5* rating, I would be genuinely
> interested to hear it.
> 
> Australia is not moving "holus-bolus" to a UK RAE-clone.  We can't  
> afford it.  We need metrics to help lighten the load on panels.  But,  
> at least for the first exercise, we need more than that.
> 
> Add to that the fact that Australia is the first to systematically  
> attempt to assess the "impact" (i.e., impact outside academia) of its
> research, and what is being proposed is hugely innovative.  Lots of
> research councils, funding bodies, and others around the world are
> keen to see how this pans out, since it is something they are
> increasingly seeking to do themselves.  And in Australia it is
> essential.  There is a strong
> belief that merely demonstrating the quality of our research will not  
> bring extra funds from government, as it has done in the UK.  Rather  
> what our government is looking for is a demonstration of the impact  
> of that research in the wider community.  It happens in spades - it's  
> just that researchers/universities/etc have not been particularly  
> successful at demonstrating the extent of it.  Hopefully the RQF will
> go some way toward addressing that, and we may eventually get more
> research funding - we desperately need it.
> 
> In the meantime, much work is being done in Australia to develop new  
> metrics in those disciplines where the standard ones are not  
> appropriate, and we have the strong support of researchers in these  
> disciplines.  So watch this space - Australia is not behind the game
> - it is ahead of it.  It does no harm to be a bit provocative!
> 
> Linda Butler
> Research Evaluation and Policy Project
> ANU
> 
> 
> On 17/11/2006, at 11:34 PM, Stevan Harnad wrote:
> 
> > Administrative info for SIGMETRICS (for example unsubscribe):
> > http://web.utk.edu/~gwhitney/sigmetrics.html
> >
> > On Fri, 17 Nov 2006, C.Oppenheim wrote:
> >
> >> Whilst it is true that the UK is dropping the peer-assessed RAE
> >> in favour of metrics, I doubt the reasoning was that it was
> >> convinced by the correlation between RAE scores and metrics.  I
> >> think the reason was to reduce the costs and burden of the
> >> exercise.
> >
> > I am certain Charles is right. The panel re-reviewing was costly and
> > burdensome, and it was not scientometric sophistication and prescience
> > that drove the very sensible decision to scrap the panels for metrics,
> > but economics and ergonomics. Evidence that it was not scientometric
> > wisdom is that the panel-scrappers were ready to jump headlong into
> > the use of prior-funding metrics alone (which in some fields correlate
> > almost 100% with the panel rankings).
> >
> > That would have been foolish in the extreme, generating a whopping
> > Matthew Effect (prior funding can be and is explicitly counted by
> > the panels, whereas citation-counting has been forbidden!), and
> > reducing the UK Dual Funding System -- (1) RCUK-based competitive
> > proposals plus (2) RAE-based top-sliced performance-based funding --
> > to just the one form of funding (1). And it certainly would not have
> > even been possible in all disciplines.
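> >
> > To see why funding on prior funding alone would be self-amplifying,
> > here is a toy simulation (a minimal sketch; the groups, amounts and
> > the quadratic weighting are invented, not the actual RAE formula):
> > once the allocation formula is at all convex in the metric, the
> > best-funded group's share of the pool grows every round.
> >
> >     # Toy Matthew Effect: prior funding is the only metric, and the
> >     # allocation is convex (quadratic) in it. All numbers invented.
> >     funding = [100.0, 90.0, 80.0]        # three hypothetical groups
> >
> >     for round_no in range(1, 6):         # five assessment rounds
> >         weights = [f ** 2 for f in funding]
> >         pot = 100.0                      # new money per round
> >         funding = [f + pot * w / sum(weights)
> >                    for f, w in zip(funding, weights)]
> >         shares = [f / sum(funding) for f in funding]
> >         print(round_no, ["%.3f" % s for s in shares])
> >     # The top group's share creeps upward every round, purely
> >     # because yesterday's funding is today's metric.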
> >
> > Fortunately, UUK (and others) objected, and it will not be uni-metric
> > uni-funding: Open Access will allow 1000 metric flowers to bloom,
> > and rich discipline-specific bouquets will be picked through objective
> > testing and validation.
> >
> >     "Metrics" are Plural, Not Singular: Valid Objections
> >     From UUK About RAE
> >     http://openaccess.eprints.org/index.php?/archives/137-guid.html
> >
> > I have no doubt that (with the help of quick-thinkers like Arthur
> > Sale), Australia too will get into phase with these present and future
> > developments.
> >
> > Stevan Harnad
> >
> >> Charles
> >>
> >> Professor Charles Oppenheim
> >> Head
> >> Department of Information Science
> >> Loughborough University
> >> Loughborough
> >> Leics LE11 3TU
> >>
> >> Tel 01509-223065
> >> Fax 01509-223053
> >> e mail C.Oppenheim at lboro.ac.uk
> >> ----- Original Message -----
> >> From: "Stevan Harnad" <harnad at ecs.soton.ac.uk>
> >> To: "ASIS&T Special Interest Group on Metrics"  
> >> <SIGMETRICS at listserv.utk.edu>
> >> Cc: "AmSci Forum" <american-scientist-open-access-forum at amsci.org>
> >> Sent: Friday, November 17, 2006 11:45 AM
> >> Subject: Re: Australia's RQF
> >>
> >>
> >> The UK RAE is planning to scrap the time-consuming and costly
> >> panel re-review of already-peer-reviewed articles in favour of
> >> metrics because metrics have been shown to correlate highly with
> >> the RAE panel rankings anyway (although it is not yet decided what
> >> combination of metrics will be appropriate to each discipline).
> >>
> >>     Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated
> >>     online RAE CVs Linked to University Eprint Archives: Improving
> >>     the UK Research Assessment Exercise whilst making it cheaper and
> >>     easier. Ariadne 35 (April 2003).
> >>     http://www.ariadne.ac.uk/issue35/harnad/
> >>
> >>     Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
> >>     Research Web: A Preview of the Optimal and the Inevitable. In:
> >>     Jacobs, N. (Ed.) Open Access: Key Strategic, Technical and
> >>     Economic Aspects, chapter 20. Chandos.
> >>     http://eprints.ecs.soton.ac.uk/12453/
> >>
> >> I trust that Australia's RQF is not going to mechanically
> >> recapitulate the many years of researchers' time that the RAE
> >> wasted on submitting for and performing panel re-review. RQF plans
> >> are probably just a bit out of phase right now, and Australia will
> >> catch up in time for its first RQF exercise or soon thereafter. By
> >> then the arbitrary constraint of submitting only 4 papers will also
> >> be mooted by Open Access submission of all research output
> >> self-archived in each institution's Institutional Repository.
> >>
> >> See remarks below.
> >>
> >> On Fri, 17 Nov 2006, Linda Butler wrote:
> >>
> >>> Many of Arthur Sale's points about the Australian RQF,
> >>> particularly in relation to IRs and the way in which panels will
> >>> access submitted publications, are accurate. However, his
> >>> "definition" of quality and impact in the RQF context is seriously
> >>> misleading. Yes, the terms are used in an unusual way, but his
> >>> attempt to paraphrase the meaning is way off. The definitions
> >>> contained in the official document are:
> >>>
> >>> the quality of original research including its intrinsic merit
> >>> and academic impact. Academic impact relates to the recognition
> >>> of the originality of research by peers and its impact on the
> >>> development of the same or related discipline areas within the
> >>> community of peers;
> >>
> >> That, presumably, is what journal peer review has already done
> >> for a researcher's published papers. Journals differ in their
> >> peer-review quality standards, but that too can be triangulated
> >> via metrics. The best of journals will have refereed their content
> >> by consulting the top experts in each subspecialty, worldwide, not
> >> an assembled panel of national representatives to the discipline
> >> from the UK or Australia, re-reviewing all content in their
> >> discipline.
> >>
> >> The fact that the RAE panels (having wasted the researchers' time
> >> and their own in re-reviewing already peer-reviewed publications)
> >> nevertheless come up with rankings that agree substantially with
> >> metrics -- with prior funding counts, regrettably, because those
> >> probably explicitly influenced their rankings, but also with
> >> citation counts, which they are explicitly forbidden to consult,
> >> hence showing that human judgment in skimming and ranking the 4
> >> papers per researcher averages out to the same outcome as the human
> >> judgment involved in deciding what to cite -- is another indication
> >> that the panel review is superfluous in most of the disciplines
> >> tested so far. For some disciplines new combinations of metrics
> >> will no doubt have to be tested and validated, and that is partly
> >> the reason the next RAE will be a parallel panel/metric exercise,
> >> to cross-validate the metric rankings with the panel rankings.
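> >>
> >> For concreteness, a minimal sketch of what such cross-validation
> >> could look like (Python/numpy, with fabricated data; a real
> >> exercise would use actual panel scores and discipline-appropriate
> >> metrics): fit a weighted combination of metrics to panel scores on
> >> half the units, then check how well that combination rank-orders
> >> the held-out half.
> >>
> >>     # Fit metric weights on a training half; validate rank
> >>     # agreement on a held-out half. All data are fabricated.
> >>     import numpy as np
> >>
> >>     rng = np.random.default_rng(0)
> >>     n = 40                               # hypothetical departments
> >>     citations = rng.poisson(200, n).astype(float)
> >>     grants = rng.gamma(2.0, 50.0, n)
> >>     panel = (0.7 * np.log1p(citations) + 0.3 * np.log1p(grants)
> >>              + rng.normal(0, 0.2, n))    # simulated panel scores
> >>
> >>     X = np.column_stack([np.log1p(citations), np.log1p(grants),
> >>                          np.ones(n)])
> >>     train, test = slice(0, 20), slice(20, 40)
> >>     w, *_ = np.linalg.lstsq(X[train], panel[train], rcond=None)
> >>     pred = X[test] @ w
> >>
> >>     def rank(a):                         # 0-based ranks, no ties
> >>         return np.argsort(np.argsort(a))
> >>
> >>     rho = np.corrcoef(rank(pred), rank(panel[test]))[0, 1]
> >>     print("held-out rank correlation: %.2f" % rho)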
> >>
> >>> the impact or use of original research outside the peer
> >>> community that will typically not be reported in traditional peer
> >>> reviewed literature (that is, the extent to which research is
> >>> successfully applied during the assessment period for the RQF).
> >>
> >> Sounds like another candidate metric...
> >>
> >>> Broader impact relates to the recognition by qualified end users
> >>> that methodologically sound and rigorous research has been
> >>> successfully applied to achieve social, economic, environmental
> >>> and/or cultural outcomes.
> >>
> >> Sounds again like peer review, in the case of peer-reviewed
> >> publication. For unpublished research, other metrics (patents,
> >> downloads) are possible. There may be a few specialties in which
> >> human evaluation is the only option, but that tail should certainly
> >> not be allowed to wag the RQF dog: specific exceptions can simply
> >> be made for those specialties, until and unless a valid combination
> >> of metrics is found.
> >>
> >>> Quality is NOT a solely metrics-based exercise. It is the peer
> >>> assessment of 4 outputs per active researcher (as in the RAE),
> >>> informed by quantitative indicators supplied to the panel
> >>> (citations, competitive grants, ranked outputs - details of
> >>> proposed measures are on the DEST website in the background
> >>> papers).
> >>
> >> Quality has already been peer-reviewed for peer-reviewed
> >> publications; hence panel re-review is not recourse to metrics
> >> instead of human judgment, but an exercise in (blunt) redundancy
> >> (in most cases).
> >>
> >>> Impact, the most difficult to assess, is judged from an
> >>> "evidence-based statement of claims". Obviously, there is a lot
> >>> of detail behind that statement - again, background papers are
> >>> available on the DEST website. It will definitely not be judged
> >>> in the way outlined below.
> >>
> >> "Evidence-based statement of claims": Sounds like a tall order for a
> >> small panel of national peers hand-re-reviewing a set of mostly  
> >> already
> >> peer-reviewed papers, and a time-consuming one. Let's hope the RQF  
> >> will
> >> learn from the RAE's long, costly and wasteful history, rather  
> >> than just
> >> repeating it. The growing body of Open Access scientometrics that  
> >> will
> >> become available in coming years will make it possible for  
> >> enterprising
> >> data-miners (possibly PGs of the same researchers that are wasting
> >> their research time submitting to and performing the panel  
> >> reviews) to
> >> demonstrate prominently just how redundant the panel rankings  
> >> really are.
> >>
> >> Pertinent Prior American Scientist Open Access Forum Topic Threads:
> >>
> >> UK "RAE" Evaluations (began Nov 2000)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1018
> >>
> >> Scientometric OAI Search Engines (began Aug 2002)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2238
> >>
> >> Australia stirs on metrics (June 2006)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/5417.html
> >>
> >> Big Brother and Digitometrics (began May 2001)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#1298
> >>
> >> UK Research Assessment Exercise (RAE) review (began Oct 2002)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2326
> >>
> >> Need for systematic scientometric analyses of open-access
> >> data (began Dec 2002)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2522
> >>
> >> Potential Metric Abuses (and their Potential Metric
> >> Antidotes) (began Jan 2003)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#2643
> >>
> >> Future UK RAEs to be Metrics-Based (began Mar 2006)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/subject.html#5251
> >>
> >> Let 1000 RAE Metric Flowers Bloom: Avoid Matthew Effect as
> >> Self-Fulfilling Prophecy (Jun 2006)
> >> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/5418.html
> >>
> >> Stevan Harnad
> >>
> >>> Linda Butler
> >>> Research Evaluation and Policy Project
> >>> The Australian National University
> >>>
> >>> At 03:34 PM 17/11/2006, you wrote:
> >>>       Administrative info for SIGMETRICS (for example unsubscribe):
> >>>       http://web.utk.edu/~gwhitney/sigmetrics.html
> >>>
> >>>       ---------- Forwarded message ----------
> >>>       Date: Fri, 17 Nov 2006 14:44:39 +1100
> >>>       From: Arthur Sale <ahjs at ozemail.com.au>
> >>>       To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG
> >>>
> >>>       The Australian Government has released a definitive, if
> >>>       incomplete, description of Australia's Research Quality
> >>>       Framework (RQF) which is our equivalent of the UK's RAE. If
> >>>       familiar with the RAE, you will recognize the family
> >>>       resemblance. I extract the essentials of the RQF for an
> >>>       international readership, and analyze some of the
> >>>       consequences likely to flow from it. To see the
> >>>       documentation, see
> >>>       http://www.dest.gov.au/sectors/research_sector/policies_issues_reviews/key_issues/research_quality_framework/rqf_development_2006.htm
> >>>
> >>>       ESSENTIAL POINTS
> >>>
> >>>       1. The first RQF assessment will be based on submissions
> >>>       by the 38 Australian universities by 30 April 2008. Funding
> >>>       based on the assessment will flow in calendar year 2009. Six
> >>>       years will elapse before the next assessment (ie 2014), but
> >>>       there is provision to shorten this.
> >>>
> >>>       2. The Unit of Assessment is the Research Group. Research
> >>>       Groups will be defined by up to three RFCD four-digit codes
> >>>       (to allow for multi-disciplinary groups). The RFCD
> >>>       classification is uniquely Australian; for example, there
> >>>       are six four-digit codes in the field of ICT. Engineering
> >>>       has more, but for example Civil Engineering is one. If you
> >>>       are interested in the codes, see
> >>>       http://www.research.utas.edu.au/publications/docs/14_rfcd.doc;
> >>>       the four-digit codes are the sub-headings.
> >>>
> >>>       3. Each Research Group will be allocated to and assessed
> >>>       by one of 13 Panels. The Panel is determined by the primary
> >>>       RFCD code. Thus Mathematics, Computing and Information
> >>>       Technology is Panel 4.
> >>>
> >>>       4. Each University will submit an Evidence Portfolio (EP)
> >>>       for each identified Research Group. There is provision for
> >>>       cross-university Research Groups.
> >>>
> >>>       5. The ratings will be based on Quality and Impact
> >>>       separately. These words have peculiar (ie not common-usage)
> >>>       meanings. Approximately, Quality is a bag of quantifiable
> >>>       metrics, and Impact is all the soft things like Fellowships
> >>>       of Academies, Honors, journal associate editorships, etc.
> >>>       The relative importance of Quality and Impact will vary by
> >>>       Panel and is similarly not yet resolved. Quality is based on
> >>>       the best four publications (Research Output) of each
> >>>       researcher in the group over the six years 2002-2007, on a
> >>>       full list of all Research Output from the group including
> >>>       honorary and emeritus professors, and on competitive grants
> >>>       received over the period. Impact is covered in the Context
> >>>       Statement of the EP.
> >>>
> >>>       6. Quality for each Research Group will be assessed on a
> >>>       scale of 1 (not important) to 5 (prestigious).
> >>>
> >>>       7. Impact is rated A (outstanding) to E (poor).
> >>>
> >>>       8. Research Groups which rate below 2 for Quality, or
> >>>       below D for Impact, will attract no funding to their
> >>>       university, though the two factors are separately aggregated
> >>>       for the University. The weighting of funding is stated to be
> >>>       linear with rating, but the gradient will be determined
> >>>       during 2007.
> >>>
> >>>       9. The Panels require access to the electronic versions of
> >>>       any of the Research Output within four working days. The
> >>>       Panels will (a) rank the outputs by things like journal
> >>>       impact factors, journal standing, etc, (b) assess citation
> >>>       counts, both in aggregate and by the percentage that fall in
> >>>       the top decile for the discipline, and (c) take account of
> >>>       competitive grant income (a sketch of the top-decile
> >>>       calculation follows this list).
> >>>
> >>>       10. The RQF is based on a semi-centralized IT model (or
> >>>       semi-decentralized). In other words, the full-texts of the
> >>>       research outputs (publications) will be held in IRs in each
> >>>       university, while the RQF secretariat will run a repository
> >>>       with all the EPs and develop the citation counts independent
> >>>       of the universities (in conjunction with Thomson Scientific
> >>>       and possibly EndNote Web). The Australian Government will be
> >>>       approached for funds to universities to establish these IRs.
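> >>>
> >>>       To make the citation arithmetic in point 9 concrete, here
> >>>       is a minimal sketch in Python (every number is fabricated
> >>>       for illustration; the real thresholds would come from the
> >>>       discipline-wide distributions developed with Thomson
> >>>       Scientific):
> >>>
> >>>           # Aggregate citations for one Research Group, plus the
> >>>           # share of its outputs at or above the discipline's
> >>>           # 90th-percentile (top-decile) citation threshold.
> >>>           # All citation counts are invented.
> >>>           discipline = [0, 1, 1, 2, 3, 5, 8, 12, 20, 35, 60, 110]
> >>>           group = [2, 8, 40, 115]
> >>>
> >>>           cut = sorted(discipline)[int(0.9 * len(discipline))]
> >>>           aggregate = sum(group)
> >>>           in_top = sum(c >= cut for c in group)
> >>>           print(aggregate, 100.0 * in_top / len(group))  # 165 25.0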
> >>>
> >>>       ANALYSIS FOR OPEN ACCESS
> >>>
> >>>       * The RQF will actually use citation metrics in the
> >>>       assessment, not just test them as a "shadow exercise" as in
> >>>       the next RAE. This will mean that the OA citation advantage
> >>>       will suddenly look very attractive to Australian
> >>>       universities, though it is a bit late to do anything about
> >>>       it five years into a six-year window. However, with 2014 in
> >>>       mind, there will be pressure to increase citations.
> >>>
> >>>       * Every university will have to have an IR to hold the
> >>>       full-text of Research Outputs. About half already do, with
> >>>       EPrints and DSpace being the most popular software, plus a
> >>>       few Fedora-based repositories and outsourced ProQuest hosts.
> >>>       There will be funding to establish repositories.
> >>>
> >>>       * I expect a mad scramble in the smaller universities, with
> >>>       outsourcing and hosting solutions being very attractive.
> >>>       Money fixes everything. The ones that have been dithering
> >>>       will regret it.
> >>>
> >>>       * All Research Output generated by all Research Groups will
> >>>       have to be in the IRs for the RQF. This may amount to 50% of
> >>>       the university research production over six years, or more
> >>>       or less depending on how research-intensive it is. There are
> >>>       two corollaries: (a) this is Mandate by Money, and (b) there
> >>>       will be frantic activity over 2007 to put in the backlog of
> >>>       2002-2006 publications.
> >>>
> >>>       * Since one does not know what Research Output will be
> >>>       needed in 2014, and has only a general clue in 2007, 100%
> >>>       institutional mandates are likely to spring up all over the
> >>>       place, in the form of Mandate by Administration. What I mean
> >>>       by this is that the deposit of the paper will be integrated
> >>>       with the already-present administrative annual requirement
> >>>       to report the publication to the Australian Government.
> >>>
> >>>       * Although it is nowhere stated explicitly that I can see,
> >>>       I read between the lines that the RQF may be expecting to
> >>>       get access to the publisher's PDF. This means that it will
> >>>       have to be in the repository as "restricted access" in most
> >>>       cases, or as a link to an OA source. There is no reason why
> >>>       the OA postprint cannot be there as "open access" as well,
> >>>       of course, and if a citation advantage is to be had, it will
> >>>       need to be.
> >>>
> >>>       Please feel free to blog this or forward it to anyone you
> >>>       think may be interested. My apologies for cross-posting.
> >>>
> >>>       Arthur Sale
> >>>       Professor of Computing (Research)
> >>>       University of Tasmania
> >>>
> >>> Linda Butler
> >>> Research Evaluation and Policy Project
> >>> Research School of Social Sciences
> >>> The Australian National University
> >>> ACT 0200 Australia
> >>> Tel: 61 2 61252154 Fax: 61 2 61259767
> >>> http://repp.anu.edu.au
> >>>
> >>>
> >>>
> >>
> >>
> >>
> 


