RAE Questions

Loet Leydesdorff loet at LEYDESDORFF.NET
Tue Apr 4 17:15:40 EDT 2006


Yes, Stephen, I meant your mobility issue. Thanks for the correction. My
points were also mainly questions in response to the plea for a metrics
program as a replacement for the RAE (without defending the latter in any
sense). The idea of a multivariate regression is attractive, but there are
some unsolved problems which Stevan Harnad thinks can easily be solved
or dismissed.

Best,  Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20-525 6598; fax: +31-20-525 3681;
loet at leydesdorff.net ; http://www.leydesdorff.net/



> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
> Sent: Tuesday, April 04, 2006 10:29 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] RAE Questions
>
> It seems that I have walked into the middle of a discussion
> that I do not understand.  I just want to correct one thing.
> The following comment was made below:
>
> "Stephen Bensman also mentioned the instability of these
> skewed curves over time. I would anyhow be worried about the
> comparisons over time because of auto-correlation
> auto-covariance) effects."
>
> I did not state that.  I stated that these skewed curves are
> highly stable over time with high intertemporal correlations
> and the same programs comprising the top stratum for decades.
>  For example, the ten chemistry programs most highly rated in
> 1910 were still among the top 15 programs most highly rated
> in 1993.  It is interesting to note that Garfield found the
> same phenomenon with respect to journals, which have the same
> high distributional stability over time.  This is probably
> due to the cumulative advantage process underlying both
> phenomena.  I suppose it leads to high auto-correlation also.
> From this perspective, RAEs every four years seem somewhat
> redundant.  You might as well give the money to the same
> departments you found at the top in the previous rating
> without any analysis.  My main concern was not about the
> stability of the hierarchy--which is a given--but about the
> mobility of individuals within the hierarchy.  There is
> nothing more self-destructive than a closed hierarchy.
> It leads to class war of the worst kind.
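>
> To make this concrete: the stability I have in mind can be expressed
> as a rank correlation between ratings taken decades apart. A minimal
> sketch, assuming Python with scipy; the ratings below are invented
> for illustration:
>
>     from scipy.stats import spearmanr
>
>     # Hypothetical peer ratings of the same ten programs at two
>     # points in time, decades apart.
>     ratings_t1 = [95, 90, 88, 85, 80, 78, 75, 70, 65, 60]
>     ratings_t2 = [93, 92, 84, 86, 79, 72, 76, 69, 66, 58]
>
>     rho, p = spearmanr(ratings_t1, ratings_t2)
>     print(f"intertemporal rank correlation: rho={rho:.2f}")
>
> A rho close to 1 over such a span is what high distributional
> stability amounts to operationally.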
>
> Really I was only speculating, musing about negative
> possibilities that I perceived while reading about the RAEs.
> I was really raising questions more than answering them.
>
> SB
>
>
>
>
> Stevan Harnad <harnad at ECS.SOTON.AC.UK> on 04/04/2006 02:34:21 PM
>
> Please respond to ASIS&T Special Interest Group on Metrics
>        <SIGMETRICS at listserv.utk.edu>
>
> Sent by:    ASIS&T Special Interest Group on Metrics
>        <SIGMETRICS at listserv.utk.edu>
>
>
> To:    SIGMETRICS at listserv.utk.edu
> cc:     (bcc: Stephen J Bensman/notsjb/LSU)
>
> Subject:    Re: [SIGMETRICS] RAE Questions
>
> On Tue, 4 Apr 2006, Loet Leydesdorff wrote:
>
> > Now that we have amply discussed the political side of the RAE, let
> > us turn to your research program of replacing the RAE with metrics.
>
> Loet, we can discuss my research program if you like, but we
> were not discussing that. We were discussing the UK
> government's proposed policy of replacing the RAE with
> metrics. That has nothing to do with my research program.
> They decided to switch from the present hybrid system (of
> re-reviewing published articles plus some metrics) to metrics
> alone because metrics alone are already so highly correlated
> with the current RAE outcomes (in many, though not
> necessarily all fields). No critique of metrics overrides
> that decision where the two are already so highly correlated.
> It would be pure superstition to continue going through the
> ergonomically and economically wasteful motions of the
> re-review when the outcome is already there in the metrics.
>
> > Two problems have been mentioned which cannot easily be solved:
> >
> > 1. the skewness of the distributions
>
> I think there are ways to adjust for this.
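> One could, for instance, log-transform the counts or replace them by
> percentile ranks. A minimal sketch of both, assuming Python with
> numpy and scipy, and invented citation counts:
>
>     import numpy as np
>     from scipy.stats import rankdata
>
>     # Hypothetical, heavily skewed citation counts for one unit.
>     citations = np.array([0, 1, 1, 2, 3, 5, 8, 40, 250])
>
>     # log1p compresses the long right tail and keeps zeros defined.
>     log_scores = np.log1p(citations)
>
>     # Percentile ranks remove the skew entirely (ties share a rank).
>     pct_ranks = (rankdata(citations) - 1) / (len(citations) - 1)
>
>     print(log_scores.round(2), pct_ranks.round(2))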
>
> > 2. the heterogeneity of departments as units of analysis
>
> That is a separate matter. The proposal to substitute metrics alone
> for a redundant, expensive, time-consuming hybrid process
> that yields the same outcome was based on the units of
> analysis as they now are. The units too could be revised, and
> perhaps should be, but that is an independent question.
>
> > The first problem can be solved by using non-parametric regression
> > analysis (probit or logit) instead of multivariate regression
> > analysis of the LISREL type. However, will this provide you with a
> > ranking? I cannot say, because I have never done it myself.
>
> The present RAE outcome (rankings) is highly correlated with
> metrics already. If we correct the metrics for skewness, this
> may continue to give the same highly correlated outcome, or
> another one. RAE can then decide which one it wants to trust
> more, and why, but either way, it has no bearing on the
> validity of the decision to scrap re-reviews for metrics when
> they give almost the same outcome anyway.
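>
> As to whether a probit or logit would still provide a ranking: the
> fitted probabilities are continuous, so they induce an ordering
> directly. A minimal sketch, assuming Python with numpy and
> statsmodels, on invented data:
>
>     import numpy as np
>     import statsmodels.api as sm
>
>     rng = np.random.default_rng(0)
>
>     # Hypothetical standardized metrics for 50 departments (e.g.
>     # citations, funding) and a binary outcome (top-rated or not).
>     X = sm.add_constant(rng.normal(size=(50, 2)))
>     y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=50) > 0).astype(int)
>
>     fit = sm.Logit(y, X).fit(disp=0)
>
>     # Sorting the fitted probabilities yields a ranking, best first.
>     ranking = np.argsort(-fit.predict(X))
>     print(ranking[:10])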
>
> > Stephen Bensman also mentioned the
> > instability of these skewed curves over time. I would anyhow be
> > worried about the comparisons over time because of auto-correlation
> > (auto-covariance) effects.
>
> Whatever their skewness, temporal variability and
> auto-correlation, the rankings based on metrics are very
> similar to the rankings based on re-review. The starting
> point is to have a metric that does *at least as
> well* as the re-review did, and then to start work on
> optimizing it. Let us not forget the real alternatives at
> issue. As I said, it would be superstitious and absurd to go
> back from cheap metrics to profligate re-reviews because of
> putative blemishes in the metrics *when both yield the same outcome*.
>
> > I have run into these problems before, and therefore I am a big fan
> > of entropy statistics. But policy makers tend not to understand the
> > results, even if one can teach them something about "reduction of
> > the uncertainty". They will want firm numbers to legitimate
> > decisions.
>
> If policy makers have been content to rank the departments
> and shell out the money in proportion to the ranks for two
> decades now, and those ranks are derivable from cheap metrics
> instead of costly re-reviews, they will understand enough to
> know they should go with metrics. Then you can give them a
> course on how to improve on their metrics with "entropy statistics".
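>
> That "reduction of the uncertainty" can itself be made concrete as
> the drop in Shannon entropy of the rating distribution once a metric
> is taken into account. A minimal sketch, assuming Python with numpy
> and scipy, on an invented distribution:
>
>     import numpy as np
>     from scipy.stats import entropy
>
>     # Hypothetical distribution of departments over five rating
>     # classes: before (uniform) and after conditioning on a metric.
>     prior = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
>     posterior = np.array([0.05, 0.10, 0.15, 0.30, 0.40])
>
>     reduction = entropy(prior, base=2) - entropy(posterior, base=2)
>     print(f"uncertainty reduced by {reduction:.2f} bits")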
>
> > The second problem is generated because you will have institutional
> > units of analysis which may be composed of different disciplinary
> > affiliations, and to a variable extent.
>
> That is already true, and it is true regardless of whether
> the RAE does or does not do the re-review over and above the
> metrics which are already highly correlated with the outcome.
> If rejuggling units improves the equity and predictivity of
> the rankings, by all means rejuggle them. But in and of
> itself that has nothing to do with the obvious good sense of
> scrapping profligate re-review in favour of parsimonious
> metrics when they yield the same outcome -- even with the
> present unit structure.
>
> > For example, I am myself misplaced in a unit of communication studies.
> > In other cases, universities will have set up "interdisciplinary
> > units" on purpose while individual scholars continue to affiliate
> > themselves with their original disciplines. We know that publication
> > and citation practices vary among disciplines. Thus, one should not
> > compare apples with oranges.
>
> It sounds worth remedying, but the question is orthogonal to
> the question of whether to retain wasteful re-review or to
> rely on metrics that give the same outcome at a fraction of
> the cost in lost time and money (that could have been devoted
> to funding research instead of just rating it).
>
> > I would be inclined to advise against embarking on this research
> > project before one has an idea of how to handle these two problems.
> > Fortunately, I was not the reviewer :-).
>
> I am not sure which research project you are talking about.
> (I was just funded for a metrics project in Canada, but it
> has nothing to do with the RAE. The RAE, in contrast, has
> elected to scrap re-review in favour of the metrics that
> already yield the same outcome, but that has nothing to do
> with my research project.)
>
> Stevan Harnad
> American Scientist Open Access Forum
> http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
>
>
> Chaire de recherche du Canada           Professor of Cognitive Science
> Ctr. de neuroscience de la cognition    Dpt. Electronics & Computer Science
> Université du Québec à Montréal         University of Southampton
> Montréal, Québec                        Highfield, Southampton
> Canada  H3C 3P8                         SO17 1BJ United Kingdom
> http://www.crsc.uqam.ca/
> http://www.ecs.soton.ac.uk/~harnad/
>


