RAE Questions

Stephen J Bensman notsjb at LSU.EDU
Wed Apr 5 17:18:06 EDT 2006


No sweat.  I say so many bad things that I make it a principle never to
skip a chance to say something nice.

SB




Loet Leydesdorff <loet at LEYDESDORFF.NET> on 04/05/2006 10:18:42 AM

Please respond to ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at LISTSERV.UTK.EDU>

Sent by:    ASIS&T Special Interest Group on Metrics
       <SIGMETRICS at LISTSERV.UTK.EDU>


To:    SIGMETRICS at LISTSERV.UTK.EDU
cc:     (bcc: Stephen J Bensman/notsjb/LSU)

Subject:    Re: [SIGMETRICS] RAE Questions


Dear Stephen: Thank you so much for your nice words. With best wishes and
in friendship,
Loet

> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
> Sent: Wednesday, April 05, 2006 3:09 PM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] RAE Questions
>
>
> Loet,
> The one reason I am so fascinated with your work is that you deal with
> the fundamental problem--proper set definition for the analysis.  This
> is the most difficult and, for me, the most subjective, value-laden
> part of the analysis.  Once people can agree on the sets, the rest is
> proper statistical technique, provided you use multiple measures that
> can cross-check each other--expert ratings, citations, library use,
> Internet use, etc.  It is a fundamental mistake to use citation
> analysis by itself, particularly since it seems that ISI data are
> dominated by the citation patterns of the US academic social
> stratification system.  This may make it invalid for other areas.  It
> is also necessary to be aware that there is not just one answer but
> multiple ones, depending on your objectives, etc.  The US NRC ratings
> were a broad-brush effort to determine the importance of programs in
> disciplines as a whole and not in specific subsets, which can be
> crucial.  For example, one specific subset that was not well covered
> by the NRC ratings was how to deal with wetlands, river delta areas,
> flood control, coastal zone areas, etc.  I think that you would find
> that the Netherlands would perhaps rank highest in this subset, and
> this is of the utmost interest to Louisiana now.
>
> I still think that the questions I raised about the RAEs are
> valid ones.
>
> SB
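
A note on Bensman's point about cross-checking multiple measures: in
practice this often amounts to computing a rank-correlation matrix across
the indicators. The sketch below is purely illustrative; the indicator
names and values are invented, pandas is assumed to be available, and the
choice of Spearman correlation is an assumption here, not something
specified in the thread.

# Illustrative sketch (invented data): cross-checking several evaluation
# indicators -- expert ratings, citations, library use, web downloads --
# by way of a Spearman rank-correlation matrix.
import pandas as pd

indicators = pd.DataFrame(
    {
        "expert_rating": [4.2, 3.1, 4.8, 2.5, 3.9],
        "citations":     [310, 120, 560,  40, 230],
        "library_use":   [880, 450, 990, 150, 700],
        "web_downloads": [5400, 2100, 7600, 900, 3900],
    },
    index=["Dept A", "Dept B", "Dept C", "Dept D", "Dept E"],
)

# High off-diagonal values suggest the measures cross-check one another;
# a weak row flags an indicator that is telling a different story.
print(indicators.corr(method="spearman").round(2))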
>
>
>
>
>
>
>
>
> Loet Leydesdorff <loet at LEYDESDORFF.NET> on 04/05/2006 01:25:58 AM
>
> Please respond to ASIS&T Special Interest Group on Metrics
>        <SIGMETRICS at LISTSERV.UTK.EDU>
>
> Sent by:    ASIS&T Special Interest Group on Metrics
>        <SIGMETRICS at LISTSERV.UTK.EDU>
>
>
> To:    SIGMETRICS at LISTSERV.UTK.EDU
> cc:     (bcc: Stephen J Bensman/notsjb/LSU)
>
> Subject:    Re: [SIGMETRICS] RAE Questions
>
>
> Dear Stephen and colleagues,
>
> There are legitimate uses of these measures, for example, teaching
> faculty to consider their own position in the literature. This may
> enable them to improve the quality and visibility of their
> contributions, to reorganize units, etc. The other legitimate use, of
> course, is our scholarly communication about how to study these
> bibliometric tools and how to use them as variables in a model of how
> the sciences (and technologies) develop.
>
> A number of problems with these measures in policy processes have now
> been listed. I want to add one: as long as we are not able to rank
> document sets (e.g., journals) clearly, it remains tricky to make
> inferences about authors and institutions. OA will not help solve
> these problems. :-)
>
> With best wishes,
>
>
> Loet
>
> ________________________________
> Loet Leydesdorff
> Amsterdam School of Communications Research (ASCoR),
> Kloveniersburgwal 48, 1012 CX Amsterdam.
> Tel.: +31-20- 525 6598; fax: +31-20- 525 3681;
> loet at leydesdorff.net ; http://www.leydesdorff.net/
>
>
>
> > -----Original Message-----
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
> > Sent: Wednesday, April 05, 2006 3:44 AM
> > To: SIGMETRICS at LISTSERV.UTK.EDU
> > Subject: Re: [SIGMETRICS] RAE Questions
> >
> >
> > I have given lectures to faculty on the use of citation analysis for
> > purposes of faculty evaluation.  I have always prefaced these lectures
> > with the comment that I feel guilty, in that I may be passing out hand
> > grenades to kindergarten students.  Citation analysis can be
> > extraordinarily destructive if misapplied.  If you take just one
> > department, you can see the problem.  Even in a department covering a
> > relatively homogeneous field, you have professors engaged in different
> > specialties of differing size.  Then you have the problem of differing
> > professional age.  Due to these factors you cannot use raw citation
> > counts, but must compare professors to an outside set of the same
> > subject specialty and the same professional age.  Then you have to
> > standardize the scores for comparative purposes.  Just defining the
> > subject set can cause horrendous difficulties, as certain professors
> > may consider a given professor's subject set insignificant in the
> > first place and unworthy of even being pursued.  I mean, you should
> > already see the difficulties.  I have always come back from these
> > experiences clawed to pieces and seeking a hole in which to hide.  It
> > is more politics and art than science.
> >
> > The trouble with a thing like the NRC ratings is that they work on
> > gross parameters and miss certain strengths.  For example, LSU is not
> > highly rated in history and English, but change the sets to Southern
> > history and Southern literature, and it suddenly comes out on top.  I
> > am sure that you can invent even more difficulties.  It is good to
> > study these things, but it is best to analyze people as individuals
> > rather than in the aggregate.  Use of citation analysis is so
> > provocative that I have advised the person in charge of serials
> > cancellations not to use the impact factor in any way to analyze
> > journals, lest he be killed by the faculty, and, if he does use it, to
> > hide the fact that he is doing so.  Faculty do not like outsiders with
> > measures they consider questionable sticking their noses into what
> > they consider their business.
> >
> > SB
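
The cohort comparison described above, benchmarking each professor
against an outside reference set of the same subject specialty and
professional age and then standardizing, can be made concrete with a
small sketch. Everything in it is hypothetical: the specialties, the age
bands, the citation counts, and the use of z-scores as the
standardization.

# Hypothetical sketch of the cohort standardization described above:
# compare each professor's citation count to an outside reference set
# drawn from the same subject specialty and professional-age band, then
# express the result as a z-score.  All data and field names are invented.
from statistics import mean, stdev

# Reference cohorts: (specialty, years-since-PhD band) -> citation counts
reference = {
    ("southern_history", "10-20"): [12, 30, 25, 8, 41, 19, 22, 15],
    ("coastal_ecology", "10-20"): [95, 140, 210, 60, 180, 120, 155, 90],
}

def standardized_score(citations, specialty, age_band):
    """Z-score of a professor's citations against the matched cohort."""
    cohort = reference[(specialty, age_band)]
    return (citations - mean(cohort)) / stdev(cohort)

# The same raw count of 28 means very different things in the two fields.
print(round(standardized_score(28, "southern_history", "10-20"), 2))
print(round(standardized_score(28, "coastal_ecology", "10-20"), 2))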
> >
> >
> >
> >
> >
> >
> > Loet Leydesdorff <loet at LEYDESDORFF.NET> on 04/04/2006 04:15:40 PM
> >
> > Please respond to ASIS&T Special Interest Group on Metrics
> >        <SIGMETRICS at listserv.utk.edu>
> >
> > Sent by:    ASIS&T Special Interest Group on Metrics
> >        <SIGMETRICS at listserv.utk.edu>
> >
> >
> > To:    SIGMETRICS at listserv.utk.edu
> > cc:     (bcc: Stephen J Bensman/notsjb/LSU)
> >
> > Subject:    Re: [SIGMETRICS] RAE Questions
> >
> >
> > Yes, Stephen, I meant your mobility issue. Thanks for the correction.
> > My points were also mainly questions in response to the plea for a
> > metrics program as a replacement for the RAE (without defending the
> > latter in any sense). The idea of a multivariate regression is
> > attractive, but there are some unsolved problems which Stevan Harnad
> > thinks can easily be solved or dismissed.
> >
> > Best,  Loet
> >
> > ________________________________
> > Loet Leydesdorff
> > Amsterdam School of Communications Research (ASCoR),
> > Kloveniersburgwal 48, 1012 CX Amsterdam.
> > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681;
> > loet at leydesdorff.net ; http://www.leydesdorff.net/
> >
> >
> >
> > > -----Original Message-----
> > > From: ASIS&T Special Interest Group on Metrics
> > > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stephen J Bensman
> > > Sent: Tuesday, April 04, 2006 10:29 PM
> > > To: SIGMETRICS at LISTSERV.UTK.EDU
> > > Subject: Re: [SIGMETRICS] RAE Questions
> > >
> > >
> > > It seems that I have walked into the middle of a discussion that I
> > > do not understand.  I just want to correct one thing.  The following
> > > comment was made below:
> > >
> > > "Stephen Bensman also mentioned the instability of these skewed
> > > curves over time. I would anyhow be worried about the comparisons
> > > over time because of auto-correlation (auto-covariance) effects."
> > >
> > > I did not state that.  I stated that these skewed curves are highly
> > > stable over time, with high intertemporal correlations and the same
> > > programs comprising the top stratum for decades.  For example, the
> > > ten chemistry programs most highly rated in 1910 were still among
> > > the top 15 programs most highly rated in 1993.  It is interesting to
> > > note that Garfield found the same phenomenon with respect to
> > > journals, which have the same high distributional stability over
> > > time.  This is probably due to the cumulative advantage process
> > > underlying both phenomena.  I suppose it leads to high
> > > auto-correlation also.
> > >
> > > From this perspective, RAEs every four years seem somewhat of a
> > > redundancy.  You might as well give the money to the same
> > > departments you found at the top in the previous rating without any
> > > analysis.  My main concern was not about the stability of the
> > > hierarchy--which is a given--but about the mobility of individuals
> > > within the hierarchy.  There is nothing more self-destructive than a
> > > closed hierarchy.  It leads to class war of the worst kind.
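
The stability Bensman describes is what a cumulative-advantage process
produces almost by construction. The following toy simulation, with
invented programs and citation counts, shows early leaders tending to
remain in the top stratum after many rounds of proportional, "success
breeds success" allocation; it illustrates the general mechanism, not the
NRC or RAE data.

# Hypothetical sketch of cumulative advantage: each new citation goes to
# a program with probability proportional to the citations it already
# has, so programs with an early head start tend to stay on top.
import random

random.seed(1)
# Ten invented programs; the later ones start with a small head start.
programs = {f"Program {i}": 10 + i for i in range(10)}

def top5(counts):
    return sorted(counts, key=counts.get, reverse=True)[:5]

initial_top5 = top5(programs)
for _ in range(80 * 200):  # eight "decades", 200 new citations per year
    winner = random.choices(list(programs), weights=list(programs.values()))[0]
    programs[winner] += 1

print("Top 5 at the start:", initial_top5)
print("Top 5 at the end:  ", top5(programs))  # typically largely the same set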
> > >
> > > Really, I was only speculating, musing about negative possibilities
> > > that I perceived while reading about the RAEs.  I was raising
> > > questions more than answering them.
> > >
> > > SB
> > >
> > >
> > >
> > >
> > > Stevan Harnad <harnad at ECS.SOTON.AC.UK> on 04/04/2006 02:34:21 PM
> > >
> > > Please respond to ASIS&T Special Interest Group on Metrics
> > >        <SIGMETRICS at listserv.utk.edu>
> > >
> > > Sent by:    ASIS&T Special Interest Group on Metrics
> > >        <SIGMETRICS at listserv.utk.edu>
> > >
> > >
> > > To:    SIGMETRICS at listserv.utk.edu
> > > cc:     (bcc: Stephen J Bensman/notsjb/LSU)
> > >
> > > Subject:    Re: [SIGMETRICS] RAE Questions
> > >
> > >
> > > On Tue, 4 Apr 2006, Loet Leydesdorff wrote:
> > >
> > > > Now that we have amply discussed the political side of the RAE,
> > > > let us turn to your research program of replacing the RAE with
> > > > metrics.
> > >
> > > Loet, we can discuss my research program if you like, but we were
> > > not discussing that. We were discussing the UK government's proposed
> > > policy of replacing the RAE with metrics. That has nothing to do
> > > with my research program.
> > >
> > > They decided to switch from the present hybrid system (of
> > > re-reviewing published articles plus some metrics) to metrics alone
> > > because metrics alone are already so highly correlated with the
> > > current RAE outcomes (in many, though not necessarily all, fields).
> > > No critique of metrics overrides that decision where the two are
> > > already so highly correlated. It would be pure superstition to
> > > continue going through the ergonomically and economically wasteful
> > > motions of the re-review when the outcome is already there in the
> > > metrics.
> > >
> > > > Two problems have been mentioned which cannot easily be solved:
> > > >
> > > > 1. the skewness of the distributions
> > >
> > > I think there are ways to adjust for this.
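
Two standard ways to adjust for skewness, offered only as illustrations
and not as what Harnad specifically had in mind, are a log transform and
conversion to percentile ranks. A minimal sketch with invented citation
counts:

# Hypothetical sketch: two common ways to tame a skewed citation
# distribution before comparing units -- a log transform and a
# conversion to percentile ranks.  The counts below are invented.
import math

citations = [0, 1, 1, 2, 3, 5, 8, 21, 55, 400]   # heavily right-skewed

log_counts = [math.log1p(c) for c in citations]   # log(1 + c) handles zeros

def percentile_ranks(values):
    """Fraction of observations each value exceeds (ties ignored for brevity)."""
    ordered = sorted(values)
    return [ordered.index(v) / (len(values) - 1) for v in values]

print([round(x, 2) for x in log_counts])
print([round(x, 2) for x in percentile_ranks(citations)])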
> > >
> > > > 2. the heterogeneity of departments as units of analysis
> > >
> > > That is a separate matter. The proposal to swap metrics alone for a
> > > redundant, expensive, time-consuming hybrid process that yields the
> > > same outcome was based on the units of analysis as they now are. The
> > > units too could be revised, and perhaps should be, but that is an
> > > independent question.
> > >
> > > > The first problem can be solved by using non-parametric regression
> > > > analysis (probit or logit) instead of multivariate regression
> > > > analysis of the LISREL type. However, will this provide you with a
> > > > ranking? I cannot judge, because I have never done it myself.
> > >
> > > The present RAE outcome (rankings) is highly correlated with metrics
> > > already. If we correct the metrics for skewness, this may continue
> > > to give the same highly correlated outcome, or another one. The RAE
> > > can then decide which one it wants to trust more, and why, but
> > > either way it has no bearing on the validity of the decision to
> > > scrap re-reviews for metrics when they give almost the same outcome
> > > anyway.
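
To make Loet's probit/logit suggestion concrete, here is a minimal
sketch of a logit model on invented data, with statsmodels assumed to be
available. Binarizing the outcome as "top-rated or not" is one possible
operationalization among several, and the variable names are made up.

# Hypothetical sketch of a logit model: predict whether a department is
# top-rated (1) or not (0) from a skewed citation indicator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
citations_per_staff = rng.lognormal(mean=2.0, sigma=1.0, size=200)  # skewed
top_rated = (citations_per_staff + rng.normal(0, 5, size=200) > 10).astype(int)

X = sm.add_constant(np.log1p(citations_per_staff))  # log transform tames the skew
model = sm.Logit(top_rated, X).fit(disp=0)
print(model.summary())

# Sorting the fitted probabilities would yield a ranking of sorts;
# whether that ranking is defensible is the question Loet leaves open.
print(np.argsort(-model.predict(X))[:5])  # indices of the five most likely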
> > >
> > > > Stephen Bensman also mentioned the instability of these skewed
> > > > curves over time. I would anyhow be worried about the comparisons
> > > > over time because of auto-correlation (auto-covariance) effects.
> > >
> > > Whatever their skewness, temporal variability and auto-correlation,
> > > the rankings based on metrics are very similar to the rankings based
> > > on re-review. The starting point is to have a metric that does *at
> > > least as well* as the re-review did, and then to start work on
> > > optimizing it. Let us not forget the real alternatives at issue. As
> > > I said, it would be superstitious and absurd to go back from cheap
> > > metrics to profligate re-reviews because of putative blemishes in
> > > the metrics *when both yield the same outcome*.
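
"Very similar" rankings can be quantified with a rank correlation. The
sketch below uses invented ranks and assumes scipy is available; it does
not reproduce any actual RAE or metric data.

# Hypothetical sketch: how similar are two rankings of the same eight
# departments, one from peer re-review and one from a citation metric?
from scipy.stats import spearmanr

rank_by_review = [1, 2, 3, 4, 5, 6, 7, 8]
rank_by_metric = [1, 3, 2, 4, 6, 5, 7, 8]   # mostly the same ordering

rho, p_value = spearmanr(rank_by_review, rank_by_metric)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho close to 1 is the sense in which a metric "does at least as
# well" as re-review for ranking purposes.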
> > >
> > > > I have run into these problems before, and therefore I am a big
> > > > fan of entropy statistics. But policy makers tend not to
> > > > understand the results, even if one can teach them something about
> > > > "reduction of the uncertainty". They will want firm numbers to
> > > > legitimate decisions.
> > >
> > > If policy makers have been content to rank the departments and
> > > shell out the money in proportion to the ranks for two decades now,
> > > and those ranks are derivable from cheap metrics instead of costly
> > > re-reviews, they will understand enough to know they should go with
> > > metrics. Then you can give them a course on how to improve on their
> > > metrics with "entropy statistics".
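
For readers unfamiliar with the entropy statistics Loet mentions, the
"reduction of the uncertainty" can be illustrated with Shannon entropy:
the uncertainty in a distribution of publications over fields, and how
much of it disappears once department membership is known. The data
below are invented and the example is generic, not Loet's own analysis.

# Hypothetical sketch of "reduction of the uncertainty": Shannon entropy
# of a publication distribution over fields, and how much of it is
# removed once we condition on (invented) departments.
import math
from collections import Counter

# (department, field) of 12 invented publications
records = [
    ("Dept X", "physics"), ("Dept X", "physics"), ("Dept X", "chemistry"),
    ("Dept X", "physics"), ("Dept Y", "history"), ("Dept Y", "history"),
    ("Dept Y", "literature"), ("Dept Y", "history"), ("Dept Z", "biology"),
    ("Dept Z", "biology"), ("Dept Z", "chemistry"), ("Dept Z", "biology"),
]

def entropy(labels):
    """Shannon entropy H = -sum p*log2(p) over a list of labels, in bits."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_field = entropy([field for _, field in records])
# Conditional entropy H(field | dept): department-weighted within-dept entropy.
depts = {d for d, _ in records}
h_field_given_dept = sum(
    (len([f for d2, f in records if d2 == d]) / len(records))
    * entropy([f for d2, f in records if d2 == d])
    for d in depts
)

print(f"H(field)            = {h_field:.2f} bits")
print(f"H(field | dept)     = {h_field_given_dept:.2f} bits")
print(f"Uncertainty reduced = {h_field - h_field_given_dept:.2f} bits")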
> > >
> > > > The second problem is generated because you will have
> > > > institutional units of analysis which may be composed of different
> > > > disciplinary affiliations, and to a variable extent.
> > >
> > > That is already true, and it is true regardless of whether the RAE
> > > does or does not do the re-review over and above the metrics which
> > > are already highly correlated with the outcome. If rejuggling units
> > > improves the equity and predictivity of the rankings, by all means
> > > rejuggle them. But in and of itself that has nothing to do with the
> > > obvious good sense of scrapping profligate re-review in favour of
> > > parsimonious metrics when they yield the same outcome -- even with
> > > the present unit structure.
> > >
> > > > For example, I am myself misplaced in a unit of communication
> > > > studies. In other cases, universities will have set up
> > > > "interdisciplinary units" on purpose while individual scholars
> > > > continue to affiliate themselves with their original disciplines.
> > > > We know that publication and citation practices vary among
> > > > disciplines. Thus, one should not compare apples with oranges.
> > >
> > > It sounds worth remedying, but the question is orthogonal to the
> > > question of whether to retain wasteful re-review or to rely on
> > > metrics that give the same outcome at a fraction of the cost in lost
> > > time and money (that could have been devoted to funding research
> > > instead of just rating it).
> > >
> > > > I would be inclined to advise against embarking on this research
> > > > project before one has an idea of how to handle these two
> > > > problems. Fortunately, I was not the reviewer :-).
> > >
> > > I am not sure which research project you are talking about. (I was
> > > just funded for a metrics project in Canada, but it has nothing to
> > > do with the RAE. The RAE, in contrast, has elected to scrap
> > > re-review in favour of the metrics that already yield the same
> > > outcome, but that has nothing to do with my research project.)
> > >
> > > Stevan Harnad
> > > American Scientist Open Access Forum
> > > http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
> > >
> > > Chaire de recherche du Canada           Professor of Cognitive Science
> > > Ctr. de neuroscience de la cognition    Dpt. Electronics & Computer Science
> > > Université du Québec à Montréal         University of Southampton
> > > Montréal, Québec                        Highfield, Southampton
> > > Canada  H3C 3P8                         SO17 1BJ United Kingdom
> > > http://www.crsc.uqam.ca/
> > > http://www.ecs.soton.ac.uk/~harnad/
> >
>


