PS Citation statistics

Stevan Harnad harnad at ECS.SOTON.AC.UK
Mon Jun 16 10:26:41 EDT 2008


On 16-Jun-08, at 10:00 AM, Stephen J Bensman wrote:

> For the hell of it, I just checked the correlation between peer ratings
> and citations per faculty member in the evaluation of US
> research-doctorate programs conducted by the National Research Council
> in 1993.  For mathematics it was a mere 0.56--not high enough to
> inspire much confidence.  The corresponding correlations for chemistry
> and physics were 0.81 and 0.70.  These correlations would rise
> significantly if total cites per program were substituted, but it does
> indicate that math is somewhat of a different kettle of fish.

Good evidence for the need to validate a battery of metrics, not just  
citation counts. But certainly not evidence that a battery of metrics  
could not raise that correlation still higher...
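
A minimal sketch of that point, with synthetic data and hypothetical
metric names (citations, downloads, research funding) standing in for a
real battery: the in-sample multiple correlation of a fitted combination
of metrics with the peer ratings can only equal or exceed the
correlation of any single metric.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100  # hypothetical number of departments

    # One latent "quality" factor drives the peer ratings; each metric
    # observes that factor with independent noise (illustrative numbers).
    peer = rng.normal(size=n)
    cites = peer + rng.normal(scale=1.2, size=n)
    downloads = peer + rng.normal(scale=1.2, size=n)
    funding = peer + rng.normal(scale=1.2, size=n)

    # Single metric (cf. the 0.56 reported above for mathematics)
    r_single = np.corrcoef(peer, cites)[0, 1]

    # Battery: least-squares combination of all three metrics
    X = np.column_stack([np.ones(n), cites, downloads, funding])
    beta, *_ = np.linalg.lstsq(X, peer, rcond=None)
    r_battery = np.corrcoef(peer, X @ beta)[0, 1]

    print(f"citations alone:      r = {r_single:.2f}")
    print(f"three-metric battery: R = {r_battery:.2f}")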

Stevan Harnad
American Scientist Open Access Forum
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html

Chaire de recherche du Canada		Professor of Cognitive Science
Institut des sciences cognitives	Electronics & Computer Science
Universite du Quebec a Montreal		University of Southampton
Montreal, Quebec			Highfield, Southampton
Canada  H3C 3P8				SO17 1BJ United Kingdom
http://www.crsc.uqam.ca/		http://users.ecs.soton.ac.uk/harnad/

>
>
> Stephen J. Bensman
> LSU Libraries
> Louisiana State University
> Baton Rouge, LA   70803
> USA
> notsjb at lsu.edu
> -----Original Message-----
> From: Stephen J Bensman
> Sent: Monday, June 16, 2008 8:49 AM
> To: 'ASIS&T Special Interest Group on Metrics'
> Subject: RE: [SIGMETRICS] Citation statistics
>
> In re the discussion here: mathematics is a peculiar field in that it
> acts more like the humanities than a science.  The literature cited is
> much older, and cites and library use are distributed much more
> randomly.  Moreover, I have a funny feeling that the impact factor
> distribution may be Poisson or binomial, owing to the absence of
> dominant review journals, which in turn reflects an inability to form
> consensual paradigms.  I discussed this matter with the chairman of the
> LSU math department, and he stated that mathematicians not only do not
> know the answers, they do not even know the questions.  He also pointed
> out that it is always "Mathematics and the Sciences" in group
> classifications, indicating that math is something different.  It is
> like "the Social and Behavioral Sciences."  If this is the case, and
> math acts more like the humanities than a science, then citation
> analysis may be out of the question, and it is necessary to rely on
> peer judgment.
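
Bensman's Poisson conjecture is, in principle, testable with a simple
variance-to-mean dispersion check. A minimal sketch with synthetic
counts (assumed parameters, not real impact-factor data): a Poisson
sample gives a ratio near 1, whereas citation counts in most fields are
strongly overdispersed.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical count samples: Poisson vs. an overdispersed negative
    # binomial with the same mean (3) but variance 12.
    poisson_like = rng.poisson(lam=3.0, size=500)
    overdispersed = rng.negative_binomial(n=1, p=0.25, size=500)

    for label, counts in [("Poisson-like", poisson_like),
                          ("overdispersed", overdispersed)]:
        # variance/mean ~ 1 is consistent with Poisson; >> 1 is not
        ratio = counts.var(ddof=1) / counts.mean()
        print(f"{label}: mean={counts.mean():.2f}, "
              f"variance/mean={ratio:.2f}")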
>
> Stephen J. Bensman
> LSU Libraries
> Louisiana State University
> Baton Rouge, LA   70803
> USA
> notsjb at lsu.edu
>
> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stevan Harnad
> Sent: Monday, June 16, 2008 8:20 AM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: Re: [SIGMETRICS] Citation statistics
>
> On Sun, 15 Jun 2008, Loet Leydesdorff wrote:
>
>>> SH: But what all this valuable, valid cautionary discussion overlooks
>>> is not only the possibility but the *empirically demonstrated fact*
>>> that there exist metrics that are highly correlated with human expert
>>> rankings.
>>
>> It seems to me that it is difficult to generalize from one setting in
>> which human experts and certain ranks coincided to the *existence* of
>> such correlations across the board. Much may depend on how the experts
>> are selected. I did some research in which referee reports did not
>> correlate with citation and publication measures.
>
> Much may depend on how the experts are selected, but that was just as
> true during the 20 years in which rankings by experts were the sole
> criterion in the UK Research Assessment Exercise (RAE). (In validating
> predictive metrics one must not endeavor to be holier than the Pope:
> your predictor can at best hope to be as good as, but not better than,
> your criterion.)
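
A small simulation of that ceiling, with synthetic data ("peer" here is
a noisy criterion for a latent quality): even a predictor that tracks
the latent quality perfectly cannot correlate with the noisy criterion
more strongly than the criterion's own reliability allows.

    import numpy as np

    rng = np.random.default_rng(2)
    quality = rng.normal(size=10_000)   # latent "true" quality
    peer = quality + rng.normal(scale=0.6, size=10_000)  # noisy criterion

    perfect_predictor = quality         # the best any metric could do
    r = np.corrcoef(perfect_predictor, peer)[0, 1]
    # Expected ceiling: 1 / sqrt(1 + 0.6**2) ~ 0.86, not 1.0
    print(f"correlation with the noisy criterion: {r:.2f}")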
>
> That said: All correlations to date between total departmental author
> citation counts (not journal impact factors!) and RAE peer rankings
> have been positive, sizable, and statistically significant for the
> RAE, in all disciplines and all years tested. Variance there will be,
> always, but a good-sized component from citations alone seems to be
> well-established. Please see the studies of Professor Oppenheim and
> others, for example as cited in:
>
>    Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated
>    online RAE CVs Linked to University Eprint Archives: Improving the
>    UK Research Assessment Exercise whilst making it cheaper and easier.
>    Ariadne 35.
>    http://www.ariadne.ac.uk/issue35/harnad/
>
>> Human experts are necessarily selected from a population of experts,
>> and it is often difficult to delineate between fields of expertise.
>
> Correct. And the RAE rankings are done separately, discipline by
> discipline; the validation of the metrics should be done that way too.
>
> Perhaps there is sometimes a case for separate rankings even at
> sub-disciplinary level. I expect the departments will be able to sort
> that out. (And note that the RAE correlations do not constitute a
> validation of metrics for evaluating individuals: I am confident that
> that too will be possible, but it will require many more metrics and
> much more validation.)
>
>> Similarly, we know from quite some research that citation and
>> publication practices are field-specific and that fields are not so
>> easy to delineate. Results may be very sensitive to choices made, for
>> example, in terms of citation windows.
>
> As noted, some of the variance in peer judgments will depend on the
> sample of peers chosen; that is unavoidable. That is also why "light
> touch" peer re-validation, spot-checks, updates and optimizations of
> the initialized metric weights are a good idea, across the years.
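
One concrete (and entirely hypothetical) reading of "initialized metric
weights" plus "light touch" re-validation, sketched with synthetic data:
fit the weights against one full peer exercise, then in a later round
spot-check the weighted metrics against a small peer sample and re-fit
only if the correlation degrades.

    import numpy as np

    rng = np.random.default_rng(1)
    TRUE_W = np.array([0.6, 0.3, 0.1])  # assumed latent weighting (simulation only)

    def fit_weights(metrics, peer_scores):
        """Least-squares weights mapping a metric battery to peer scores."""
        X = np.column_stack([np.ones(len(metrics)), metrics])
        w, *_ = np.linalg.lstsq(X, peer_scores, rcond=None)
        return w

    def predict(metrics, w):
        return np.column_stack([np.ones(len(metrics)), metrics]) @ w

    # Initialization: one full peer exercise over 80 departments
    metrics0 = rng.normal(size=(80, 3))
    peer0 = metrics0 @ TRUE_W + rng.normal(scale=0.5, size=80)
    w = fit_weights(metrics0, peer0)

    # Later round: metrics for all 80, but peers re-rank only 15
    metrics1 = rng.normal(size=(80, 3))
    spot = rng.choice(80, size=15, replace=False)
    peer_spot = metrics1[spot] @ TRUE_W + rng.normal(scale=0.5, size=15)

    r = np.corrcoef(peer_spot, predict(metrics1[spot], w))[0, 1]
    print(f"spot-check correlation: {r:.2f}")
    if r < 0.7:  # illustrative threshold for triggering an update
        w = fit_weights(metrics1[spot], peer_spot)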
>
> As to the need to evaluate sub-disciplines independently: that  
> question
> exceeds the scope of metrics and metric validation.
>
>> Thus, I am a bit doubtful about your claims of an "empirically
>> demonstrated fact."
>
> Within the scope mentioned -- the RAE peer rankings, for disciplines
> such as they have been partitioned for the past two decades -- there
> are ample grounds for confidence in the empirical results to date.
>
> (And please note that this has nothing to do with journal impact
> factors, journal field classification, or journal rankings. It is
> about the RAE and the ranking of university departments by peer
> panels, as correlated with citation counts.)
>
> Stevan Harnad
> AMERICAN SCIENTIST OPEN ACCESS FORUM:
> http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
>     http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
>
> UNIVERSITIES and RESEARCH FUNDERS:
> If you have adopted or plan to adopt a policy of providing Open Access
> to your own research article output, please describe your policy at:
>     http://www.eprints.org/signup/sign.php
>     http://openaccess.eprints.org/index.php?/archives/71-guid.html
>     http://openaccess.eprints.org/index.php?/archives/136-guid.html
>
> OPEN-ACCESS-PROVISION POLICY:
>     BOAI-1 ("Green"): Publish your article in a suitable toll-access
>     journal
>     http://romeo.eprints.org/
> OR
>     BOAI-2 ("Gold"): Publish your article in an open-access journal
>     if/when a suitable one exists.
>     http://www.doaj.org/
>     http://www.doaj.org/
> AND
>     in BOTH cases self-archive a supplementary version of your article
>     in your own institutional repository.
>     http://www.eprints.org/self-faq/
>     http://archives.eprints.org/
>     http://openaccess.eprints.org/


