Open Access Metrics: Use REF2014 to Validate Metrics for REF2020

Stephen J Bensman notsjb at LSU.EDU
Thu Dec 18 20:02:45 EST 2014


It is a resurrection of the JIF, which in many ways is obsolete, but it is a way of standardizing measures. It captures key aspects of scientific value--particularly the importance of the review function. Measuring citations to individual papers is problematic because of subfield variation, which this approach corrects for by using a scale-standardized journal measure. The power-law distribution will hold for any field with a review function. A department may concentrate in a low-citation subfield, and this approach may be able to counteract that. Anyhow, it is a thought and--in my opinion--a good one. It will show how departments line up. You can tell the importance of a person by his number of review articles; Nobelists tend to dominate these.
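To make the decile idea concrete, here is a minimal sketch of how the placement might be computed, assuming one already has the JCR impact factors for a subject category and the journal IF of each of a department's publications (all data and names below are hypothetical):

# Sketch: place a department's publications in deciles of a JCR
# subject category's impact-factor distribution (hypothetical data).
import numpy as np

def if_decile_profile(category_ifs, pub_journal_ifs):
    """Return the share of a department's publications falling in each
    decile of the category's journal IF distribution (decile 10 = the
    high-IF tip, where the review journals sit)."""
    # Decile boundaries of the category's IF distribution (10th..90th percentiles).
    edges = np.percentile(category_ifs, np.arange(10, 100, 10))
    # Assign each publication's journal IF to a decile (1..10).
    deciles = np.searchsorted(edges, pub_journal_ifs, side="right") + 1
    return np.bincount(deciles, minlength=11)[1:] / len(pub_journal_ifs)

# Hypothetical example: two departments publishing in the same category.
np.random.seed(0)
category_ifs = np.random.pareto(2.0, 200) + 0.1   # power-law-like IFs
dept_a = np.random.choice(category_ifs, 50)
dept_b = np.random.choice(category_ifs[category_ifs > 1.0], 50)
print("Dept A:", if_decile_profile(category_ifs, dept_a).round(2))
print("Dept B:", if_decile_profile(category_ifs, dept_b).round(2))

The department whose publications mass in the upper deciles--nearest the review-journal tip--would, on this view, be having the greater impact.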

Stephen J. Bensman

_______________________________________
From: ASIS&T Special Interest Group on Metrics <SIGMETRICS at LISTSERV.UTK.EDU> on behalf of Mark C. Wilson <mc.wilson at AUCKLAND.AC.NZ>
Sent: Thursday, December 18, 2014 2:45 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] Open Access Metrics: Use REF2014 to Validate Metrics for REF2020


Hi

Unless I misunderstood, you are proposing comparing "academic units" by where their publications lie on the JIF distribution. Surely it would be better to see where the actual papers lie on the individual-paper citation distribution for each field. Hasn't the JIF been sufficiently discredited for measuring individual papers and researchers, e.g. by Brembs/Munafo: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3690355/ ?

Even aggregating authors into departments would produce much less reliable results than looking at the citations of the papers themselves, I guess. Is there perhaps a data-collection problem that led you to propose what I think you did?
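For concreteness, here is a rough sketch of the paper-level alternative I have in mind, assuming one has per-paper citation counts for the field as a whole (all data below are made up):

# Sketch: place each paper on its own field's per-paper citation
# distribution, rather than on the journal IF distribution.
import numpy as np

def citation_percentiles(field_citations, paper_citations):
    """Percentile rank of each paper within its field's
    per-paper citation distribution."""
    ranked = np.sort(field_citations)
    return np.searchsorted(ranked, paper_citations, side="right") / len(ranked) * 100

np.random.seed(0)
field = np.random.negative_binomial(1, 0.05, 5000)  # skewed citation counts
papers = np.array([0, 3, 25, 120])                  # hypothetical papers
print(citation_percentiles(field, papers).round(1))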

Dr Mark C. Wilson
Department of Computer Science, University of Auckland   |      www.cs.auckland.ac.nz/~mcw/blog/
Director, Centre for Mathematical Social Sciences: cmss.auckland.ac.nz  |       Managing Editor, OJAC: analytic-combinatorics.org
Please don't send me Microsoft Office attachments       |               I'm boycotting Elsevier - see thecostofknowledge.com


> On 19/12/2014, at 9:21, Stephen J Bensman <notsjb at LSU.EDU> wrote:
>
>> Just for the hell of it, I would like to propose a method for judging whether one university is doing better than another in a given discipline. It is based on the power-law model of Lotkaian informetrics and the impact factor. As you know, Garfield favored the impact factor because it corrected for physical and temporal size and brought to the top the review journals, whose importance lay at the basis of his theory of scientific progress and citation indexing. However, there is a significant correlation between current citation rate (the impact factor) and total citations, which are heavily influenced by temporal and physical size. That means that the older, bigger, more prestigious journals--the Matthew Effect--tend to have a higher IF.
>>
>> If you take a JCR subject category--bad as these things are--and graph the distribution of the journals in that category by impact factor, they will form a negative exponential power-law curve. Then take the publications of the two universities you want to compare. The university whose publications concentrate further to the right on the asymptote--particularly at the tip, where the review journals are--is having a greater impact on the discipline than the other one. You could even divide the asymptote into deciles for metric purposes. Simple, visible, and easily understood.
>>
>> Citations correlate very well with peer ratings--the higher the citations from documents with more citations themselves, the greater the correlations--as was proven by Narin and Page even at the semantic level.
>>
>> Respectfully,
>>
>> Stephen J Bensman, Ph.D.
>> LSU Libraries
>> Louisiana State University
>> Baton Rouge, LA 70803
>>
>>


