[OACI Working Group] Let's dumb-up (journal citation) impact factors

jcg jean.claude.guedon at UMONTREAL.CA
Sat Oct 23 10:34:25 EDT 2004


This is an excellent and informative comment, Stevan. Thank you.

Jean-Claude


On Sat October 23 2004 07:50 am, Stevan Harnad wrote:
> The following is a commentary on an editorial in the British Medical
> Journal entitled:
>
>     Let's dump impact factors
>     Kamran Abbasi, acting editor
>     BMJ  2004;329 (16 October), doi:10.1136/bmj.329.7471.0-h
>     http://bmj.bmjjournals.com/cgi/content/full/329/7471/0-h
>
> I've submitted the following commentary. It should appear Monday
> at:
>
>     http://bmj.bmjjournals.com/cgi/eletters/329/7471/0-h
>
> Prior Amsci Topic Thread:
>
>     "Citation and Rejection Statistics for Eprints and Ejournals"
>     http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0138.html
>     http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1112.html
>
> ----------------------------------------------------------------------
>
>         Enrich Impact Measures Through Open Access Analysis
>               (Or: "Let's Dumb-Up Impact Factors")
>
>                       Stevan Harnad
>
> The "journal impact factor" -- the average number of citations
> received by the articles in a journal -- is not an *invalid* instrument,
> but a blunt (and obsolescent) one. It does have some meaning and some
> predictive value (i.e., it is not merely a circular "definition" of
> impact), but we can do far better. Research impact evaluation should
> be thought of in multiple-regression terms: The journal impact factor
> is just one of many potential predictive factors, each with its own
> weight, and each adding a certain amount to the accuracy of the
> prediction/evaluation.
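>
> To make the multiple-regression framing concrete, here is a minimal
> sketch in Python; the predictor values, the quality ratings and the
> fitted weights are hypothetical illustrations, not real data:
>
>     # Minimal sketch of the multiple-regression view of impact
>     # evaluation. All numbers below are invented for illustration.
>     import numpy as np
>
>     # Each row is one article: [journal impact factor, article
>     # citation count, article download count]; y is a hypothetical
>     # expert quality rating we would like to predict.
>     X = np.array([
>         [2.1,  5, 120],
>         [0.8,  1,  40],
>         [5.4, 30, 900],
>         [3.2, 12, 300],
>         [1.5,  3,  75],
>     ], dtype=float)
>     y = np.array([3.0, 1.5, 4.8, 3.9, 2.2])
>
>     # Add an intercept column and fit ordinary least squares.
>     A = np.column_stack([np.ones(len(X)), X])
>     weights, *_ = np.linalg.lstsq(A, y, rcond=None)
>
>     # Each predictor gets its own regression weight; the journal
>     # impact factor is just one of them.
>     names = ["intercept", "journal_if", "citations", "downloads"]
>     print(dict(zip(names, weights.round(3))))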
>
> The journal impact factor is the first of the regression weights, but
> not because it is the biggest or strongest, but just because it came
> first in time: Gene Garfield (1955, 1999) and the Institute for
> Scientific Information (ISI) started to count citations (and citation
> immediacy, and other data) and produced an index of the average
> (2-year) citation counts of journals -- as well as the individual
> citation counts of articles and authors.
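>
> For concreteness, that 2-year figure is just a ratio; here is a
> minimal sketch with invented counts:
>
>     # ISI 2-year impact factor for year Y: citations received in Y
>     # by items the journal published in Y-1 and Y-2, divided by the
>     # number of citable items published in Y-1 and Y-2.
>     # The counts below are invented for illustration.
>     def two_year_impact_factor(citations_to_prev2, items_prev2):
>         return citations_to_prev2 / items_prev2
>
>     # e.g. 450 citations in 2004 to articles from 2002-2003, and 180
>     # citable articles published in 2002-2003:
>     print(two_year_impact_factor(450, 180))   # 2.5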
>
> The fact that unenterprising and unreflecting evaluation committees
> found it easier to simply weight their researchers' publication counts
> with the impact factors of the journals in which they appeared was due
> in equal parts to laziness and to the valid observation that journal
> impact factors *do* correlate, even if weakly, with journals' rejection
> rates, hence with the rigour of their peer review, and hence with the
> quality of their contents:
>
>     "High citation rates... and low manuscript acceptance rates...
>     appear to be predictive of higher methodological quality scores for
>     journal articles" (Lee et al. 2002)
>
>     "The majority of the manuscripts that were rejected... were eventually
>     published... in specialty journals with lower impact factor..." (Ray
>     et al. 2000)
>
>     "perceived quality ratings of the journals are positively correlated
>     with citation impact factors... and negatively correlated with
>     acceptance rate." (Donohue & Fox 2000)
>
>     "There was a high correlation between the rejection rate and the
>     impact factor" (Yamasaki 1995)
>
> But even then, the exact citation counts of individual articles and
> authors could have been added to the regression equation -- yet only
> lately are evaluation committees beginning to do this. Why? Again,
> laziness and unenterprisingness, but also effort and cost: an
> institution needs to subscribe to ISI's citation databases and to
> take the trouble to consult them systematically.
>
> But other measures -- richer and more diverse ones -- are developing,
> and with them the possibility of ever more powerful, accurate and
> equitable assessment and prediction of research performance and impact
> (Harnad et al. 2004). These measures (e.g. citebase
> http://citebase.eprints.org/) include: citation counts for article,
> author, and journal; download counts for article, author and journal;
> co-citation counts (who is jointly cited with whom?); eventually
> co-download counts (what is being downloaded with what?); analogs of
> Google's "page-rank" algorithm (recursively weighting citations by the
> weight of the citing work); "hub/authority" analysis (much-cited vs.
> much-citing works); co-text "semantic" analysis (what -- and whose --
> text patterns resemble the cited work?); early-days download/citation
> correlations (http://citebase.eprints.org/analysis/correlation.php)
> (downloads today predict citations in two years; Harnad & Brody
> 2004); time-series analyses; and much more.
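>
> As one illustration, the "page-rank"-style weighting just mentioned
> can be sketched on a toy citation graph; the five-paper graph and
> the 0.85 damping factor below are purely illustrative assumptions:
>
>     # Toy "page-rank"-style weighting over a citation graph: a
>     # citation counts for more when it comes from a work that is
>     # itself highly weighted. Graph and damping factor are invented.
>     import numpy as np
>
>     papers = ["A", "B", "C", "D", "E"]
>     # cites[p] lists the papers that paper p cites (hypothetical).
>     cites = {"A": ["B", "C"], "B": ["C"], "C": [],
>              "D": ["C", "A"], "E": ["A"]}
>
>     n = len(papers)
>     idx = {p: i for i, p in enumerate(papers)}
>     d = 0.85                     # damping factor
>     rank = np.full(n, 1.0 / n)
>
>     for _ in range(50):          # simple power iteration
>         new = np.full(n, (1 - d) / n)
>         for p, refs in cites.items():
>             if refs:             # spread weight over cited papers
>                 share = d * rank[idx[p]] / len(refs)
>                 for q in refs:
>                     new[idx[q]] += share
>             else:                # no references: spread uniformly
>                 new += d * rank[idx[p]] / n
>         rank = new
>
>     print(sorted(zip(papers, rank.round(3)), key=lambda t: -t[1]))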
>
> So the ISI journal-impact factor is merely a tiny dumbed-down portion
> of the rich emerging spectrum of objective impact indicators; it now
> needs to be dumbed-up, not dumped! Two things need to be kept in mind
> in making pronouncements about the use of such performance indicators:
>
>     (i) Consider the alternative! The reason we resort to objective
>     measures at all is that reading and evaluating every single work
>     anew each time it needs to be evaluated is not only subjective
>     but labour-intensive, and requires at least the level of expertise
>     and scrutiny that (one hopes!) the journal peer review itself has
>     accorded the work once already, in a world in which qualified
>     refereeing time is an increasingly scarce, freely-given resource,
>     stolen from researchers' own precious research time. Citations (and
>     downloads) indicate that researchers have found the work in question
>     useful in their own research.
>
>     (ii) The many new forms of impact analysis can now be done
>     automatically, without having to rely on ISI -- if and when
>     researchers make all their journal articles Open Access, by
>     self-archiving them in OAI-compliant Eprint Archives on the
>     Web. Remarkable new scientometric engines are just waiting for that
>     open database to be provided in order to add the rich new panoply
>     of impact measures promised above (Harnad et al. 2003).
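>
>     A minimal sketch of what "automatically" can mean here:
>     harvesting the metadata of such an archive through the standard
>     OAI-PMH ListRecords request (the base URL below is a
>     placeholder, not a real archive):
>
>         # Harvest Dublin Core metadata from an OAI-compliant archive
>         # via OAI-PMH. BASE_URL is a placeholder; substitute the
>         # OAI-PMH endpoint of any real eprint archive.
>         import urllib.request
>         import xml.etree.ElementTree as ET
>
>         BASE_URL = "http://archive.example.org/cgi/oai2"
>         OAI = "{http://www.openarchives.org/OAI/2.0/}"
>         DC = "{http://purl.org/dc/elements/1.1/}"
>
>         url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"
>         with urllib.request.urlopen(url) as response:
>             tree = ET.parse(response)
>
>         # Each record's identifier and title: the raw material for
>         # the scientometric analyses sketched above.
>         for record in tree.iter(OAI + "record"):
>             ident = record.findtext(".//" + OAI + "identifier", "")
>             title = record.findtext(".//" + DC + "title", "(none)")
>             print(ident, "--", title)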
>
> Donohue JM, Fox JB (2000) A multi-method evaluation of journals in the
> decision and management sciences by US academics. OMEGA-INTERNATIONAL
> JOURNAL OF MANAGEMENT SCIENCE 28 (1): 17-36
>
> Garfield, E., (1955) Citation Indexes for Science: A New Dimension in
> Documentation through Association of Ideas. SCIENCE 122: 108-111
> http://www.garfield.library.upenn.edu/papers/science_v122(3159)p108y1955.html
>
> Garfield E. (1999) Journal impact factor: a brief review. CMAJ 161(8):
> 979-80. http://www.cmaj.ca/cgi/content/full/161/8/979
>
> Harnad, S. and Brody, T. (2004) Prior evidence that downloads predict
> citations BMJ Rapid Responses, 6 September 2004
> http://bmj.bmjjournals.com/cgi/eletters/329/7465/546#73000
>
> Harnad, S., Brody, T., Vallieres, F., Carr, L., Hitchcock, S.,
> Gingras, Y., Oppenheim, C., Stamerjohanns, H., & Hilf, E. (2004) The
> Access/Impact Problem and the Green and Gold Roads to Open Access.
> SERIALS REVIEW 30. http://www.ecs.soton.ac.uk/~harnad/Temp/impact.html
>
> Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online
> RAE CVs Linked to University Eprint Archives: Improving the UK
> Research Assessment Exercise whilst making it cheaper and easier.
> ARIADNE 35 (April 2003). http://www.ariadne.ac.uk/issue35/harnad/
>
> Lee KP, Schotland M, Bacchetti P, Bero LA (2002) Association of
> journal quality indicators with methodological quality of clinical
> research articles. JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 287
> (21): 2805-2808
>
> Ray J, Berkwits M, Davidoff F (2000) The fate of manuscripts rejected
> by a general medical journal. AMERICAN JOURNAL OF MEDICINE 109 (2):
> 131-135.
>
> Yamazaki S (1995) Refereeing System of 29 Life-Science Journals
> Preferred by Japanese Scientists. SCIENTOMETRICS 33 (1): 123-129
>
>
> Visit the List Archives at:
>
> http://mailhost.soros.org/pipermail/oaci-working-group/


