Studies showing that review articles get more citations

Jean-Claude Guédon jean.claude.guedon at UMONTREAL.CA
Tue Feb 23 14:02:15 EST 2010


It is often said among editors that, if you want to raise the impact
factor of a journal, you should increase the number of review
articles.

The discussion here is about impact, not impact factor. However,
"Impact factors: Use and Abuse" by Amin and Mabe, published in the 1st
issue of "Publishing Perspectives" in 2000 and reissued in 2007,
includes an interesting curve that may explain both sides of the
issue, impact as well as impact factor.

     1. Fig. 3, "Impact factors and journal type", shows that reviews
        see their citations accumulate much faster and climb much
        higher than letters and full papers. This means that, within
        the two-year window of the impact factor, review articles will
        collect more citations than other types of articles, and this
        will translate favourably into the ranking of the journal;
     2. Because the curve for reviews is much higher than that for
        letters or full papers, the area beneath the curve is also much
        greater, which means that impact is also higher in this case (a
        short sketch follows this list).
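
Here is a minimal Python sketch of these two points. The yearly
citation counts are hypothetical placeholders (they are not taken from
Amin and Mabe's figure); the point is only that a curve which peaks
early and high wins within the two-year impact-factor window, while a
larger area beneath the curve wins on total impact.

    # Hypothetical yearly citation counts for one article of each type,
    # years 1..10 after publication (illustrative numbers only).
    review     = [4, 10, 9, 7, 5, 4, 3, 2, 2, 1]
    full_paper = [1, 3, 3, 2, 2, 1, 1, 1, 0, 0]

    def if_window_citations(cites):
        # Citations falling inside the two-year impact-factor window
        # (years 1 and 2 after publication).
        return sum(cites[:2])

    def total_citations(cites):
        # "Area beneath the curve": all citations ever received.
        return sum(cites)

    for name, cites in [("review", review), ("full paper", full_paper)]:
        print(name, "IF window:", if_window_citations(cites),
              "total:", total_citations(cites))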

Alas, the authors do not give a source for their graph, and one is left
with a question: is this a graph meant to explain a phenomenon that
still needs to be documented, or does it really reflect some empirical
measurement? Reference is made to a study of some 4,000 journals, but
details are few.

The article in question comes from an Elsevier publication and may have
to be taken with a grain of salt. Here is the URL:

www.elsevier.com/framework_editors/pdfs/Perspectives1.pdf 

On a different, but related, topic, viz. the errors attached to
impact-factor measurements, Amin and Mabe do make an interesting
comment when they say that journals whose impact factors differ by
less than 25% should be lumped together in the same category (p. 5).
They also note that the impact factors of journals fluctuate from year
to year by as much as 40% for smaller journals, and by 15% for
journals that publish greater numbers of articles (more than 150 per
year). Observed and random fluctuations, furthermore, are very similar
(fig. 4b). This points to rather large errors attached to these
measurements. Perhaps impact factors should be expressed with only
one, or perhaps two, significant figures...
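
Taking the significant-figure suggestion literally is straightforward;
the small Python sketch below (with made-up impact-factor values, for
illustration only) simply rounds a reported value to one or two
significant figures.

    from math import floor, log10

    def round_sig(x, sig):
        # Round x to `sig` significant figures.
        if x == 0:
            return 0.0
        return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

    # Made-up impact-factor values, for illustration only.
    for impact_factor in [3.142, 0.87, 12.6]:
        print(impact_factor, "->", round_sig(impact_factor, 1),
              "or", round_sig(impact_factor, 2))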


On Tuesday, 23 February 2010, at 09:06 -0500, Stevan Harnad wrote:
> 
> On Tue, Feb 23, 2010 at 8:51 AM, Tom Wilson <wilsontd at gmail.com> wrote:
> 
> >  Is it really worth exploring?
> >
> >  I'd have thought it self-evident that, if you are looking for a review of
> > the literature, as most authors are, you'll cite existing reviews; similarly
> > with methodology - if you are using a particular theoretical perspective
> > you'll want to cite others as confirmation that you are on the right track.
> >  One of the problems of bibliometrics appears to be a stunning facility for
> > determining the obvious :-)
> 
> It is obvious that reviews will cite reviews, and that authors will
> cite supporting studies, but is it obvious that reviews are cited more
> than ordinary articles? Perhaps; but it would still be nice to see the
> evidence. Especially nice to see the evidence for review *articles* --
> relative to ordinary articles -- separated from the evidence for
> review *journals* relative to ordinary journals.
> 
> There has also been some evidence that articles that cite more
> references get more citations. Review articles usually cite more
> references than ordinary articles (indeed, that is one of the criteria
> ISI uses for classifying articles as reviews!). It would be nice to
> partial out the respective contributions of these factors too (along,
> of course, with self-citations, co-author citations, citation circles,
> etc.).
> 
> The outcomes may well continue to confirm the obvious, but it will
> still be nice to have the objective data at hand... :-)
> 
> Stevan Harnad
> 
> > Tom Wilson
> >
> > On 23 February 2010 12:23, Jacques Wainer <wainer at ic.unicamp.br> wrote:
> >>
> >> I used:
> >>
> >> @Article{reviewpap1,
> >>   author  = {Aksnes, D. W.},
> >>   title   = {Citation rates and perceptions of scientific contribution},
> >>   journal = {Journal of the American Society for Information Science
> >>              and Technology},
> >>   year    = 2006,
> >>   key     = 2,
> >>   volume  = 57,
> >>   pages   = {169-185},
> >>   doi     = {10.1002/asi.20262}}
> >>
> >>
> >> @Article{reviewpap3,
> >>   author  = {H. P. F. Peters and A. F. J. van Raan},
> >>   title   = {On determinants of citation scores: A case study in
> >>              chemical engineering},
> >>   journal = {Journal of the American Society for Information Science},
> >>   year    = 1994,
> >>   volume  = 45,
> >>   number  = 1,
> >>   pages   = {39-49}}
> >>
> >>
> >> as two references to the phenomenon. In this line, does anyone know
> >> of studies that point out that METHODOLOGICAL papers are also cited more
> >> than other research?
> >>
> >> Thanks
> >>
> >> Jacques Wainer
> >
> >
> >
> > --
> > ----------------------------------------------------------
> > Professor Tom Wilson, PhD, PhD (h.c.),
> > -----------------------------------------------------------
> > Publisher and Editor in Chief: Information Research: an international
> > electronic journal
> > Website - http://InformationR.net/ir/
> > Blog - http://info-research.blogspot.com/
> > Photoblog - http://tomwilson.shutterchance.com/
> > -----------------------------------------------------------
> > E-mail: wilsontd at gmail.com
> > -----------------------------------------------------------
> >


