identification of review articles
Ludo Waltman
ludo at LUDOWALTMAN.NL
Fri May 7 04:36:58 EDT 2010
Dear Linda,
Some time ago I performed an (unpublished) analysis of the accuracy of
the identification of reviews in Web of Science. At least in some
fields (subject categories) the distinction between ordinary articles
and reviews is inaccurate. Consider the field of management. The
attached Figure 1 shows the distribution of publications in the field
of management based on their number of references. Ordinary articles
are indicated in blue and reviews in red. As can be seen, in the field
of management almost all publications with more than 100 references
are classified as reviews, while almost no publications with less than
100 references are classified as reviews. It is of course extremely
unlikely that this is a correct classification. One would expect the
proportion of reviews to be a gradually increasing function of the
number of references. Instead, the figure shows a sudden increase at
100 references. Similar observations can be made for other fields,
although management seems to be quite an extreme case. Pharmacology &
pharmacy is an example of a field with a much more gradually
increasing proportion of reviews (see the attached Figure 2). So in
this field the distinction between ordinary articles and reviews may
be more accurate.
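For anyone who would like to try this on their own data, the analysis above can be sketched roughly as follows: bin publications by their number of cited references and compute the proportion classified as reviews in each bin. The function and the synthetic data below are illustrative assumptions (not the actual data behind Figures 1 and 2); the records would in practice come from a Web of Science or Scopus export.

```python
# Sketch of the analysis described above: given per-publication reference
# counts and document types, compute the proportion of reviews in each
# reference-count bin. The data here are synthetic and only mimic the
# pattern seen in Figure 1 (a sudden jump at 100 references).
from collections import defaultdict

def review_proportion_by_refs(records, bin_width=10):
    """records: iterable of (n_references, doc_type) pairs, where
    doc_type is 'Article' or 'Review'.
    Returns {bin_start: proportion of reviews in that bin}."""
    totals = defaultdict(int)
    reviews = defaultdict(int)
    for n_refs, doc_type in records:
        b = (n_refs // bin_width) * bin_width  # lower edge of the bin
        totals[b] += 1
        if doc_type == 'Review':
            reviews[b] += 1
    return {b: reviews[b] / totals[b] for b in sorted(totals)}

# Synthetic example: no reviews below 100 references, only reviews above.
sample = ([(n, 'Article') for n in (20, 45, 60, 85, 95)]
          + [(n, 'Review') for n in (105, 120, 150)])
for b, p in review_proportion_by_refs(sample, bin_width=50).items():
    print(b, round(p, 2))
# A step from 0.0 to 1.0 at the 100-reference bin, as in this synthetic
# example, would suggest a hard threshold rather than a genuine gradual
# increase in the proportion of reviews.
```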
Best regards,
Ludo Waltman
========================================================
Ludo Waltman MSc
Researcher
Centre for Science and Technology Studies
Leiden University
P.O. Box 905
2300 AX Leiden
The Netherlands
Willem Einthoven Building, Room B5-35
Tel: +31 (0)71 527 5806
Fax: +31 (0)71 527 3911
E-mail: waltmanlr at cwts.leidenuniv.nl
Homepage: www.ludowaltman.nl
========================================================
Quoting Linda Butler <linda.butler at ANU.EDU.AU>:
>
> I'm hoping someone on the list may be able to help with this query ...
>
> Until now, I have often used separate field-normalised benchmarks
> for articles and reviews. However some recent work I have
> undertaken has made me question the wisdom of this. My
> understanding is that Scopus and WoS both classify a publication as
> a 'review' if it contains more than 100 references. I hadn't thought
> too closely about this methodology until I recently came across
> some articles that both Scopus and WoS have classified as reviews,
> but which appear to be standard research articles (though with lots
> of references). I'm now beginning to wonder whether I should
> continue to use separate benchmarks for articles and reviews. If
> it is only one or two papers that crop up in a macro-level
> analysis, then I won't be too concerned. But if there is a
> question mark over the accuracy of this method for identifying
> reviews, and the problem is more common than that, then I will need to
> rethink my methodology.
>
> Does anyone know of any empirical studies that have examined the
> accuracy of this method for classifying a publication as a review?
>
> Or even if you don't know of any studies, have you come across
> similar concerns in any analyses you have undertaken?
>
> with thanks
> Linda Butler
-------------- next part --------------
[Attachment: Figure1.png, image/x-png, 23576 bytes]
URL: <http://mail.asis.org/pipermail/sigmetrics/attachments/20100507/d42c2a0f/attachment.bin>
-------------- next part --------------
[Attachment: Figure2.png, image/x-png, 24081 bytes]
URL: <http://mail.asis.org/pipermail/sigmetrics/attachments/20100507/d42c2a0f/attachment-0001.bin>