CWTS Journal Indicators
Nees Jan van Eck
info at VOSVIEWER.COM
Mon Sep 30 03:37:35 EDT 2013
Dear colleagues,
Thank you all for your feedback on CWTS Journal Indicators.
Let me take this opportunity to respond to some of your remarks:
Loet Leydesdorff: Yes, a disadvantage of SNIP and other new journal impact
measures is their limited transparency. The calculation of SNIP and other
measures is documented in scientific papers and in that sense is quite
transparent, but in practice the calculations are indeed very difficult to
replicate. This is mainly an issue of data access. If at some point
large-scale access to bibliographic databases becomes more widely available,
replication would become more feasible. A development in this direction
would definitely be desirable. At the same time, the situation is not so
different for the journal impact factor: replicating that measure likewise
requires large-scale data access and is not always straightforward. The same
holds for calculating and replicating your own recently proposed measures.
Wilfred Mijnhardt: I am not sure whether I understand your suggestion
correctly. Journals in the highest decile of their field, whether based on
Web of Science/journal impact factor or on Scopus/SNIP, need not have
reliable values. CWTS Journal Indicators, for instance, shows a number of
journals with a very high SNIP value that at the same time have a wide
stability interval, which indicates relatively low reliability.
Henk Moed: Thank you for your valuable remarks on the origin of the SNIP
concept. Your remarks offer a clear description of the relationship between
the original SNIP indicator you developed and the revised SNIP indicator
currently produced by CWTS.
Yves Gingras: In some cases, I believe that working with moving averages
would make a lot of sense. However, this is something which mainly depends
on the size of journals. For a very large journal such as PLoS ONE, yearly
statistics are definitely meaningful and there is no need to work with
moving averages. For small journals, the value of yearly statistics is more
questionable. CWTS Journal Indicators offers stability intervals precisely to
deal with this difficulty. These intervals provide the information
necessary to determine whether yearly fluctuations are likely to be due to
chance or whether they are really meaningful. By default, CWTS Journal
Indicators also does not report statistics for journals with fewer than 50
publications. This is another way in which we try to prevent
overinterpretation of the values we provide. At the same time, I believe
that as bibliometricians we should in general not omit certain types of
information because they could potentially be misinterpreted or misused. We
should put a lot of effort into properly informing and educating the end
users of our statistics, but the final responsibility for the use of our
statistics should lie with the end user.
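To make this concrete, below is a minimal sketch of one way such a stability
interval can be computed: a plain percentile bootstrap over a journal's
citation counts. The function name, parameters, and toy data are illustrative
assumptions, and the actual computation behind CWTS Journal Indicators may
differ in its details.

import random

def stability_interval(citations, n_samples=1000, level=0.95, seed=42):
    # Percentile bootstrap: resample the journal's publications with
    # replacement and recompute the mean citations per paper each time.
    rng = random.Random(seed)
    n = len(citations)
    means = sorted(sum(rng.choices(citations, k=n)) / n
                   for _ in range(n_samples))
    lower = means[int(n_samples * (1 - level) / 2)]
    upper = means[int(n_samples * (1 + level) / 2) - 1]
    return lower, upper

# A small journal whose score is driven by one very highly cited paper
# yields a wide interval, signalling low reliability of the mean.
citations = [0, 1, 1, 2, 3, 120]
print(stability_interval(citations))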
Sylvan Katz: At CWTS, we have not investigated in detail the properties of
the citation distributions underlying the SNIP calculation. In general, however,
citation distributions do not exactly follow a power law (I am assuming that
this is what you mean by a scaling distribution), although their tail may
have power law properties, at least in an approximate sense. Given the
skewed nature of citation distributions, I believe that an important next
step in the development of journal impact measures could be to develop
properly normalized measures that do not rely on averages (as is also
suggested by Loet). This is for instance the approach that is taken in the
CWTS Leiden Ranking, where by default universities are ranked based on the
PP(top 10%) indicator. However, when average-based measures such as SNIP are
complemented with stability intervals, I believe that this offers a
sufficiently robust approach to deal with the skewed nature of citation
distributions.
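As a rough illustration of an indicator that does not rely on averages, here
is a sketch of a PP(top 10%)-style calculation. The helper name and toy
numbers are hypothetical, and the actual Leiden Ranking computation is
considerably more refined (field delimitation, fractional counting, handling
of ties).

def pp_top10(unit_citations, field_citations):
    # Threshold at the field's 90th citation percentile; papers at or
    # above it count as top-10% publications.
    ranked = sorted(field_citations)
    threshold = ranked[int(0.9 * len(ranked))]
    top = sum(1 for c in unit_citations if c >= threshold)
    return top / len(unit_citations)

# Toy data: a field of 20 papers, 5 of which belong to the unit.
field = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 6, 7, 8, 10, 12, 15, 30, 80]
unit = [1, 4, 12, 30, 80]
print(pp_top10(unit, field))  # 0.4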
Best regards,
Nees Jan
-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Sylvan Katz
Sent: Friday, September 27, 2013 12:41 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Journal Indicators
RIP/SNIP Indicator Question
Does anyone know whether the distribution of citations in the current year
to papers in the three preceding years is a scaling distribution?
Sylvan Katz
Visiting Research Fellow
http://www.sussex.ac.uk/Users/sylvank/
On 9/26/2013 12:15 PM, Yves Gingras wrote:
> Hello all
>
> In thinking about publishing _every year_ an indicator we clearly know
> will be used for evaluation purposes, and thus can (and will) have
> concrete effects on researchers, one should also think about the meaning
> of calculating such an indicator _every year_, as if a change in a
> single year made sense and could be useful for evaluations. It would be
> much more meaningful to calculate a moving average based on 2 or 3 years
> (for example). There is a real danger in trying to interpret a
> variation over a single year, as it may be subject to meaningless
> stochastic fluctuations and then be used by administrators as if it meant
> something real. Such an interpretation can in turn generate decisions
> that are not rational.
>
> One can understand that a business (like THES or QS, for example) wants
> to be visible every year with a product ranking institutions, but as
> scholars we should make clear that annual variations in a complex system
> like science are rarely meaningful.
>
> Publishing and announcing a simple yearly value for an indicator can
> thus generate perverse effects and contribute to the fever of
> evaluation by administrators of research.
>
> Best regards
>
> Yves Gingras
>
>
> On 25/09/13 12:18, "Éric Archambault"
> <eric.archambault at SCIENCE-METRIX.COM> wrote:
>
> Dear Nees Jan
>
> Thank you for this addition to the growing list of journal
> indicators. Having a publicly accessible list of scores like this is
> really important and will play an important role in the debate on
> journal impact. Having rigorous researchers such as those at CWTS
> pursue this line of work, initiated by Michel Zitt and Henry Small and
> continued by Moed, is certainly useful.
>
> However, I feel this is still at the stage of a research project, and
> we should characterize our indicators carefully before telling the
> wider community that they are ready for prime time. We can't afford
> to have any more flaky journal impact indicators. This is now the
> fourth proposal for such an indicator, after the Journal Impact
> Factor, the SCImago indicator, and the Eigenfactor. In this context,
> allowing both practitioners and users to decide which one has the
> greatest scientific merit is essential. This requires that the
> methods and all ingredients be known to users and practitioners.
> Your paper is useful for understanding the recipe, but some
> ingredients are missing from the public disclosure, and these need
> to be made public to help the community characterize your tool.
>
> In particular, I think a few more details on the methods would be
> useful here. Firstly, more details about the bootstrapping method
> that you use to compute the stability intervals would be welcome.
> Have you written a paper on this technique? Secondly, an additional
> column with the field of each journal would make the tool more
> transparent and useful to users.
>
> Do you have an explanation for the following behavior? The Journal
> of Engineering Education is one of the source journals with the
> highest SNIP in 2008. Is this an artifact, or did the journal really
> become that much better in 2008 compared to 2007? I find the jump
> surprising, as it lies outside the boundaries that you calculated.
> Of course, it is not impossible to fall outside these by chance; it
> is just that the jump is rather large.
>
> Kind regards
>
> Eric
>
> Source title: Journal of Engineering Education (Journal, Print ISSN 1069-4730)
>
> Year    P      SNIP   SNIP (lower)   SNIP (upper)   % self cit
> 2002  207   7.901979         6.625          9.355          16%
> 2003  215   6.587213         5.393          7.838          11%
> 2004  194   9.710727         7.719         11.838           9%
> 2005  133   2.498504         1.685          3.463          48%
> 2006  111   4.458215         3.042          6.121          16%
> 2007   98   6.650165         4.437          9.274          23%
> 2008   93  20.62702         14.286         28.396          10%
> 2009   75  15.92148         12.191         20.305          16%
> 2010   77  16.12523         12.181         20.454          14%
> 2011   76  16.1012          11.783         21.15           14%
> 2012   90  12.49939          9.933         15.098           7%
>
>
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Nees Jan van Eck
> Sent: September-25-13 10:09 AM
> To: SIGMETRICS at LISTSERV.UTK.EDU
> Subject: [SIGMETRICS] CWTS Journal Indicators
>
> The 2012 SNIP values have been released on CWTS Journal Indicators
> (www.journalindicators.com). SNIP (source normalized impact per
> paper) is a freely available journal impact indicator that uses a
> source normalization mechanism to correct for differences in
> citation practices between fields of science. Compared with the
> journal impact factor, SNIP allows for more accurate comparisons
> between journals active in different scientific fields. SNIP is
> calculated by CWTS based on Elsevier's Scopus database. With the
> release of the 2012 SNIP values, stability intervals have been added
> to CWTS Journal Indicators. These intervals indicate the reliability
> of the SNIP value of a journal. For instance, if a journal's SNIP
> value is largely due to a single very highly cited publication, this
> is indicated by a wide stability interval. SNIP is the only freely
> available journal impact indicator that is presented with stability
> intervals.
>
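For readers unfamiliar with the source-normalization idea mentioned above,
the following sketch shows a SNIP-like calculation in the spirit of Moed's
proposal: a journal's raw impact per paper is divided by the citation
potential of its field, estimated from the reference-list lengths of the
citing papers, relative to a database-wide median. The function names and
numbers are illustrative assumptions, not the actual CWTS implementation.

def citation_potential(ref_list_lengths):
    # Mean reference-list length of the papers citing the journal;
    # fields with longer reference lists offer more citation potential.
    return sum(ref_list_lengths) / len(ref_list_lengths)

def snip_like(citations, n_papers, ref_list_lengths, median_potential):
    # Raw impact per paper (citations in year Y to papers from Y-3..Y-1,
    # divided by the number of papers), scaled by the field's citation
    # potential relative to the database-wide median.
    rip = citations / n_papers
    return rip / (citation_potential(ref_list_lengths) / median_potential)

# Toy example: a field with short reference lists (e.g. mathematics)
# gets its raw impact scaled up.
print(snip_like(citations=300, n_papers=150,
                ref_list_lengths=[12, 15, 9, 20, 14],
                median_potential=28))  # prints 4.0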
> Your feedback on CWTS Journal Indicators is greatly appreciated.
>
> Best regards,
> Nees Jan van Eck
>
> ========================================================
> Nees Jan van Eck PhD
> Researcher
> Head of ICT
>
> Centre for Science and Technology Studies
> Leiden University
> P.O. Box 905
> 2300 AX Leiden
> The Netherlands
>
> Willem Einthoven Building, Room B5-35
> Tel: +31 (0)71 527 6445
> Fax: +31 (0)71 527 3911
> E-mail: ecknjpvan at cwts.leidenuniv.nl
> Homepage: www.neesjanvaneck.nl
> VOSviewer: www.vosviewer.com
> ========================================================
>
>
>
>
> Yves Gingras
>
> Professeur
> Département d'histoire
> Centre interuniversitaire de recherche
> sur la science et la technologie (CIRST)
> Chaire de recherche du Canada en histoire
> et sociologie des sciences
> Observatoire des sciences et des technologies (OST)
> UQAM
> C.P. 8888, Succ. Centre-Ville
> Montréal, Québec
> Canada, H3C 3P8
>
> Tel: (514)-987-3000-7053
> Fax: (514)-987-7726
>
> http://www.chss.uqam.ca
> http://www.cirst.uqam.ca
> http://www.ost.uqam.ca