CWTS Journal Indicators

Moed, Henk (ELS-AMS) H.Moed at ELSEVIER.COM
Thu Sep 26 07:22:22 EDT 2013



Comment on: New CWTS Indicators

The current work at CWTS on journal metrics is impressive and deserves our full respect; adding stability intervals to SNIP values is a great step forward. SNIP (Source Normalized Impact per Paper) was launched in 2010 and is freely available via the CWTS website and via Scopus. Development of sophisticated indicators is most valuable from a methodological viewpoint, and shows the outside world that the developers are experts who know what they are doing. But I fully agree that one should also have an eye for the wider community: its information needs, its level of expertise, and the manner and context in which it uses the indicators. We are all aware that journal metrics are used not only for journal evaluation but also for assessing the research performance of individual researchers.

Please allow me, as the inventor of the SNIP concept, to briefly explain its background and strategic objective. The concept is largely based on notions Eugene Garfield put forward in his early and later papers. Its development started from the assumption that the "official" journal impact factor (JIF) has become far too dominant, and has even obtained an absolute status: not only is it seen as the only valid and usable journal metric, but it is also more and more being used for purposes it was never designed for, such as the assessment of individuals.

Development of a new metric should above all abandon the idea of creating "the final", perfect measure, simply because such a metric does not exist, given the multi-dimensionality of concepts such as journal quality or research quality. The aim of the new metric was not to replace the JIF and thus acquire a status as absolute as the old metric's, but rather to illustrate the relativity of the old metric, and of journal metrics in general. This relativity was illustrated by showing how the numerical value of the JIF largely depends on "extrinsic" factors that have little to do with journal or research quality as such.

SNIP aims to highlight two such factors. The most important one is the frequency with which researchers cite other, recently published articles in their papers. In some subject fields reference lists are long and most of the cited references are 0-5 years old; in other fields reference lists are short and cited references cover a wide range of publication years. In the first group of fields, citation levels and journal impact factors tend to be much higher than in the second. A second factor is the extent to which the database in which the calculations take place covers a subject field. In subject fields that are not well covered, citation levels and impact factors tend to be lower.

A journal's SNIP, then, is defined as its JIF (more precisely, a measure very similar to the JIF; I leave out the details here) divided by a measure indicating the citation potential of the journal's subject field. In fields with high citation levels and/or good database coverage, the citation potential is above 1, so the SNIP value is lower than the JIF. Similarly, in fields with low citation frequencies or moderate database coverage, the citation potential is below 1, and SNIP goes up compared to the JIF. In fact, according to the original SNIP concept, SNIP goes down for 50% of journals and up for the other 50%. In this manner any user can directly see how the JIF - and the journal rankings based on it - change if one takes the two factors outlined above into account.
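
To make the division concrete, here is a minimal sketch of the relation just described. It is illustrative only: the values are invented, and the actual computation (Moed, 2010, Journal of Informetrics) uses a specific raw-impact measure and a specific definition of citation potential that this sketch does not reproduce.

# Illustrative sketch of the original SNIP idea: a JIF-like raw impact
# per paper divided by the citation potential of the journal's field.
# All numbers are invented for illustration.

def snip(raw_impact_per_paper: float, citation_potential: float) -> float:
    """A JIF-like impact corrected for the field's citation potential.

    The citation potential exceeds 1 in fields with long, recent
    reference lists and good database coverage, and is below 1 in
    low-citation or poorly covered fields.
    """
    return raw_impact_per_paper / citation_potential

print(snip(4.0, 1.6))   # 2.5 -- high-citation field: SNIP below the JIF-like value
print(snip(1.2, 0.5))   # 2.4 -- low-citation field: SNIP above the JIF-like value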

The new version of SNIP proposed by CWTS is more sophisticated than the original one. I will not discuss the technicalities here, even though I am inclined to question the statistical validity of some of its new features. Much more important in this context, in my view, is that although the new version of SNIP has lost its direct link to the journal impact factor in the manner I outlined above, it can still be interpreted and explained as a corrected journal impact factor, based on plausible considerations that are widely understood and supported in the user community. It clearly illustrates the JIF's relativity: how JIF values change if one eliminates the two extrinsic factors, and how the rankings based on them actually change significantly, not only across but also within scientific disciplines. The features newly developed at CWTS - namely, accounting for other factors such as outliers, and providing upper and lower bounds for SNIP - are important steps forward, especially if their effects are linked directly to the standard journal impact factors in a way the user community can easily understand.

Henk F. Moed
Senior Scientific Advisor and Director, Informetric Research Group
Elsevier, Amsterdam, The Netherlands


From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
Sent: 26 September 2013 11:43
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] CWTS Journal Indicators

> This is now the fourth proposition for such an indicator, after the Journal Impact Factor, the SCImago indicator, and the Eigenfactor. In this context, allowing both practitioners and users to decide which one seems to have the greatest scientific merit is essential.
Dear Eric,
If one has a parameter space, the number of options grows rapidly. Without wishing to doubt for a moment the good intentions of the developers of the indicators, it seems to me that leaving the choice to users is a poor option.
The IF is transparent; it is a kind of two-year moving average, and everybody can reproduce it after collecting the data. The Eigenfactor, however, looks like the value on the first eigenvector (eigenvector centrality), but it is not precisely that; it seems irreproducible to me for a user or other members of the community. The SJR is derived from PageRank, but with modifications, so that nobody is able to reproduce it using standard software. SNIP is an in-house product of CWTS/Elsevier; see my recent letter in JoI about the "revised SNIP". This is all becoming fully opaque, while there are clear network measures such as indegree, PageRank, etc. [For an overview, see: How are new citation-based journal indicators adding to the bibliometric toolbox?<http://www.leydesdorff.net/journal_indicators/index.htm>, Journal of the American Society for Information Science and Technology 60(7) (2009) 1327-1336; <pdf-version<http://www.leydesdorff.net/journal_indicators/journal%20indicators.pdf>>].
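
To make the reproducibility contrast concrete, here is a minimal sketch, with invented numbers, of the two computations that anyone can rebuild with standard tools: a JIF-style two-year ratio, and eigenvector centrality on a journal-to-journal citation matrix. The modified indicators discussed above cannot be reconstructed this simply, which is the point.

import numpy as np

# Toy data: citations received this year by items the journal published
# in the two preceding years, and the number of citable items in those
# years (all values invented).
citations_to_prev_two_years = 420
citable_items_prev_two_years = 150

# The JIF-style two-year ratio: anyone holding the data can reproduce it.
print(f"two-year IF-style ratio: "
      f"{citations_to_prev_two_years / citable_items_prev_two_years:.2f}")

# Eigenvector centrality on a toy journal-to-journal citation matrix,
# C[i, j] = citations from journal i to journal j (invented numbers).
C = np.array([[0, 10,  2],
              [8,  0,  5],
              [1,  4,  0]], dtype=float)
eigenvalues, eigenvectors = np.linalg.eig(C.T)  # score journals by incoming citations
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
centrality = np.abs(principal) / np.abs(principal).sum()
print("eigenvector centrality:", np.round(centrality, 3))
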
In my opinion, one should avoid central tendency statistics in the case of these highly-skewed distributions. Lutz and I argued for integration of citation curves after normalization in terms of percentiles (Integrated Impact Indicator, I3). SNIP elaborates on fractional counting and thus helps to solve normalization issues. SJR is non-parametric. These are advantages. A user may be inclined to resort to using the h-index (in addition to JIF) because of its simplicity.
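
A hedged sketch of the percentile idea behind I3 follows (a simplification: the published I3 works with percentile rank classes and reference sets, which this toy version reduces to plain percentile ranks; all counts are invented).

import numpy as np

rng = np.random.default_rng(0)

# Citation counts of one journal's papers and of a field reference set
# (both invented for illustration).
journal_cites = np.array([0, 1, 2, 2, 5, 40])
field_cites = rng.poisson(3, size=10_000)

# Percentile rank of each paper within the field's skewed distribution:
# a non-parametric transform, so one outlier cannot dominate an average.
percentiles = np.array([(field_cites <= c).mean() * 100 for c in journal_cites])

# An I3-like quantity: sum (integrate) the percentile ranks over the papers.
print(np.round(percentiles, 1), "-> I3-like sum:", round(percentiles.sum(), 1))
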
Best,
Loet

On Wed, Sep 25, 2013 at 6:18 PM, Éric Archambault <eric.archambault at science-metrix.com> wrote:
Dear Nees Jan

Thank you for this addition to the growing list of journal indicators. Having a publicly accessible list of scores like this is really important and will play a significant role in the debate on journal impact. Having rigorous researchers such as those at CWTS carry on this line of work, initiated by Michel Zitt and Henry Small and continued by Moed, is certainly useful.

However, I feel this is still at the stage of a research project, and we should characterize our indicators carefully before telling the wider community that they are ready for prime time. We can't afford any more flaky journal impact indicators. This is now the fourth proposition for such an indicator, after the Journal Impact Factor, the SCImago indicator, and the Eigenfactor. In this context, allowing both practitioners and users to decide which one has the greatest scientific merit is essential. This requires that the methods and all ingredients be known to users and practitioners. Your paper helps in understanding the recipe, but some ingredients are missing from the public disclosure, and these need to be made public to help the community characterize your tool.

In particular, I think a few more details on the methods would be useful here. Firstly, more details about the bootstrapping method that you use to compute the stability intervals would be welcome. Have you written a paper on this technique? Secondly, an additional column giving the field of each journal would make the data more transparent and more useful to users.
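
(For readers unfamiliar with the general technique, here is a minimal sketch of one common way such a stability interval could be obtained - bootstrap resampling of a journal's papers - with invented citation counts. Whether CWTS uses exactly this procedure is precisely what the question above asks.)

import numpy as np

rng = np.random.default_rng(42)

# Citation counts of one journal's papers in the citation window
# (invented numbers).
cites = np.array([0, 0, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 31])

# Resample the papers with replacement many times, recomputing the mean
# each time; the 2.5th and 97.5th percentiles of those resampled means
# form a 95% bootstrap interval around the point estimate.
boot_means = [rng.choice(cites, size=cites.size, replace=True).mean()
              for _ in range(10_000)]
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {cites.mean():.2f}, 95% interval [{lower:.2f}, {upper:.2f}]")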

Do you have an explanation for the following behavior? The Journal of Engineering Education is one of the source journals with the highest SNIP in 2008 (see the table below). Is this an artifact, or did the journal really become that much better in 2008 than in 2007? I find the jump surprising, as it falls outside the boundaries that you calculated: the 2008 value (20.6) lies far above the 2007 upper bound (9.3). Of course, falling outside these bounds by chance is not impossible; the jump is just rather large.

Kind regards

Eric

Source title: Journal of Engineering Education (Source type: Journal; Print ISSN: 1069-4730)

Year    P     SNIP       SNIP (lower bound)   SNIP (upper bound)   % self cit
2002    207   7.901979   6.625                9.355                16%
2003    215   6.587213   5.393                7.838                11%
2004    194   9.710727   7.719                11.838               9%
2005    133   2.498504   1.685                3.463                48%
2006    111   4.458215   3.042                6.121                16%
2007    98    6.650165   4.437                9.274                23%
2008    93    20.62702   14.286               28.396               10%
2009    75    15.92148   12.191               20.305               16%
2010    77    16.12523   12.181               20.454               14%
2011    76    16.1012    11.783               21.15                14%
2012    90    12.49939   9.933                15.098               7%


From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Nees Jan van Eck
Sent: September-25-13 10:09 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: [SIGMETRICS] CWTS Journal Indicators

The 2012 SNIP values have been released on CWTS Journal Indicators (www.journalindicators.com). SNIP (source normalized impact per paper) is a freely available journal impact indicator that uses a source normalization mechanism to correct for differences in citation practices between fields of science. Compared with the journal impact factor, SNIP allows for more accurate comparisons between journals active in different scientific fields. SNIP is calculated by CWTS based on Elsevier's Scopus database. With the release of the 2012 SNIP values, stability intervals have been added to CWTS Journal Indicators. These intervals indicate the reliability of the SNIP value of a journal. For instance, if a journal's SNIP value is largely due to a single very highly cited publication, this is indicated by a wide stability interval. SNIP is the only freely available journal impact indicator that is presented with stability intervals.
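
(An editorial illustration of the single-highly-cited-publication point: a toy bootstrap comparison with invented citation counts. This assumes a simple paper-level bootstrap, which may differ from the actual CWTS procedure.)

import numpy as np

rng = np.random.default_rng(7)

def boot_interval(cites, n=10_000):
    # 95% bootstrap percentile interval for mean citations per paper.
    means = [rng.choice(cites, size=len(cites), replace=True).mean()
             for _ in range(n)]
    return np.percentile(means, [2.5, 97.5])

typical = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6])  # invented counts
with_outlier = np.append(typical[:-1], 200)         # one very highly cited paper

print("typical journal:        ", np.round(boot_interval(typical), 2))
print("with one outlier paper: ", np.round(boot_interval(with_outlier), 2))
# The second interval is far wider: the mean depends heavily on whether
# the outlier is drawn in a resample, and the interval makes that visible.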

Your feedback on CWTS Journal Indicators is greatly appreciated.

Best regards,
Nees Jan van Eck

========================================================
Nees Jan van Eck PhD
Researcher
Head of ICT

Centre for Science and Technology Studies
Leiden University
P.O. Box 905
2300 AX Leiden
The Netherlands

Willem Einthoven Building, Room B5-35
Tel:       +31 (0)71 527 6445
Fax:       +31 (0)71 527 3911
E-mail:    ecknjpvan at cwts.leidenuniv.nl
Homepage:  www.neesjanvaneck.nl
VOSviewer: www.vosviewer.com
========================================================




--
Professor, University of Amsterdam
Amsterdam School of Communications Research (ASCoR)
Honorary Professor, SPRU, University of Sussex <http://www.sussex.ac.uk/spru/>; Visiting Professor, ISTIC, Beijing <http://www.istic.ac.cn/Eng/brief_en.html>;
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en


