New Letter to the Editor

Loet Leydesdorff loet at LEYDESDORFF.NET
Mon Mar 30 15:02:59 EDT 2015


Dear Lutz, Christina, and colleagues, 

 

The WoS categories are not part of the solution, but part of the problem.

 

I found the paper in JoI; interesting as always. Thanks!

 

Best,

Loet

 

 

  _____  

Loet Leydesdorff 

Emeritus University of Amsterdam
Amsterdam School of Communications Research (ASCoR)

loet at leydesdorff.net; http://www.leydesdorff.net/

Honorary Professor, SPRU, University of Sussex;
Guest Professor, Zhejiang Univ., Hangzhou;
Visiting Professor, ISTIC, Beijing;
Visiting Professor, Birkbeck, University of London;

http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

 

From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Bornmann, Lutz
Sent: Monday, March 30, 2015 6:22 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] New Letter to the Editor

 


Hi Christina,

 

It is not necessary to lease a large dataset. You can buy just the data for
a single study, and this is not very expensive (e.g. at CWTS). You send them
the UTs or DOIs, and they add the advanced indicators. I propose buying the
MNCS and percentiles based on WoS subject categories. Then you can compare
the results based on the MNCS and on percentiles, and you can also calculate
the proportion of papers which are among the top 10% (top 1%) within a
field.
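
For what it is worth, a minimal sketch of that last calculation in Python,
assuming one has obtained a field-normalized percentile per paper (the
convention below – a percentile of 90 or above meaning "top 10%" – and the
example values are my own assumptions, not a prescribed CWTS format):

# Minimal sketch: proportion of papers in the top 10% (top 1%) of their field.
# Assumes 'percentiles' holds one value per paper, where a paper's percentile
# is the share of papers in the same WoS subject category and publication year
# that it outperforms (so >= 90 means "top 10%"). The convention and the
# values are assumptions for illustration.

def top_share(percentiles, threshold):
    """Share of papers at or above the given percentile threshold."""
    if not percentiles:
        return 0.0
    return sum(1 for p in percentiles if p >= threshold) / len(percentiles)

percentiles = [97.5, 42.0, 99.2, 88.0, 63.5]  # hypothetical values per paper
print(f"PP(top 10%): {top_share(percentiles, 90):.2%}")
print(f"PP(top  1%): {top_share(percentiles, 99):.2%}")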

 

I think this is a better way than using solutions which are sub-optimal but
possible with WoS.

 

Best,

 

Lutz

Sent from my iPad


On 30.03.2015 at 16:06, Pikas, Christina K.
<Christina.Pikas at JHUAPL.EDU> wrote:

Hi All-

This all makes sense to me – I follow the math and I understand the
limitations. But in practical terms, I need to do an analysis that I can
defend to senior engineers and that will likely get visibility at high
levels. In previous analyses, I have used % in WoS categories – top category
only – but that analysis was intended to get a feel for the publication
venues. My next project is looking at the top publications (and I need to
define “top” in a defensible way) from an institution, probably since ~1980,
though it would be even better to go back to ~1942. The institution
publishes mostly in engineering (electrical, aerospace), but just enough in
biomedicine to make it obvious that some normalization by field is needed.
The publication level is about 500 papers/year. I have access to WoS and
Scopus, but just through the web interface – there’s no budget for leasing a
larger data set.

 

There’s also normalization for time since publication to be considered.
Immediacy and decay vary by field. In a similar study done in ~1986 for the
same institution (hence the justification for going back to 1980), the
authors considered a paper “frequently cited” if it was cited more than 150
times in 25 years, 120 times in 15 years, or more than 50 times in 5 years.
The authors did not field-normalize, so the results were mostly from
chemistry.
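
As a rough sketch of how such age-dependent thresholds could be applied (the
cut-offs restate the ~1986 study; treating them as strict comparisons and
the bracketing by age are my own assumptions):

# Sketch: flag a paper as "frequently cited" using age-dependent thresholds
# modelled on the ~1986 study (>150 citations in 25 years, >120 in 15 years,
# >50 in 5 years). The bracketing between the stated points is an assumption.

THRESHOLDS = [(25, 150), (15, 120), (5, 50)]  # (years since publication, citations)

def frequently_cited(citations, years_since_publication):
    """True if the paper exceeds the citation cut-off for its age bracket."""
    for age, cutoff in THRESHOLDS:
        if years_since_publication >= age:
            return citations > cutoff
    return False  # too recent to judge against the stated cut-offs

print(frequently_cited(citations=180, years_since_publication=30))  # True
print(frequently_cited(citations=60, years_since_publication=7))    # True
print(frequently_cited(citations=40, years_since_publication=3))    # False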

 

I am probably – make that definitely – overthinking this, but I want to
deliver the highest-quality work, and I may get to publish this in the
institution’s journal (which is in the 4th quartile in its category :-( ).

 

Questions are:

1)     Given the weaknesses of the various methods, which do you recommend I
use for field normalization?

2)     Which approach makes sense for time normalization? Maybe only if an
article is older than the cited half-life?
  

3)     What should I consider in combining these? Does order matter – not
just for processing time, but substantively? (One way I could combine them
is sketched below.)
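
One way I could combine them: divide each paper’s citations by the mean
citations of papers in the same field and publication year (an MNCS-style
expected value). A rough sketch, with all field labels and numbers invented
for illustration:

# Sketch: MNCS-style normalization over (field, publication year) cells.
# Each paper's citation count is divided by the mean citations of a reference
# set for the same field and year, so field and time normalization are
# handled by one expected value. All numbers below are invented.

reference_means = {               # expected citations per (field, year) cell
    ("Engineering, Electrical", 2010): 9.4,
    ("Biomedicine", 2010): 24.1,
}

papers = [
    {"field": "Engineering, Electrical", "year": 2010, "citations": 19},
    {"field": "Biomedicine", "year": 2010, "citations": 19},
]

for p in papers:
    expected = reference_means[(p["field"], p["year"])]
    p["normalized"] = p["citations"] / expected

mncs = sum(p["normalized"] for p in papers) / len(papers)
print([round(p["normalized"], 2) for p in papers], round(mncs, 2))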

 

Thanks!

 

Christina

 

------

Christina K. Pikas

Librarian

The Johns Hopkins University Applied Physics Laboratory

Baltimore: 443.778.4812

D.C.: 240.228.4812

Christina.Pikas at jhuapl.edu

 

And still PhD candidate at Maryland
 

 

From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Jonathan Adams
Sent: Monday, March 30, 2015 8:52 AM
To: SIGMETRICS at listserv.utk.edu
Subject: Re: [SIGMETRICS] New Letter to the Editor

 


I agree with Lutz's general point that any reasonable methodology will
usually produce similar results, especially when applied to a reasonably
large sample of reasonably balanced data. We all recognise that the
underlying driver is that some teams/institutions/countries produce greater
numbers of more frequently cited publications. You would have to be perverse
for them not to 'do well'.

 

The problem that Loet is pointing us towards is that many analysts are
applying methodology to smaller samples, or to less well-balanced data, and
that they are teasing out factors regarding less 'peak' and more 'platform'
performance. If they are delivering reports to a client or an employing
organisation, then the methodology (and interpretation) they use may have a
significant and not always well-founded influence.

 

Jonathan Adams

Digital Science

 

On 30 March 2015 at 13:27, Bornmann, Lutz <lutz.bornmann at gv.mpg.de> wrote:

Hi Loet,

 

I agree that we have very good alternatives to the MNCS and to the use of
WoS categories. However, the alternatives have their own (mostly practical)
weaknesses. Furthermore, it seems that the different normalization methods
produce similar results (see
http://www.sciencedirect.com/science/article/pii/S1751157715000073).

 

Perhaps other people on this list can report which normalization method
they use (as a standard). In my opinion, it would be interesting to know this.

 

Best,

 

Lutz

 

From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Loet Leydesdorff
Sent: Monday, March 30, 2015 9:17 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] New Letter to the Editor

 


PS. 

 

Both discussions – the one about using the mean (MNCS) and the one about
using WoS Subject Categories for the normalization – seem now to have
stagnated.

 

1.       Instead of the mean, one should use percentile rank classes. This
was a step in a line of thought in 2010-2011 in which we first criticized
the “old” crown indicator and then proposed what was later labeled by CWTS
as the MNCS (Opthof & Leydesdorff, 2010; cf. Lundberg, 2007; Waltman et al.,
2011). We subsequently moved to percentiles and automated the “Integrated
Impact Indicator” (I3), which enables users to define their own percentile
rank classes, at http://www.leydesdorff.net/software/i3 (Leydesdorff &
Bornmann, 2011a). 
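
For illustration, a minimal sketch of such a percentile-rank-class
aggregation (the six classes and weights below mimic the NSF-style scheme
sometimes used with I3; the weights and percentile values are illustrative
assumptions, not the output of the software at the URL above):

# Sketch of an I3-style aggregation over user-defined percentile rank classes:
# each paper is placed in a class by its citation percentile, and the
# indicator is the weighted sum of papers over classes. Classes, weights, and
# percentile values are illustrative only.

# (lower percentile bound, weight); a paper falls into the first class whose
# bound it reaches, scanning from the top.
CLASSES = [(99, 6), (95, 5), (90, 4), (75, 3), (50, 2), (0, 1)]

def i3(percentiles):
    total = 0
    for p in percentiles:
        for bound, weight in CLASSES:
            if p >= bound:
                total += weight
                break
    return total

print(i3([99.5, 96.0, 80.0, 30.0]))  # 6 + 5 + 3 + 1 = 15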

 

Another line of thought was source-normalization or fractional counting of
the citations (Zitt & Small, 2008; Moed, 2010; Leydesdorff & Bornmann,
2011b). This was elaborated into SNIP and then into SNIP2. I mentioned
Mingers (2014) because this development seems to have got stuck now (does
the critique no longer matter?). SJR2 (Guerrero-Bote & Moya-Anegón, 2012),
of course, provides an alternative, but nobody can use this indicator
outside the institute that constructed it.
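
A minimal sketch of fractional counting of the citations in this sense,
assuming each received citation is weighted by the inverse of the number of
cited references in the citing paper (the reference-list lengths below are
invented):

# Sketch: source-normalized / fractional counting of citations. Each citation
# received is weighted by 1 / (number of cited references in the citing
# paper), which corrects for different citation potentials among fields.
# The reference-list lengths below are invented.

citing_reference_counts = [12, 45, 45, 8, 30]  # one entry per citing paper

integer_count = len(citing_reference_counts)                    # 5 citations
fractional_count = sum(1 / r for r in citing_reference_counts)  # ~0.29
print(integer_count, round(fractional_count, 3))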

 

In my opinion, I3 and source-normalization (fractional counting) of the
citations are still good ideas if one does not have WoS in-house through a
license. Perhaps this is an argument for what you call
“amateur-bibliometrics”: it is better than taking the mean.

 

2.       In principle, SNIP and fractional counting creatively solve the
determination of reference sets. The issue is not “normalization” per se,
but the specification of an expectation (to be used in the denominator). The
institutionalization in Scopus, however, may have been premature; or is
there room to move to SNIP3, and so forth (Waltman et al., 2013)? SNIP may
be too technical to be reproduced (or controlled) outside the context of its
production.

 

The determination of reference sets in terms of journals may not work or may
not be possible (Rafols & Leydesdorff, 2009). The sets are fuzzy and keep
changing. In the Leiden Rankings 2014, CWTS moved to direct clustering of
the citations, but the 800+ fields can no longer be validated (Ruiz-Castillo
& Waltman, 2015). A disadvantage is that nobody can reproduce the results
outside the institute which constructed these “fields”. We know that
algorithmic constructs do not necessarily match intellectual
classifications. Furthermore, because the delineation is paper-based
(instead of journal-based), one would have to update continuously. Thus, the
“fields” cannot be reproduced at a later moment in time.
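
To make the idea of algorithmically constructed fields concrete, a generic
sketch only (community detection with networkx on a toy direct-citation
graph; this is neither the CWTS procedure nor its scale):

# Generic sketch: construct "fields" by clustering a direct-citation network.
# Uses networkx's greedy modularity communities purely as an illustration;
# the tiny citation graph is invented and the method is not the CWTS one.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [  # (citing paper, cited paper), hypothetical
    ("p1", "p2"), ("p2", "p3"), ("p3", "p1"),
    ("p4", "p5"), ("p5", "p6"), ("p6", "p4"),
    ("p1", "p4"),  # one cross-cluster citation
]
G = nx.Graph(edges)  # treat citation links as undirected for clustering

fields = greedy_modularity_communities(G)
for i, field in enumerate(fields, 1):
    print(f"field {i}: {sorted(field)}")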

 

If one is not able to specify an expectation, one may be better advised not
to do so at all. In particular, the specification of uncertain (or
erroneous) expectations in research evaluations may have detrimental effects
(e.g., Rafols et al., 2012). 

 

We know this also from the discussion about using impact factors for the
assessment of individual papers or institutional units across fields. One
easily generates error without the possibility of specifying the
uncertainty, because the error is not only in the measurement
(methodological) but also in the conceptualization (theoretical).

 

Best,

Loet

 

References:

Guerrero-Bote, V. P., & Moya-Anegón, F. (2012). A further step forward in
measuring journals’ scientific prestige: The SJR2 indicator. Journal of
Informetrics, 6(4), 674-688.

Leydesdorff, L., & Bornmann, L. (2011a). Integrated Impact Indicators (I3)
compared with Impact Factors (IFs): An alternative design with policy
implications. Journal of the American Society for Information Science and
Technology, 62(11), 2133-2146. doi: 10.1002/asi.21609. 

Leydesdorff, L., & Bornmann, L. (2011b). How fractional counting affects the
Impact Factor: Normalization in terms of differences in citation potentials
among fields of science. Journal of the American Society for Information
Science and Technology, 62(2), 217-229. 

Lundberg, J. (2007). Lifting the crown—citation z-score. Journal of
Informetrics, 1(2), 145-154. 

Mingers, J. (2014). Problems with SNIP. Journal of Informetrics, 8(4),
890-894. 

Moed, H. F. (2010). Measuring contextual citation impact of scientific
journals. Journal of Informetrics, 4(3), 265-277. 

Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field
normalizations in the CWTS (“Leiden”) evaluations of research performance.
Journal of Informetrics, 4(3), 423-430. 

Rafols, I., & Leydesdorff, L. (2009). Content-based and Algorithmic
Classifications of Journals: Perspectives on the Dynamics of Scientific
Communication and Indexer Effects. Journal of the American Society for
Information Science and Technology, 60(9), 1823-1835. 

Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A.
(2012). How journal rankings can suppress interdisciplinary research: A
comparison between innovation studies and business & management. Research
Policy, 41(7), 1262-1282.

Ruiz-Castillo, J., & Waltman, L. (2015). Field-normalized citation impact
indicators using algorithmically constructed classification systems of
science. Journal of Informetrics, 9(1), 102-117. 

Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan,
A. F. J. (2011). Towards a new crown indicator: Some theoretical
considerations. Journal of Informetrics, 5(1), 37-47. 

Waltman, L., van Eck, N. J., van Leeuwen, T. N., & Visser, M. S. (2013).
Some modifications to the SNIP journal impact indicator. Journal of
Informetrics, 7(2), 272-285.

Zitt, M., & Small, H. (2008). Modifying the journal impact factor by
fractional citation weighting: The audience factor. Journal of the American
Society for Information Science and Technology, 59(11), 1856-1860.

 

 


  _____  


Loet Leydesdorff 

Emeritus University of Amsterdam
Amsterdam School of Communications Research (ASCoR)

loet at leydesdorff.net; http://www.leydesdorff.net/

Honorary Professor, SPRU, University of Sussex;
Guest Professor, Zhejiang Univ., Hangzhou;
Visiting Professor, ISTIC, Beijing;
Visiting Professor, Birkbeck, University of London;

http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en

 

From: Loet Leydesdorff [mailto:loet at leydesdorff.net] 
Sent: Sunday, March 29, 2015 8:27 PM
To: 'ASIS&T Special Interest Group on Metrics'
Subject: RE: [SIGMETRICS] New Letter to the Editor

 

In my opinion, the standard indicator in a field is defined by its frequency
of professional use (and not by the advantages and disadvantages of the
relevant indicators). In other words, if professional bibliometricians (and
not amateur-bibliometricians) mostly use the MNCS (based on WoS subject
categories), then this is the standard.

 

Perhaps this is an argument for “amateur-bibliometrics” :-) because the
suggestion of normalization in professional bibliometrics is – as you
claim – most of the time erroneous (e.g., Mingers, 2014). 

 

Best,

Loet

 

 

Reference: 

Mingers, J. (2014). Problems with SNIP. Journal of Informetrics, 8(4),
890-894.

 





 

-- 

Dr Jonathan Adams

Chief Scientist, Digital Science

Visiting Professor, King's College London

http://www.kcl.ac.uk/sspp/policy-institute/people/kpi-visiting/adams.aspx

 

M/ +44 7964 908449

E/ j.adams at digital-science.com


Macmillan Publishers Ltd

The Glasshouse Building

2 Trematon Walk

(via Wharfdale Road)

London N1 9FN, UK

http://www.timeshighereducation.co.uk/news/research/research-intelligence-proof-is-in-the-numbers/411118.article

 

 


 


