New Letter to the Editor

Loet Leydesdorff loet at LEYDESDORFF.NET
Tue Mar 31 02:47:20 EDT 2015


> Furthermore, it seems that the different normalization methods produce similar results (see http://www.sciencedirect.com/science/article/pii/S1751157715000073).

Dear Lutz,

I read the paper. In my opinion, this conclusion (in the email) is overstated. The results are similar only in certain respects, namely in relation to the F1000 scores and hence for the biomedical sciences. The latter fields are well known for having a research front (Price, 2000), and they provided the model for the impact factor (Martyn & Gilchrist, 1968; Bensman, 2007).

Figure 1 first shows that MNCS indeed normalizes to the mean, and that the (Hazen) percentiles average approximately 50. These results are analytically expected: the mean of the Hazen percentiles 100 * (i - 0.5) / n is exactly 50 for any complete set. The source-normalized indicators are empirical and can therefore show more variation; average citation rates, of course, fluctuate among fields.

Table 3 then shows that the highest rank correlations between the indicators and the F1000 scores are approximately .30. Table 4 shows that the “Excellent” scores of F1000 are the best predictors of the non-normalized citation scores, but with more error than for the percentile-normalized citation scores or MNCS. The “Very good” scores are better predictors of the percentile-based scores (.36) than of MNCS (.27); the source-normalized scores are in between (.29).

Correlations at the aggregate level do not inform us about the quality of the various indicators at the disaggregated level of, for example, institutional sets. (Jonathan made this point before.) I once saw the results of a CWTS evaluation of my own institute (ASCoR), where rather small differences sometimes mattered quite a bit.

Answering Christina: 

I would use fractionally counted, percentile-normalized citation counts if you don’t wish to buy the results from “professional bibliometricians”. You will probably have to do the fractional counting yourself, by weighting each citation as 1/NRef of the citing paper. (If I am not mistaken, this is SNCS2 in the Leiden terminology.)
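
As a minimal sketch in Python (assuming you have, for each cited paper, the NRef values of its citing papers; the function name and the numbers in the example are mine):

def fractional_citation_score(citing_nrefs):
    # Weight each citation by 1/NRef of the citing paper;
    # citing papers without cited references are skipped.
    return sum(1.0 / nref for nref in citing_nrefs if nref > 0)

# A paper cited by three papers with 20, 35, and 50 cited references:
print(fractional_citation_score([20, 35, 50]))  # 0.0985..., not the raw count 3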


It is not the best way, but it is doable. You can perhaps use i3.exe at http://www.leydesdorff.net/software/i3, but you would have to replace the TC values in the input files with the fractionally counted ones. (I am not sure that everything still works, because I made this program four years ago; feel free to let me know if there are problems.) Use the Hazen (1914) option for the percentile ranking.
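
If you prefer to script the percentile ranking yourself: Hazen (1914) assigns the i-th score in ascending order the percentile 100 * (i - 0.5) / n. A sketch in Python, under the assumption that tied scores share their average rank (the function name is mine):

def hazen_percentiles(scores):
    # Hazen (1914): percentile = 100 * (rank - 0.5) / n,
    # with ascending ranks; ties are assumed to share their average rank.
    n = len(scores)
    percentiles = []
    for s in scores:
        below = sum(1 for t in scores if t < s)
        ties = sum(1 for t in scores if t == s)
        avg_rank = below + (ties + 1) / 2.0
        percentiles.append(100.0 * (avg_rank - 0.5) / n)
    return percentiles

print(hazen_percentiles([3, 1, 4, 1, 5]))  # [50.0, 20.0, 70.0, 20.0, 90.0]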


Fractional counting enables you to avoid using the WoS Subject Categories; percentiles are better than means because of the skew of the citation distributions (see the small illustration below). The results may be very different from those of MNCS (which is “the standard practice”), and some evaluees may not like them. I would use WoS and not Scopus, because of the trade journals in the latter database. (The differences may be small, but they can matter for fractional counting.)
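
The point about the skew can be seen with made-up counts:

citations = [0, 1, 1, 2, 3, 4, 120]  # one highly cited outlier, as is typical
mean = sum(citations) / len(citations)
print(round(mean, 1))                         # 18.7
print(sum(1 for c in citations if c < mean))  # 6

Six of the seven papers score below the field mean, whereas their percentile ranks depend only on the rank order and are not driven by the size of the outlier.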


I don’t share the expectation of concordance that some of our colleagues seem to entertain; the differences may be interesting. Since you would have all the data, you can show the differences and also compare them with the commercial set from Leiden (if you buy those data). The work is mainly in collecting the citation counts (TC) and the NRef values of the citing papers.

Best,

Loet


References:

Bensman, S. J. (2007). Garfield and the impact factor. Annual Review of Information Science and Technology, 41(1), 93-155. 

Hazen, A. (1914). Storage to be provided in impounding reservoirs for municipal water supply. Transactions of the American Society of Civil Engineers, 77, 1539-1640.

Martyn, J., & Gilchrist, A. (1968). An Evaluation of British Scientific Journals. London: Aslib.
