[Sigmetrics] "Use of the journal impact factor for assessing individual articles need not be wrong" by Ludo Waltman and Vincent Traag

Mark C. Wilson mc.wilson at auckland.ac.nz
Fri Mar 10 13:07:10 EST 2017


> On 11/03/2017, at 06:49 , David Wojick <dwojick at craigellachie.us> wrote:
> 
> I have a rather different conjecture as to why the IF is a useful proxy for article quality. It has to do with the tiered structure of the system of journals. Journals with high IFs tend to attract many submissions and therefore have high rejection rates. Thus there is extensive article-level evaluation.

I don’t see how this last part follows. Indeed, don’t such glamour journals use extensive desk rejection by editors not qualified to referee the paper in detail?

> Only those articles judged to be of the highest quality get published. It follows that getting published in a high-IF journal is indicative of relatively high article quality.
> 
> Opponents of the IF argue that articles should be individually evaluated at the institutional level, but the high-IF journal is better positioned to do this, because it has a large sample from many authors.
> 
> David
> 
> David Wojick, Ph.D.
> http://insidepublicaccess.com/
> 
> At 07:17 AM 3/10/2017, Loet Leydesdorff wrote:
> 
>> re: "Use of the journal impact factor for assessing individual articles need not be wrong" by Ludo Waltman and Vincent Traag at https://www.cwts.nl/blog… 
>> 
>> I agree with the authors that there are two different arguments against using the impact factor of a journal (IF) as a proxy for the quality of papers in the journal: (1) the skewness of the citation distribution, and (2) the ecological fallacy.
>> 
>> 1. Against argument 1, the authors reason as follows: Let us assume (in scenario 2, at p. 16) that “journals are relatively homogeneous in terms of the values of the articles they publish.” This relatively flat distribution of the non-observable “values” is, for (unknown) statistical reasons, represented by the skewed distribution of citations to these articles; the latter distribution, unlike the former, can be observed. In this scenario, a journal measure such as the journal impact factor (in other words, the mean) could be a better predictor of the “value” of an article than its individual citation rate. 
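
To make this scenario concrete, here is a minimal simulation sketch. It is not Waltman and Traag's model: the lognormal citation noise, the sample sizes, and all parameter values below are hypothetical choices for illustration only.

# Scenario 2 sketch: journals are homogeneous in latent article "value",
# while individual citation counts are skewed and noisy.
import numpy as np

rng = np.random.default_rng(0)
n_journals, n_articles = 200, 50                         # hypothetical sizes

journal_value = rng.uniform(0.5, 3.0, size=n_journals)   # one latent value per journal
value = np.repeat(journal_value, n_articles)             # shared by all its articles
journal_id = np.repeat(np.arange(n_journals), n_articles)

# Skewed (lognormal) citation counts scattered around each article's latent value.
citations = rng.lognormal(mean=np.log(value), sigma=1.0)

# Impact-factor-like proxy: mean citations of the journal an article appears in.
impact_factor = np.bincount(journal_id, weights=citations) / n_articles
if_of_article = impact_factor[journal_id]

print("corr(value, own citations): %.2f" % np.corrcoef(value, citations)[0, 1])
print("corr(value, journal IF):    %.2f" % np.corrcoef(value, if_of_article)[0, 1])

Under these assumed conditions the journal mean tracks an article's latent value more closely than the article's own noisy, skewed citation count does, which is the point of the scenario.
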
>> 
>> Unlike the reasoning of others who criticize the use of the IF for the evaluation of individual papers, the reasoning above would be free of assumptions. :-) 
>> 
>> 2. Let me add that the ecological fallacy (Robinson, 1950) does not imply that the value of an attribute to an individual is independent of the value at the group level, but that the latter may fail as a predictor of the former. One loses control of the prediction: in some cases it works; in others not. 
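
For a concrete illustration of how the aggregate may fail as a predictor for individuals, the group-level correlation can even take the opposite sign of the individual-level one. The data below are purely synthetic and chosen only to make that point.

# Ecological-fallacy sketch: group-level and individual-level correlations
# can differ, here even in sign.
import numpy as np

rng = np.random.default_rng(1)
n_groups, per_group = 20, 100                         # hypothetical sizes

x_all, y_all, group_x, group_y = [], [], [], []
for g in range(n_groups):
    centre = 0.2 * g                                  # group means rise together ...
    x = centre + rng.normal(scale=3.0, size=per_group)
    y = centre - 0.8 * (x - centre) + rng.normal(scale=1.0, size=per_group)
    # ... while within each group y falls as x rises.
    x_all.append(x); y_all.append(y)
    group_x.append(x.mean()); group_y.append(y.mean())

x_all, y_all = np.concatenate(x_all), np.concatenate(y_all)
print("individual-level corr: %.2f" % np.corrcoef(x_all, y_all)[0, 1])      # negative
print("group-level corr:      %.2f" % np.corrcoef(group_x, group_y)[0, 1])  # positive
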
>> 
>> See: Kreft, G. G., & de Leeuw, E. (1988). The see-saw effect: A multilevel problem? Quality and Quantity, 22(2), 127-137. 
>> Abstract: Studies of school effectiveness often use measures of association, such as regression weights and correlation coefficients. These statistics are used to estimate the size of the change or “effect” that would occur in one variable (for example reading ability) given a particular change in another variable (for example sex and sex ratio). In this paper we explore the limitations of regression coefficients for use in a contextual analysis, in which both individual and contextual variables are included as independent variables. In our example “individual sex” and a context variable “sex ratio of the school class” are regressors, and reading ability is the dependent variable. Our conclusion is that researchers should be careful in interpreting effects from multiple regression analysis when dealing with aggregate data. Even in the case (as in our example) when individual and contextual variables are made orthogonal to avoid multicollinearity, interpretation of the effects of the aggregate variable is problematic.
>> 
>> See also:
>> 
>> · Robinson, W. S. (1950). Ecological correlations and the behavior of individuals. American Sociological Review, 15, 351-357. 
>> 
>> · Leydesdorff, L., Wouters, P., & Bornmann, L. (2016). Professional and citizen bibliometrics: complementarities and ambivalences in the development and use of indicators – a state-of-the-art report. Scientometrics, 109(3), 2129-2150. 
>> 
>> Best wishes, 
>> Loet
>>  
>> 
>> Loet Leydesdorff 
>> Professor, University of Amsterdam
>> Amsterdam School of Communication Research (ASCoR)
>> loet at leydesdorff.net ; http://www.leydesdorff.net/ 
>> Associate Faculty, SPRU, University of Sussex; 
>> Guest Professor Zhejiang Univ., Hangzhou; Visiting Professor, ISTIC, Beijing;
>> Visiting Fellow, Birkbeck, University of London; 
>> http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en
>>  
>> 
>> 



