[Sigmetrics] New paper

Bornmann, Lutz lutz.bornmann at gv.mpg.de
Mon Jun 26 02:37:54 EDT 2017


Dear Loet and William,

We still owe you answers to your open questions/comments on our study “Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data” (https://arxiv.org/abs/1706.06515).

Comment by William: But I don't see any evidence for the assertion that the lists will probably be more reliable. I'm asking because it seems rather counterintuitive that an automatically generated list that can be edited by an author would be better than a list manually created by an author. Indeed, at Mendeley we have author profiles that are manually created & we're moving to automatically adding publications to them, using Scopus, because the lists are often incomplete.

Answer: The problem is that many Scopus profiles are not edited by the authors. In my opinion, it would be helpful if Elsevier provided information on whether a publication list has been manually (and continuously) edited or not.

1. Comment by Loet: One cannot infer causality from correlations.
Answer: There could be a causal relationship, along the lines of the Matthew effect: those who have early success are given more (resources, grants, good students, whatever), which helps them have even more success later. Or it could be a spurious relationship: the qualities that make people publish in top journals early on may also cause them to publish successfully later (in terms of citations). But even if a relationship is spurious, that doesn’t mean it can’t be used for selection and prediction. (For example, if your big toe starts hurting and then it rains, that doesn’t mean your toe caused it to rain! But the same atmospheric conditions that caused the rain may have caused your toe to hurt, so your toe can be a good predictor of the weather even though the relationship isn’t causal.)
2. Comment by Loet: Should the ANOVA not be Bonferroni-corrected? These weak correlations may be non-significant.
Answer: This correction is, as a rule, necessary for the multiple pairwise comparisons that may follow an ANOVA. However, we abstained from calculating these comparisons. Even if we took the unusual and questionable step of applying a Bonferroni correction here, the results would remain statistically highly significant.
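To make the point concrete, here is a minimal sketch (my own illustration, with made-up numbers rather than those from the study) of why a weak correlation in a very large sample survives a Bonferroni correction: the p-value for a Pearson correlation is approximated via the Fisher z-transformation, then multiplied by the number of tests.

```python
import math

def corr_p_value(r, n):
    """Two-sided p-value for a Pearson correlation r with n observations,
    using the Fisher z-transformation (normal approximation)."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return math.erfc(abs(z) / math.sqrt(2))

r, n = 0.1, 10_000   # weak correlation, large sample (illustrative values)
m = 10               # hypothetical number of comparisons to correct for

p = corr_p_value(r, n)
p_bonferroni = min(1.0, p * m)   # Bonferroni: multiply p by the number of tests

print(f"raw p = {p:.2e}, Bonferroni-adjusted p = {p_bonferroni:.2e}")
# even after the correction, p stays far below 0.05
```

With n in the tens of thousands, even r = 0.1 yields a z-statistic around 10, so multiplying the p-value by ten (or a hundred) leaves it vanishingly small.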

3. Comment by Loet: Are you able to specify the chance that the prediction is wrong in an individual case (like a hiring decision)?
Answer: We are certainly not saying these relationships are deterministic. While early success is correlated with later success, we do not say it guarantees it, and we caution against only relying on the JIF.
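As a rough illustration of how large the individual-level error can be (my own sketch, not an analysis from the paper): under a bivariate normal model with an assumed early-late correlation of r = 0.3, a quick Monte Carlo estimate shows that a sizeable fraction of researchers in the top quartile early on still end up below average later.

```python
import math
import random

random.seed(42)
r = 0.3        # assumed correlation between early and late success (illustrative)
n = 200_000    # Monte Carlo samples

TOP_QUARTILE_CUTOFF = 0.674   # 75th percentile of the standard normal
wrong = total = 0
for _ in range(n):
    early = random.gauss(0.0, 1.0)
    # late success shares correlation r with early success
    late = r * early + math.sqrt(1 - r * r) * random.gauss(0.0, 1.0)
    if early > TOP_QUARTILE_CUTOFF:   # selected on early success...
        total += 1
        if late < 0.0:                # ...yet below average later
            wrong += 1

print(f"P(below average later | top quartile early) ≈ {wrong / total:.2f}")
```

Under these assumed numbers, roughly a third of the early top quartile falls below average later, which is exactly why the JIF should not be the sole criterion in a hiring decision.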


From: loet at leydesdorff.net [mailto:leydesdorff at gmail.com] On Behalf Of Loet Leydesdorff
Sent: Thursday, June 22, 2017 12:03 PM
To: Bornmann, Lutz; 'William Gunn'
Cc: SCISIP at listserv.nsf.gov; 'Richard Williams'; sigmetrics at mail.asis.org
Subject: RE: [Sigmetrics] New paper

Dear Lutz,

The inference from the journal level to the individual remains vulnerable as an ecological fallacy:


  1.  One cannot infer causality from correlations;
  2.  Should the ANOVA not be Bonferroni-corrected? These weak correlations may be non-significant.
  3.  Are you able to specify the chance that the prediction is wrong in an individual case (like a hiring decision)?

Best,
Loet


________________________________
Loet Leydesdorff
Professor, University of Amsterdam
Amsterdam School of Communication Research (ASCoR)
loet at leydesdorff.net; http://www.leydesdorff.net/
Associate Faculty, SPRU (http://www.sussex.ac.uk/spru/), University of Sussex;
Guest Professor, Zhejiang University (http://www.zju.edu.cn/english/), Hangzhou; Visiting Professor, ISTIC (http://www.istic.ac.cn/Eng/brief_en.html), Beijing;
Visiting Fellow, Birkbeck (http://www.bbk.ac.uk/), University of London;
http://scholar.google.com/citations?user=ych9gNYAAAAJ&hl=en


From: SIGMETRICS [mailto:sigmetrics-bounces at asist.org] On Behalf Of Bornmann, Lutz
Sent: Thursday, June 22, 2017 10:33 AM
To: William Gunn <william.gunn at gmail.com>
Cc: SCISIP at listserv.nsf.gov; Richard Williams <Richard.A.Williams.5 at nd.edu>; SIGMETRICS (sigmetrics at mail.asis.org)
Subject: Re: [Sigmetrics] New paper

Dear William,

Many thanks for your comments! Please find my answers below:

From: William Gunn [mailto:william.gunn at gmail.com]
Sent: Wednesday, June 21, 2017 9:28 PM
To: Bornmann, Lutz
Cc: SCISIP at LISTSERV.NSF.GOV; SIGMETRICS (sigmetrics at mail.asis.org)
Subject: Re: [Sigmetrics] New paper

Hi Lutz,
I've read your paper with interest & I think the analysis is well done, though I have to say pre-registration of your study would have strengthened the findings, given the small effect sizes you report.
I had a few questions & would be grateful for any response:
The main question I had was whether you plan to do any follow-up work to disentangle the correlation between presence at an elite institution, publication in a high-IF journal, and higher mean or total normalized citations. It seems to me, not being as familiar with the trends among indicators as you, that you have provided nearly equal support for two different ways of picking early investigators likely to be productive: picking them according to Q1, as you describe, or picking those who are at elite institutions early in their career (as well as picking according to number of papers). Just wondering whether you're planning to try to get at causality in some way among these interrelated factors?
It would definitely be interesting to undertake follow-up studies (and to consider further variables, such as institutions or disciplines). These can (and will) be done by us, but I hope that other people will do this, too.
Other things that occurred to me during reading:
Why do you think profiles manually created by researchers will be better than profiles automatically generated and then edited?

In the paper, we explain this as follows: “RID provides a possible solution to the author ambiguity problem within the scientific community. The problem of polysemy means, in this context, that multiple authors are merged in a single identifier; the problem of synonymy entails multiple identifiers being available for a single author (Boyack, Klavans, Sorensen, & Ioannidis, 2013). Each researcher is assigned a unique identifier in order to manage his or her publication list. The difference between this and similar services provided by Elsevier within the Scopus database is that Elsevier automatically manages the publication profiles of researchers (authors), with the profiles being able to be manually revised. With RID, researchers themselves take the initiative, create a profile, and manage their publication lists. Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier)”.

Instead of using publication early in the career and publication late in career to define a cohort which presumably published continuously, couldn't you write a query, since you have the data, to actually select only those who have indeed published continuously?

We will publish further results with additional data. It would definitely be interesting to classify the researchers into different groups (as you recommend).

Am I correct that the main difference between the three figures is that there's a smaller time window in 2 than 1 and 3 than 2?

Yes, this is correct.

Could you explain the reversion in mean citations of the upper cohorts over time in terms of the divided attention allocated to the increased overall publication output? In other words, could it be that as the overall number of publications grows, attention gets further divided and mean citation rates fall?

An interesting interpretation! It might be correct if the impact of publications is mainly triggered by the authors' names rather than by the content of the individual papers, and/or if the authors have published several similar papers which can be cited simultaneously.

Would you expect to see the same results using CiteScore?

Yes, both metrics measure journal impact similarly. The most important thing is to use these metrics in normalized variants.
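For readers unfamiliar with what "normalized variants" means in practice, here is a minimal sketch (my own illustration with made-up fields and numbers): each paper's citation count is divided by the average citation count of papers from the same field and publication year, so a score above 1 means above-average impact for that reference set.

```python
from collections import defaultdict

# Hypothetical papers: (field, year, citations) -- made-up numbers
papers = [
    ("cell biology", 2015, 40),
    ("cell biology", 2015, 10),
    ("mathematics",  2015, 4),
    ("mathematics",  2015, 2),
]

# Mean citations per (field, year) reference set
totals = defaultdict(lambda: [0, 0])   # (field, year) -> [citation sum, paper count]
for field, year, cites in papers:
    totals[(field, year)][0] += cites
    totals[(field, year)][1] += 1
means = {key: s / c for key, (s, c) in totals.items()}

# Normalized score: citations relative to the field/year average
scores = [cites / means[(field, year)] for field, year, cites in papers]
print(scores)  # a 4-citation maths paper outscores a 10-citation biology paper
```

The point of the normalization is visible in the output: fields with very different citation cultures become comparable, which is why both a normalized JIF and a normalized CiteScore would be expected to behave similarly.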

Again, grateful for any response!


William Gunn
+1 (650) 614-1749
http://synthesis.williamgunn.org/about/

On Wed, Jun 21, 2017 at 5:26 AM, Bornmann, Lutz <lutz.bornmann at gv.mpg.de> wrote:
Dear colleague,

You might be interested in the following paper:

Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data

Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and rewards for future work. The use of JIFs in this way has been heavily criticized, however. Using a large data set with many thousands of publication profiles of individual researchers, this study tests the ability of the JIF (in its normalized variant) to identify, at the beginning of their careers, those candidates who will be successful in the long run. Instead of bare JIFs and citation counts, the metrics used here are standardized according to Web of Science subject categories and publication years. The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who published papers later on with a citation impact above or below average in a field and publication year - not only in the short term, but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered.

Available at: https://arxiv.org/abs/1706.06515

Best,

Lutz

---------------------------------------

Dr. Dr. habil. Lutz Bornmann
Division for Science and Innovation Studies
Administrative Headquarters of the Max Planck Society
Hofgartenstr. 8
80539 Munich
Tel.: +49 89 2108 1265
Mobil: +49 170 9183667
Email: bornmann at gv.mpg.de
WWW: www.lutz-bornmann.de
ResearcherID: http://www.researcherid.com/rid/A-3926-2008
ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann


_______________________________________________
SIGMETRICS mailing list
SIGMETRICS at mail.asis.org
http://mail.asis.org/mailman/listinfo/sigmetrics
