[Sigmetrics] New paper

William Gunn william.gunn at gmail.com
Thu Jun 22 15:37:36 EDT 2017


Thanks very much for the responses. One follow-up, if I may. You state:
"Although it cannot be taken for granted that the publication lists on RID
are error-free, these lists will probably be more reliable than the
automatically generated lists (by Elsevier)".

But I don't see any evidence for the assertion that the lists will probably
be more reliable. I ask because it seems rather counterintuitive that a list
manually created by an author would be better than an automatically
generated list that the author can edit. Indeed, at Mendeley we have author
profiles that are manually created & we're moving to adding publications to
them automatically, using Scopus, because the manually created lists are
often incomplete.

William Gunn
+1 (650) 614-1749
http://synthesis.williamgunn.org/about/

On Jun 22, 2017 1:34 AM, "Bornmann, Lutz" <lutz.bornmann at gv.mpg.de> wrote:

Dear William,



Many thanks for your comments! Please find my answers below:



*From:* William Gunn [mailto:william.gunn at gmail.com]
*Sent:* Wednesday, June 21, 2017 9:28 PM
*To:* Bornmann, Lutz
*Cc:* SCISIP at LISTSERV.NSF.GOV; SIGMETRICS (sigmetrics at mail.asis.org)
*Subject:* Re: [Sigmetrics] New paper



Hi Lutz,

I've read your paper with interest & I think the analysis is well done,
though I have to say pre-registration of your study would have strengthened
the findings, given the small effect sizes you report.

I had a few questions & would be grateful for any response:

The main question I had was whether you plan any follow-up work to
disentangle the correlations between presence at an elite institution,
publication in a high-IF journal, and higher mean or total normalized
citations. It seems to me, not being as familiar with the trends among
indicators as you, that you have provided nearly equal support for two
different ways of picking early investigators likely to be productive:
picking them according to Q1 publications as you describe, or picking those
who are at elite institutions early in their career (as well as picking
according to number of papers). Are you planning to try to get at causality
among these interrelated factors in some way?

It would definitely be interesting to undertake follow-up studies (and to
consider further variables, such as institutions or disciplines). We plan to
do some of these ourselves, but I hope that other people will take this up,
too.
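
One way such a follow-up might begin, short of a causal design, is by
putting the correlated predictors into a single regression model. Below is a
minimal sketch in Python on synthetic data; the variable names, effect
sizes, and data are invented for illustration and are not taken from the
study:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic data (invented): elite affiliation raises the chance of an
    # early Q1 publication, and both relate to later citation impact.
    elite = rng.binomial(1, 0.2, n)                  # early elite affiliation
    q1 = rng.binomial(1, 0.2 + 0.3 * elite, n)       # early Q1 publication
    citations = 1.0 + 0.5 * q1 + 0.3 * elite + rng.normal(0, 1, n)

    # Joint least squares: each coefficient estimates one predictor's
    # association with citations while holding the other predictor fixed.
    X = np.column_stack([np.ones(n), q1, elite])
    beta, *_ = np.linalg.lstsq(X, citations, rcond=None)
    print(dict(zip(["intercept", "q1", "elite"], beta.round(2))))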

Other things that occurred to me during reading:
Why do you think profiles manually created by researchers will be better
than profiles automatically generated and then edited?



In the paper, we explain this as follows: "RID provides a possible solution
to the author ambiguity problem within the scientific community. The
problem of polysemy means, in this context, that multiple authors are
merged in a single identifier; the problem of synonymy entails multiple
identifiers being available for a single author (Boyack, Klavans, Sorensen,
& Ioannidis, 2013). Each researcher is assigned a unique identifier in
order to manage his or her publication list. The difference between this
and similar services provided by Elsevier within the Scopus database is
that Elsevier automatically manages the publication profiles of researchers
(authors), with the profiles being able to be manually revised. With RID,
researchers themselves take the initiative, create a profile, and manage
their publication lists. Although it cannot be taken for granted that the
publication lists on RID are error-free, these lists will probably be more
reliable than the automatically generated lists (by Elsevier)".
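
To make the two failure modes concrete, here is a toy Python illustration
(the identifiers and names are invented):

    # Toy illustration of the two author-ambiguity problems (IDs invented).

    # Polysemy: one identifier wrongly merges two different people.
    id_to_authors = {"A-0001-2008": ["J. Smith (physics)",
                                     "J. Smith (sociology)"]}

    # Synonymy: one person is split across several identifiers.
    author_to_ids = {"J. Smith (physics)": ["A-0001-2008", "B-0002-2010"]}

    def is_unambiguous(id_to_authors, author_to_ids):
        # One-to-one in both directions: no polysemy, no synonymy.
        return (all(len(a) == 1 for a in id_to_authors.values())
                and all(len(i) == 1 for i in author_to_ids.values()))

    print(is_unambiguous(id_to_authors, author_to_ids))  # -> False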



Instead of using a publication early in the career and a publication late in
the career to define a cohort that presumably published continuously,
couldn't you write a query, since you have the data, to select only those
who actually did publish continuously?



We will publish further results with additional data. It would definitely be
interesting to classify the researchers into different groups (as you
recommend).
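
For concreteness, the continuity filter William describes could be sketched
as follows in Python, assuming a simple table of (researcher, year) records;
the data layout here is an assumption, not the study's actual schema:

    from collections import defaultdict

    # Assumed input layout: one (researcher_id, publication_year) pair per
    # publication. The records below are invented.
    records = [("R1", 2000), ("R1", 2001), ("R1", 2002),
               ("R2", 2000), ("R2", 2002)]
    window = range(2000, 2003)  # the career window, e.g. 2000-2002

    years = defaultdict(set)
    for rid, year in records:
        years[rid].add(year)

    # Keep only researchers with at least one publication in every year.
    continuous = {rid for rid, ys in years.items()
                  if all(y in ys for y in window)}
    print(continuous)  # -> {'R1'}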



Am I correct that the main difference between the three figures is the time
window, which is smaller in Figure 2 than in Figure 1 and smaller in Figure
3 than in Figure 2?



Yes, this is correct.



Could you explain the reversion in mean citations of the upper cohorts over
time in terms of the divided attention allocated to the increased overall
publication output? In other words, could it be that as the overall number
of publications grows, attention gets further divided and mean citation
rates fall?



An interesting interpretation! It might be correct if the impact of
publications is driven mainly by the authors' names and less by the content
of the individual papers, and/or if the authors have published several
similar papers which can be cited simultaneously.
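
As a back-of-the-envelope illustration of the mechanism in William's
question (the numbers are invented): if the total citation "attention"
available stays roughly fixed while publication output grows, the mean must
fall:

    # Invented numbers: a fixed pool of citations spread over growing output.
    total_citations = 10_000                 # assumed roughly constant
    for n_papers in (1_000, 2_000, 4_000):
        mean = total_citations / n_papers
        print(f"{n_papers} papers -> {mean:.1f} mean citations per paper")
    # Prints 10.0, then 5.0, then 2.5: the mean halves as output doubles.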



Would you expect to see the same results using CiteScore?



Yes, both metrics measure journal impact similarly. The most important
thing is to use these metrics in normalized variants.



Again, grateful for any response!



William Gunn
+1 (650) 614-1749
http://synthesis.williamgunn.org/about/



On Wed, Jun 21, 2017 at 5:26 AM, Bornmann, Lutz <lutz.bornmann at gv.mpg.de>
wrote:

Dear colleague,



You might be interested in the following paper:



Can the Journal Impact Factor Be Used as a Criterion for the Selection of
Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data



Early in researchers' careers, it is difficult to assess how good their
work is or how important or influential the scholars will eventually be.
Hence, funding agencies, academic departments, and others often use the
Journal Impact Factor (JIF) of the journals in which authors have published
to assess their work and provide resources and rewards for future work. The
use of JIFs in this way has been heavily criticized, however. Using a large
data set with many thousands of publication profiles of individual
researchers, this study tests the ability of the JIF (in its normalized
variant) to identify, at the beginning of their careers, those candidates
who will be successful in the long run. Instead of bare JIFs and citation
counts, the metrics used here are standardized according to Web of Science
subject categories and publication years. The results of the study indicate
that the JIF (in its normalized variant) is able to discriminate between
researchers who later published papers with a citation impact above or
below the average of their field and publication year, not only in the
short term but also in the long term. However, the low to medium effect
sizes of the results also indicate that the JIF (in its normalized variant)
should not be used as the sole criterion for identifying later success:
other criteria, such as the novelty and significance of the specific
research, academic distinctions, and the reputation of previous
institutions, should also be considered.
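
For readers unfamiliar with the normalization mentioned above: the usual
approach is to divide a paper's citation count by the average count of
papers in the same subject category and publication year, so that 1.0 means
"field average". The Python sketch below is illustrative only; the data are
invented and the study's exact procedure may differ:

    from collections import defaultdict

    # Invented example papers; "field" stands in for a WoS subject category.
    papers = [
        {"field": "physics",   "year": 2010, "cites": 12},
        {"field": "physics",   "year": 2010, "cites": 4},
        {"field": "sociology", "year": 2010, "cites": 3},
        {"field": "sociology", "year": 2010, "cites": 1},
    ]

    # Sum and count citations per (field, year) group.
    totals = defaultdict(lambda: [0, 0])
    for p in papers:
        key = (p["field"], p["year"])
        totals[key][0] += p["cites"]
        totals[key][1] += 1

    # Normalized score: own count divided by the group mean.
    for p in papers:
        s, c = totals[(p["field"], p["year"])]
        p["normalized"] = p["cites"] / (s / c)

    print([round(p["normalized"], 2) for p in papers])  # [1.5, 0.5, 1.5, 0.5]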



Available at: https://arxiv.org/abs/1706.06515



Best,



Lutz



---------------------------------------



Dr. Dr. habil. Lutz Bornmann

Division for Science and Innovation Studies

Administrative Headquarters of the Max Planck Society

Hofgartenstr. 8

80539 Munich

Tel.: +49 89 2108 1265

Mobile: +49 170 9183667

Email: bornmann at gv.mpg.de

WWW: www.lutz-bornmann.de

ResearcherID: http://www.researcherid.com/rid/A-3926-2008

ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann




_______________________________________________
SIGMETRICS mailing list
SIGMETRICS at mail.asis.org
http://mail.asis.org/mailman/listinfo/sigmetrics


More information about the SIGMETRICS mailing list