Papers

Loet Leydesdorff loet at LEYDESDORFF.NET
Thu Apr 17 02:14:30 EDT 2014


Dear Jesper, 

 

Is there anything new to add to this debate? We thought that referencing the
argument would be sufficient in this context. 

 

At the time, we responded more fully in Bornmann & Leydesdorff (2013) and
Leydesdorff (2013), and we added a power analysis (Cohen, 1988) to the
statistical test of the Leiden Ranking 2011, available at
http://www.leydesdorff.net/leiden11 (Leydesdorff & Bornmann, 2012), in
response to your contributions (Schneider, 2012, 2013).
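
For concreteness, a minimal sketch in Python of the kind of power
calculation this involves (my illustration here, not the code behind the
Leiden tool; it assumes the statsmodels package, and the shares and sample
sizes are made up):

    # Power of a two-sided test comparing two institutions' shares of
    # top-10% papers, with Cohen's h as the effect size (Cohen, 1988).
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    p1, p2 = 0.12, 0.10   # hypothetical top-10% shares
    n = 1000              # hypothetical number of papers per institution

    h = proportion_effectsize(p1, p2)   # Cohen's h (arcsine transform)
    power = NormalIndPower().power(effect_size=h, nobs1=n, alpha=0.05,
                                   ratio=1.0, alternative='two-sided')
    print(h, power)   # h ~ 0.06 (well below "small"), power ~ 0.30

With numbers like these the test is badly underpowered: a real two-point
difference would be detected less than a third of the time, which is
exactly why effect size and power deserve to be reported alongside the
p-value.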

 

In my opinion, the choice among significance testing, confidence intervals,
and/or power analysis should be decided by how well each serves the
research questions at hand. Otherwise, the debate tends to remain
meta-theoretical and one risks becoming repetitive.

 

Best,

Loet

 

 

References

Bornmann, L., & Leydesdorff, L. (2013). Statistical Tests and Research
Assessments: A comment on Schneider (2012). Journal of the American Society
for Information Science and Technology, 64(6), 1306-1308.

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences
(2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Leydesdorff, L. (2013). Does the specification of uncertainty hurt the
progress of scientometrics? Journal of Informetrics, 7(2), 292-293. 

Leydesdorff, L., & Bornmann, L. (2012). Testing Differences Statistically
with the Leiden Ranking. Scientometrics, 92(3), 781-783.

Schneider, J. W. (2012). Testing University Rankings Statistically: Why this
Perhaps is not such a Good Idea after All. Some Reflections on Statistical
Power, Effect Size, Random Sampling and Imaginary Populations. In É.
Archambault, Y. Gingras & V. Larivière (Eds.), Science & Technology
Indicators (STI) 2012 (Vol. 2, pp. 719-732). Montreal: Université du Québec
à Montréal.

Schneider, J. W. (2013). Caveats for using statistical significance tests in
research assessments. Journal of Informetrics, 7(1), 50-62.

 

-----Original Message-----
From: ASIS&T Special Interest Group on Metrics
[mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jesper Wiborg Schneider
Sent: Wednesday, April 16, 2014 8:55 PM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] Papers

 

http://web.utk.edu/~gwhitney/sigmetrics.html

 

Dear Lutz,

 

Interesting paper, the latter one, and interesting to see how the 'debate'
in our field is reflected in the references you and your coauthor give:

 

"In bibliometrics, it has been also recommended to go beyond statistical
significance testing (Bornmann & Leydesdorff, 2013; Schneider, 2012)."

 

I guess you can call this quote an understatement, at least from my
perspective. I do not think anyone recommended going 'beyond statistical
significance testing' in scientometrics/bibliometrics before I criticized
the current practice in 'Caveats for using statistical significance tests
in research assessments', first published on arXiv in 2011
(http://arxiv.org/abs/1112.2516) and later, in 2013, in the Journal of
Informetrics.

In 2012, at the STI conference, I extended the critique in the paper you
mention in the quote, discussing one of your papers on university rankings
and exemplifying the use of effect sizes in relation to such rankings - in
fact, the use of Cohen's h in relation to the proportion of top 10 percent
highly cited papers - basically the same example you bring forward in this
paper.
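
For readers following along, Cohen's h is simply the difference between
arcsine-transformed proportions; a minimal illustration (the shares below
are hypothetical, not the figures from the STI paper):

    # Cohen's h for two proportions of top-10% highly cited papers:
    # h = 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2))  (Cohen, 1988).
    import math

    def cohens_h(p1, p2):
        return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

    # Hypothetical shares; Cohen's benchmarks: 0.2 small, 0.5 medium,
    # 0.8 large.
    print(cohens_h(0.15, 0.10))   # ~0.15, below even a "small" effect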

Only then - as far as I can follow the ever faster publishing chronology -
did you and other colleagues react to some of my criticisms, including an
endorsement of the use of effect sizes and confidence intervals that had
not been visible until then.

Now I do not hunger for more references or the like, but when we in the
community have a debate or thread, I would appreciate that it be outlined
thoroughly and honestly in the review section - that is the purpose of a
review. This case is not the first one, and it gives one the impression
that our literature is not read ... or worse ...? I am not sure whether
this paper is under review, but I guess my writing this mail is the risk
you run when announcing papers on this list.

 

Kind regards,
Jesper

 

 

 

________________________________________

From: ASIS&T Special Interest Group on Metrics [SIGMETRICS at LISTSERV.UTK.EDU]
on behalf of Bornmann, Lutz [lutz.bornmann at GV.MPG.DE]

Sent: 16 April 2014 15:53

To: SIGMETRICS at LISTSERV.UTK.EDU

Subject: [SIGMETRICS] Papers

 

BRICS countries and scientific excellence: A bibliometric analysis of most
frequently-cited papers

Lutz Bornmann, Caroline Wagner, Loet Leydesdorff

 

(Submitted on 14 Apr 2014)

 

The BRICS countries (Brazil, Russia, India, China, and South Africa) are
noted for their increasing participation in science and technology. The
governments of these countries have been boosting their investments in
research and development to become part of the group of nations doing
research at a world-class level. This study investigates the development of
the BRICS countries in the domain of top-cited papers (top 10% and 1% most
frequently cited papers) between 1990 and 2010. To assess the extent to
which these countries have become important players at the top level, we
compare the BRICS countries with the top-performing countries worldwide. As
the analyses of the (annual) growth rates show, with the exception of
Russia, the BRICS countries have increased their output in terms of most
frequently-cited papers at a higher rate than the top-cited countries
worldwide. In a further step of analysis for this study, we generate
co-authorship networks among authors of highly cited papers for four time
points to view changes in BRICS participation (1995, 2000, 2005, and 2010).
Here, the results show that all BRICS countries succeeded in becoming part
of this network, whereby the Chinese collaboration activities focus on the
USA.
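
The abstract does not spell out the growth-rate formula, but an annual
(compound) growth rate over a period is conventionally computed as in this
sketch (the counts are made up, not the paper's data):

    # Compound annual growth rate of a country's output of top-cited
    # papers -- illustrative numbers only.
    def annual_growth_rate(start_count, end_count, years):
        return (end_count / start_count) ** (1 / years) - 1

    # Hypothetical: 500 top-10% papers in 1990, 4,000 in 2010.
    print(annual_growth_rate(500, 4000, 20))   # ~0.11, about 11% per year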

 

Available at: http://arxiv.org/abs/1404.3721

 

 

The substantive and practical significance of citation impact differences
between institutions: Guidelines for the analysis of percentiles using
effect sizes and confidence intervals

Richard Williams, Lutz Bornmann

 

(Submitted on 12 Apr 2014)

 

In our chapter we address the statistical analysis of percentiles: How
should the citation impact of institutions be compared? In educational and
psychological testing, percentiles are already used widely as a standard to
evaluate an individual's test scores - intelligence tests for example - by
comparing them with the percentiles of a calibrated sample. Percentiles, or
percentile rank classes, are also a very suitable method for bibliometrics
to normalize citations of publications in terms of the subject category and
the publication year and, unlike the mean-based indicators (the relative
citation rates), percentiles are scarcely affected by skewed distributions
of citations. The percentile of a certain publication provides information
about the citation impact this publication has achieved in comparison to
other similar publications in the same subject category and publication
year. Analyses of percentiles, however, have not always been presented in
the most effective and meaningful way. New APA guidelines (American
Psychological Association, 2010) suggest a lesser emphasis on significance
tests and a greater emphasis on the substantive and practical significance
of findings. Drawing on work by Cumming (2012) we show how examinations of
effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to
a clear understanding of citation impact differences.
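
As a rough illustration of the kind of analysis the chapter advocates (this
is not the authors' code; the percentile ranks below are simulated):

    # Cohen's d and a 95% confidence interval for the difference in mean
    # percentile ranks of two institutions -- simulated data only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    a = rng.uniform(0, 100, size=250)    # percentile ranks, institution A
    b = rng.uniform(10, 100, size=250)   # institution B, shifted upward

    na, nb = len(a), len(b)
    diff = b.mean() - a.mean()
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1)
                         + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    d = diff / pooled_sd                 # Cohen's d for the mean difference

    se = pooled_sd * np.sqrt(1 / na + 1 / nb)
    t = stats.t.ppf(0.975, df=na + nb - 2)
    print(d, (diff - t * se, diff + t * se))   # effect size and 95% CI

With a few hundred papers per institution the t-based interval is a
reasonable approximation even though percentile ranks are not normally
distributed.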

 

Available at: http://arxiv.org/abs/1404.3720

 

---------------------------------------

 

Dr. Dr. habil. Lutz Bornmann

Division for Science and Innovation Studies
Administrative Headquarters of the Max Planck Society
Hofgartenstr. 8

80539 Munich

Tel.: +49 89 2108 1265

Mobile: +49 170 9183667

Email: bornmann at gv.mpg.de

WWW: http://www.lutz-bornmann.de/

ResearcherID: http://www.researcherid.com/rid/A-3926-2008

ResearchGate: http://www.researchgate.net/profile/Lutz_Bornmann
