[Sigmetrics] Quality versus quantity in scientific impact

Fil Menczer filmenczer at gmail.com
Wed Sep 2 10:15:32 EDT 2015


[Apologies for cross-posting]

The sigmetrics community may be interested in our new paper on
scholarly quality versus quantity. Is a scientist's apparent impact, as
measured by citation metrics, due to high quality or simply to publishing
in large quantity (minimum publication units, salami publishing,
self-plagiarism, etc.)? What level of impact is needed to establish
excellence, given one's number of publications?

Quality versus quantity in scientific impact
by Jasleen Kaur, Emilio Ferrara, Filippo Menczer, Alessandro Flammini,
Filippo Radicchi
Journal of Informetrics 9(4): 800–808, October 2015.
doi:10.1016/j.joi.2015.07.008
Free access at http://authors.elsevier.com/a/1Re9e6EAijZsXu

Abstract: Citation metrics are becoming pervasive in the quantitative
evaluation of scholars, journals, and institutions. Hiring, promotion,
and funding decisions increasingly rely on a variety of impact metrics
that cannot disentangle quality from quantity of scientific output,
and are biased by factors such as discipline and academic age. Biases
affecting the evaluation of single papers are compounded when one
aggregates citation-based metrics across an entire publication record.
It is not trivial to compare the quality of two scholars who, during
their careers, have published at different rates, in different
disciplines, and in different periods of time. Here we evaluate a
method based on the generation of a statistical baseline specifically
tailored to the academic profile of each researcher. We demonstrate
the effectiveness of the approach in decoupling the roles of quantity
and quality of publications to explain how a certain level of impact
is achieved. The method can be extended to simultaneously suppress any
source of bias. As an illustration, we use it to capture the quality
of the work of Nobel laureates irrespective of number of publications,
academic age, and discipline, even when traditional metrics indicate
low impact in absolute terms. The procedure is flexible enough to
allow for the evaluation of, and fair comparison among, arbitrary
collections of papers – scholar publication records, journals, and
institutions; in fact, it extends a similar technique that was
previously applied to the ranking of research units and countries in
specific disciplines (Crespo, Ortuño-Ortín, & Ruiz-Castillo, 2012). We
further apply the methodology to almost a million scholars and over
six thousand journals to measure the impact that cannot be explained
by the volume of publications alone.
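
For those curious how such a baseline might work in practice, here is a
minimal Python sketch of the general idea only, not the exact procedure
from the paper: the reference pool, function name, and sampling scheme
are illustrative assumptions.

    import random

    def baseline_percentile(observed_citations, reference_pool, n_pubs,
                            n_samples=10000, seed=0):
        # Illustrative statistical-baseline comparison (not the paper's
        # actual method): repeatedly sample n_pubs citation counts from a
        # discipline/age-matched reference pool, then report where the
        # scholar's observed total citations fall in that null distribution.
        rng = random.Random(seed)
        observed_total = sum(observed_citations)
        below = 0
        for _ in range(n_samples):
            sampled_total = sum(rng.choice(reference_pool)
                                for _ in range(n_pubs))
            if sampled_total < observed_total:
                below += 1
        # Fraction of baseline samples the scholar's record exceeds:
        # high values suggest impact beyond what volume alone explains.
        return below / n_samples

    # Hypothetical usage: a scholar with 5 papers vs. an assumed matched pool.
    pool = [0, 1, 1, 2, 3, 3, 5, 8, 13, 40]
    print(baseline_percentile([12, 30, 4, 7, 55], pool, n_pubs=5))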

Filippo Menczer
Professor of Informatics and Computer Science
Director, Center for Complex Networks and Systems Research
Indiana University, Bloomington
http://cnets.indiana.edu/people/filippo-menczer


