PLOS ONE Output Falls Following Impact Factor Decline

Colin Paul Gloster Colin_Paul_Gloster at ACM.ORG
Fri May 30 12:59:01 EDT 2014



On March 7th, 2014, Philip Davis sent:
|-------------------------------------------------------------------------------------------------------------------------|
|"Can the recent drop in February PLOS ONE publication figures be explained by a decline in their Impact Factor last June?|
|                                                                                                                         |
|see:                                                                                                                     |
|PLOS ONE Output Falls Following Impact Factor Decline                                                                    |
| http://wp.me/pcvbl-9sV "                                                                                                |
|-------------------------------------------------------------------------------------------------------------------------|

It was claimed in John Araujo, Neelam D. Ghiya, Angela Calugar & Tanja Popovic, "Analysis of Three Factors Possibly Influencing the Outcome of a Science Review Process", "Accountability in Research: Policies and Quality Assurance", Volume 21, Issue 4, 2014, Pages 241-264,
WWW.TandFonline.com/doi/full/10.1080/08989621.2013.848798#.U4i093ZFkwo :

"We analyzed a process for the annual selection of a Federal agency's best peer-reviewed, scientific papers with the goal to develop a relatively simple method that would use publicly available data to assess the presence of factors, other than scientific excellence and merit, in an award-making process that is to recognize scientific excellence and merit. Our specific goals were (a) to determine if journal, disease category, or major paper topics affected the scientific-review outcome by (b) developing design and analytic approaches to detect potential bias in the scientific review process. While indeed journal, disease category, and major paper topics were unrelated to winning, our methodology was sensitive enough to detect differences between the ranks of journals for winners and non-winners.

[. . .]

The motivation for this study arose from the review process associated with the most prestigious science award available to scientists at the U.S. Centers for Disease Control and Prevention (CDC)--The Charles C. Shepard Science Award. This science award seeks to recognize the premier science conducted by CDC scientists, or in collaboration with scientists around the world, and the award attests to scientific excellence via the published work of CDC's scientists. The award has been made annually since its inception in 1986. [. . .]

[. . .]

CDC maintains a cumulative, year-by-year electronic bibliography of finalists’ papers (including winners’ papers, which are a subset of the finalists’ papers) for the Charles C. Shepard Science Award (SSA) (Office of the Associate Director for Science, 2011). [. . .]

[. . .]

Journal Impact Factor (JIF) [. . .]

[. . .]

We investigated the possible influence of various factors upon winner selection, as we noted above. CDC science regularly appears in journals with high visibility, such as the New England Journal of Medicine (JIF = 50.017), Journal of the American Medical Association (JIF = 31.718), Lancet (JIF = 28.409), and Science (JIF = 28.103). However, we did not observe a bias towards winner selection based on the journal title. Our analyses suggest that winning papers were not merely a subset, by journal title, of the overall scientific community of SSA papers because the frequency of appearance by journal title was different between finalists and winners, and winners published more frequently in lower ranked journals. Our other measure of journal importance, JIF, did not reveal a preference for papers appearing in high impact journals. This piece of evidence indicates that members of the three SSA winner selection committees have, year-after-year, executed their responsibility for identifying award winning work based on scientific merit and impact rather than on the journal “popularity” or “prestige” in which papers have been published, thus reinforcing the thinking that excellence should be worthy of inherent recognition, independent of the vehicle making it visible to a community of accomplished scientists.

[. . .]"

Regards,
Paul Colin Gloster

