Re: [SIGMETRICS] New Papers

Bornmann, Lutz Lutz.Bornmann at GV.MPG.DE
Mon Oct 18 14:25:03 EDT 2010


Dear Neil,
 
> In a court of law, that would be called a "leading question", because it
> assumes the answer. Who decides the quality of the submitted manuscripts?
> The reviewers, at least in this context (though ultimately it is the readers
> and the scientific community who decide). If there were an objective measure
> of quality, we could do without peer review entirely.
 
It is not necessary to decide the quality of the submitted manuscripts. We
controlled for manuscript quality by using a so-called within-manuscript
analysis: we included in the analyses only those manuscripts that were rated
by a pair consisting of an author-suggested reviewer (Ra) and an
editor-suggested reviewer (Re) (n=135 manuscripts). For 22% of the
manuscripts, the final publication recommendation made by Ra was more
positive than that made by Re; the recommendation by Re was more positive
than that by Ra for only 11% of the manuscripts (for the remaining 67% there
was no difference between the recommendations of the two reviewer groups).
Hence, for this group of manuscripts, Ra rated more favourably than Re more
often than vice versa. The results of the marginal homogeneity test show that
the ratings by Ra and Re differed statistically significantly.
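 
To make the within-manuscript comparison concrete, here is a minimal sketch
in Python (using numpy and statsmodels) of a marginal homogeneity test on
paired recommendations. The 3x3 table is invented for illustration: its
concordant and discordant totals merely reproduce the 22% / 11% / 67% split
described above, and the cell-level counts are not our data.

import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

# Hypothetical cross-classification of final publication recommendations for
# the same 135 manuscripts (rows: editor-suggested reviewer Re, columns:
# author-suggested reviewer Ra), with categories ordered from least to most
# positive, e.g. reject / major revision / accept. Counts are invented.
table = np.array([
    [20,  8,  7],
    [ 5, 40, 15],
    [ 3,  7, 30],
])

# Stuart-Maxwell test of marginal homogeneity: do Ra and Re use the
# recommendation categories with the same marginal frequencies?
result = SquareTable(table).homogeneity()
print(result.statistic, result.pvalue, result.df)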
 
We were further able to show, for a smaller number of manuscripts, that given
the same ratings on all evaluation criteria (scientific quality, scientific
significance and presentation quality), Ra tend to make a more positive final
publication recommendation than Re.
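 
Again purely as an illustrative sketch (with invented values and hypothetical
column names, not our data set), the restriction to reviewer pairs with
identical criterion ratings, followed by a simple sign test on the final
recommendations, could look like this in Python:

import pandas as pd
from scipy.stats import binomtest

# Hypothetical per-manuscript data: one row per manuscript rated by both an
# author-suggested (Ra) and an editor-suggested (Re) reviewer. Column names
# and all values are invented; ratings and recommendations are coded on an
# ordinal scale (higher = more positive).
pairs = pd.DataFrame({
    "quality_ra":      [3, 2, 3, 1, 2],
    "quality_re":      [3, 3, 3, 1, 2],
    "significance_ra": [2, 3, 3, 2, 1],
    "significance_re": [2, 3, 3, 2, 1],
    "presentation_ra": [3, 3, 2, 2, 2],
    "presentation_re": [3, 3, 2, 2, 2],
    "final_ra":        [3, 3, 3, 2, 2],
    "final_re":        [2, 3, 2, 2, 1],
})

# Keep only manuscripts on which Ra and Re gave identical ratings on every
# evaluation criterion.
same_criteria = pairs[
    (pairs.quality_ra == pairs.quality_re)
    & (pairs.significance_ra == pairs.significance_re)
    & (pairs.presentation_ra == pairs.presentation_re)
]

# Among these, test whether Ra's final recommendation is the more positive
# one more often than Re's (a sign test on the discordant pairs).
ra_higher = int((same_criteria.final_ra > same_criteria.final_re).sum())
re_higher = int((same_criteria.final_ra < same_criteria.final_re).sum())
print(binomtest(ra_higher, ra_higher + re_higher, 0.5))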
 
Best,
 
Lutz
 

________________________________

From: ASIS&T Special Interest Group on Metrics on behalf of Smalheiser, Neil
Sent: Mon 18.10.2010 17:22
To: SIGMETRICS at listserv.utk.edu
Subject: Re: [SIGMETRICS] New Papers




> I think the most important question here is: Is it fair that one group of
> reviewers rates manuscripts more favourably than another group -
> independently of the quality of the submitted manuscripts?

Dear Lutz,

In a court of law, that would be called a "leading question", because it
assumes the answer. Who decides the quality of the submitted manuscripts? The
reviewers, at least in this context (though ultimately it is the readers and
the scientific community who decide). If there were an objective measure of
quality, we could do without peer review entirely.

This also assumes that authors tend to recommend their personal friends who
have conflicts of interest and who are less qualified to give objective
opinions than the ones chosen by editors. In my own experience as an editor,
authors tend to recommend leaders in their field as potential reviewers,
often the same people I would have chosen as reviewers; and the minority of
authors who recommend outliers are quickly flagged and overridden.

Moreover, there is also the hidden assumption that authors act to maximize
the chances that their manuscripts will be accepted. I have to tell you that
my latest research paper (on endogenous siRNAs in brain) is on a
controversial subject, so I deliberately sent it to the most mainstream
journal in the field where it would be scrutinized by the most skeptical
molecular biologists -- rather than sending it to some neuroscience journal
where it might be accepted more readily. Why? Because I know that molecular
biologists won't believe something they read in a neuroscience journal;
unless they have blessed it themselves, it does not exist. I did submit a
list of 5 potential reviewers, who I knew were familiar with neuroscience --
whereas most of the journal's reviewers deal with yeast or C. elegans. This
probably did help my paper win acceptance, because reviewers who are familiar
with the specific topic are more likely to understand the novelty and
innovation better than others.

In terms of fairness, the answer to your question is a strong YES. If I
submit a manuscript, and it has errors or embarrassing flaws, I would expect
my friends to be especially alert and protective on my behalf! Conversely, I
can give you many examples where leading journals have relied on certain
prominent reviewers who have prevented publication of innovative articles and
thereby have ruined careers [one cannot be funded if one cannot publish the
findings] and slowed down entire research fields. These reviewers think that
they are being objective and simply insisting on quality. Allowing
author-suggested reviewers is one of the few ways around this roadblock.
Indeed, historically, one of the main reasons that new journals arise is
because a certain type or line of research is not taken seriously by the
established journals. Just this year, the ACM (computer science assn.) has
established their own intl. conference and journal, because the existing
society (AMIA) does not recogn!
 ize or appreciate computer science-oriented submissions in their own
conference or journal. I don't think AMIA has any higher quality standards
than ACM. AMIA reviewers simply do not resonate with the types of questions
and approaches that CS authors have -- yet I bet THEY feel that they are
acting as quality gatekeepers.

I think there is a larger question here, which is whether one can usefully
analyze scientific behavior strictly from the outside, in a black-box
input-output manner, without modeling the internal machinery. YES, I do
support that perspective (and if my colleague Vetle Torvik is listening, this
is similar to the data-mining analysis of collaboration networks that he is
initiating). However, it is only one facet of an overall analysis that also
has to understand the behavior of scientists in terms of their own stated
practice and reasons.

Neil Smalheiser


