"Bibliometric Distortion": The Babblarazzi Are At It Again...

Jonathan Adams Jonathan.adams at EVIDENCE.CO.UK
Fri Nov 16 06:29:18 EST 2007


Stevan,
Following on from your comments at the end of last week (below), I agree
that it is tentatively possible to pick out 'over-production' of
poor-quality papers (although I am less optimistic about the
comprehensive analytical detection of publication abuse that you foresee).
In contrast to over-production, do you think that an assessment system
that looks at total output would run the risk of suppressing outputs
that might be predicted to be cited less frequently?
UK research assessment currently looks at four outputs per researcher,
usually selected by the individual as their best research.  The proposal
is that the post-2008 metrics assessment would cover all output,
creating a profile and then deriving a metric from that profile.
Is there a risk that researchers, realising that outputs aimed at
practitioners often appear in relatively lower-impact journals, would
tend to reduce the number of papers aimed at transferring knowledge
from the research base and concentrate instead on outputs targeted at
high-impact journals in the research-base core?  By doing so they would
expect to avoid diluting their citation average.
The net effect could be to reduce the UK's volume of less frequently
cited papers, but also to reduce the flow of information to the people
who turn research into practice.
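
As a toy illustration of the dilution at stake (all numbers below are
hypothetical), consider a researcher with four well-cited core papers
who adds four practitioner-aimed papers:

    # Hypothetical citation counts, purely for illustration.
    core_papers = [25, 18, 30, 22]        # research-core outputs
    practitioner_papers = [3, 1, 2, 4]    # practitioner-aimed outputs

    core_avg = sum(core_papers) / len(core_papers)
    all_papers = core_papers + practitioner_papers
    combined_avg = sum(all_papers) / len(all_papers)

    print(f"Core-only average:  {core_avg:.1f}")      # 23.8
    print(f"All-output average: {combined_avg:.1f}")  # 13.1

Under a best-four selection the practitioner papers cost nothing; under
an all-output average they nearly halve the metric, which is precisely
the incentive to stop writing them.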

Jonathan Adams
 
Director, Evidence Ltd
+44 113 384 5680

     Comment on: "Bibliometrics could distort research assessment"
     Guardian Education, Friday 9 November 2007
     http://education.guardian.co.uk/RAE/story/0,,2207678,00.html

Yes, any system (including democracy, health care, welfare, taxation,
market economics, justice, education and the Internet) can be abused.
But abuses can be detected, exposed and punished, and this is especially
true in the case of scholarly/scientific research, where "peer review"
does not stop with publication, but continues for as long as research
findings are read and used. And it's truer still if it is all online and
openly accessible.

The researcher who thinks his research impact can be spuriously enhanced
by producing many small, "salami-sliced" publications instead of fewer
substantial ones will stand out against peers who publish fewer, more
substantial papers. Paper lengths and numbers are metrics too, hence
they too can be part of the metric equation. And if most or all peers do
salami-slicing, then it becomes a scale factor that can be factored out
(and the metric equation and its payoffs can be adjusted to discourage
it).
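
A minimal sketch of what such an adjustment might look like, assuming
per-author output counts and citation totals are available (the names,
numbers and thresholds are hypothetical, not a proposed RAE formula):

    from statistics import median

    # Hypothetical records: (author, papers published, total citations)
    records = [
        ("A", 12, 240),  # fewer, substantial papers
        ("B", 48, 260),  # many thin papers: similar total, diluted per paper
        ("C", 15, 180),
    ]

    field_median = median(n for _, n, _ in records)

    for author, n_papers, total_cites in records:
        per_paper = total_cites / n_papers      # salami-slicing drags this down
        volume_ratio = n_papers / field_median  # output volume is a metric too
        flag = "inspect" if volume_ratio > 2 and per_paper < 10 else "ok"
        print(f"{author}: {per_paper:.1f} cites/paper, "
              f"{volume_ratio:.1f}x median output -> {flag}")

Dividing by the field median is the "scale factor" move: if most or all
peers slice, everyone's ratio sits near 1 and the payoff of slicing
disappears from the comparison.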

Citations inflated by self-citations or co-author group citations can
also be detected and weighted accordingly. Robotically inflated download
metrics are also detectable, nameable and shameable. Plagiarism is
detectable too, when all full-text content is accessible online.
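
For the self-citation case, a minimal sketch of detection and
down-weighting, assuming each paper carries its author list and
reference list (the corpus and the 0.2 weight are invented for
illustration):

    # Hypothetical corpus: paper id -> (set of authors, ids of papers cited)
    papers = {
        "p1": ({"smith", "jones"}, []),
        "p2": ({"smith"},          ["p1"]),  # shares an author with p1
        "p3": ({"lee"},            ["p1"]),  # independent citation
    }

    def weighted_citations(target, corpus, self_weight=0.2):
        """Count citations to target, down-weighting citing papers
        that share any author with the cited paper."""
        target_authors = corpus[target][0]
        total = 0.0
        for authors, refs in corpus.values():
            if target in refs:
                total += self_weight if authors & target_authors else 1.0
        return total

    print(weighted_citations("p1", papers))  # 1.2, versus a raw count of 2

The same overlap test extends to co-author group citations by checking
the citing authors against the cited paper's wider collaboration
network rather than its author list alone.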

The important thing is to get all these publications as well as their
metrics out in the open for scrutiny by making them Open Access. Then
peer and public scrutiny -- plus the analytic power of the algorithms
and the Internet -- can collaborate to keep them honest.


