letter to Physics Today

Stevan Harnad harnad at ECS.SOTON.AC.UK
Thu Sep 2 14:05:33 EDT 2004

Some comments about the excerpts from Physics Today letters:

> http://www.physicstoday.org/vol-57/iss-9/p11.html
> >Impact, though, has nothing to do with competence. Rating the impact of a
> >journal is a different task from rating the competence of an individual.

Two ideas seem to be conflated in one here:

(i) Journal impact is definitely not the same as individual impact.
(ii) And impact is definitely not the same as competence (or quality).

But they are correlated: Journal impact is correlated with
individual competence and quality, and individual paper or author
impact is even more strongly correlated with competence and quality.

> >The effects of the competence or incompetence of individual papers average
> >out to produce a greater or lesser reputation for a given journal. As the
> >journal matures, its reputation stabilizes, and can even improve.

This is of course about the noisiness of the journal impact factor, which is the
average of its papers' impacts. Individual article impacts are of course better
predictors of individual quality and importance than the impact factor of the
journal in which they appear, that being an average over the impacts of all the
other articles the journal publishes.

But I would not put my hand in the fire that, if we looked objectively,
in the form of a bivariate regression equation, at the respective
amounts of variance in a criterion variable (such as, say, probability
of prizes, peer-rated importance, co-citation-based importance, or even
position, institution, grant revenue or salary) predicted by (1)
individual citation count and (2) journal impact factor, (2) would
contribute no independent predictive power to the equation over and
above (1). But (1) would certainly be the stronger predictor.

> >The impact of a young scientist is not a sensible concept

This is repeating the almost tautologous fact that it takes a while for
a beginning researcher's work to make an impact! This is of course true.
Fortunately, there are now early-days indicators of impact, over and above
citation counts, and these include usage impact (downloads), which Tim Brody
has shown to be correlated with citations 1-2 years down the line.
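
The download/citation correlation can be sketched as follows. This is a purely synthetic toy (not Brody's actual data or method): a latent-interest variable, assumed here for illustration, generates both early downloads and later citations, and we simply measure the Pearson correlation between the two:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic illustration: papers that attract more early downloads tend
# to accumulate more citations 1-2 years later, via a shared latent
# level of interest in the work.
latent_interest = rng.gamma(shape=2.0, size=n)
early_downloads = rng.poisson(20 * latent_interest)
later_citations = rng.poisson(3 * latent_interest)

r = np.corrcoef(early_downloads, later_citations)[0, 1]
print(f"download/citation correlation: r = {r:.2f}")
```

On real usage data the point is the same: early downloads are an observable, early proxy that co-varies with eventual citation impact.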

> >Citations can be given in a prejudicial fashion.... citation cartels

True. But fortunately, the online medium also makes it possible
(and increasingly easy) to detect such patterns of self-serving
citation-cartels, using clever algorithms that compare endogamy rates against
other factors. OA will soon mean hard times for such tricks, just as it
means hard times for plagiarism and priority fudging....
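
A minimal sketch of the endogamy idea, using a made-up heuristic on a toy citation graph (this is an illustration of the principle, not any published cartel-detection algorithm): flag a group whose within-group citation rate far exceeds the overall citation density of the corpus.

```python
# Toy citation graph: cites[p] = set of papers cited by p.
cites = {
    "A1": {"A2", "A3"}, "A2": {"A1", "A3"}, "A3": {"A1", "A2"},  # tight clique
    "B1": {"C1"}, "B2": {"A1"}, "C1": {"B2"}, "C2": set(),
}

def endogamy_rate(group):
    """Fraction of the group's outgoing citations that stay inside it."""
    out = [target for p in group for target in cites.get(p, ())]
    return sum(target in group for target in out) / len(out) if out else 0.0

papers = list(cites)
total_citations = sum(len(v) for v in cites.values())
# Overall citation density: citations per ordered pair of distinct papers.
density = total_citations / (len(papers) * (len(papers) - 1))

group = {"A1", "A2", "A3"}
rate = endogamy_rate(group)
print(f"group endogamy {rate:.2f} vs corpus density {density:.2f}")
if rate > 3 * density:  # arbitrary threshold for the toy example
    print("flag: possible citation cartel")
```

In a full OA corpus the same comparison can be run over all candidate subgraphs, which is what makes such patterns increasingly easy to detect.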

> >These factors, combined with the sheer volume of published work, can
> >prevent even first-rate work from being noticed. In such an atmosphere,
> >only written evaluations by those who have read the candidate's work can
> >be taken as formal indicators of competence.

When all else fails, one can have the work re-reviewed by hand by peers. But one
hopes that journal peer review the first time around, together with the established quality
standards of the peer-reviewed journals -- supplemented now by individual
citation counts, download counts, co-citation patterns and other new
objective scientometric measures -- will take some of the load off individual
human re-evaluation.

One hopes, though, that assessment will not be *all* just
scientometric. Algorithms will fall short of human judgment until
cognitive science has discovered the algorithms underlying human cognition!

> >But this approach runs
> >head-on against another problem... Hak: profligate
> >coauthorship. Exactly whose work is to be evaluated? Someone can easily be
> >a coauthor of a well-cited paper to which he or she has contributed little
> >insight. How do we know for sure whose impact is being factored?

Again, in a 100% OA citation-linked corpus, algorithms can easily test
whether a co-author's weight is borne out by his other work!
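
One such test can be sketched with a made-up heuristic (illustrative only, not an established method): compare the citations of an author's best co-authored paper against the median citations of their other work; a huge gap suggests the impact may be borne by the co-authors rather than the author in question.

```python
from statistics import median

# Toy records: citation counts of each author's papers.
# X has one highly cited co-authored paper and little else;
# Y has a consistently strong record.
author_papers = {
    "X": [250, 3, 1, 2],
    "Y": [250, 80, 120, 95],
}

ratios = {}
for author, counts in author_papers.items():
    hit = max(counts)
    rest = [c for c in counts if c != hit]
    # Ratio of the hit paper's citations to the author's typical paper.
    ratios[author] = hit / median(rest)
    print(f"{author}: hit/median ratio = {ratios[author]:.1f}")
```

A high ratio is of course only a prompt for human scrutiny, not a verdict; but in a citation-linked corpus such checks become trivial to run.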

Some of these views in Physics Today seem a bit old-fashioned, partly based
still on paper-based thinking, partly on the old worries about scientometrics
("articles cited a lot because they're bad"...: all these will in fact
be correctable algorithmically once the corpus is there and the good
heads get to work on designing the algorithms!).

Stevan Harnad
