The Wisdom of Citing Scientists

Bornmann, Lutz lutz.bornmann at GV.MPG.DE
Sat Aug 10 00:33:47 EDT 2013


These are important aspects, Neil: many factors influence citing behavior (see the overview in our 2008 paper "What do citation counts measure?"). One should be aware of these aspects when doing a citation analysis.

However, the most important point in this new paper is that citations (in the form of cited references) say more about the citing unit than about the cited unit. To increase the meaningfulness of citations, they should be analyzed as cited references, in order to draw conclusions about the citing publication set.

The fruitfulness of cited reference analysis is also visible in its successful use in citing-side normalization (see http://arxiv.org/abs/1301.4941). Here, a citation is normalized by the cited references of the citing paper (or by the mean number of cited references in the journal in which the citing paper was published). In other words, the different citing behavior across fields is captured by the number of cited references.
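One common variant of this idea can be sketched as fractional citation counting: each citation a paper receives is weighted by the reciprocal of the number of cited references in the citing paper, so citations from reference-dense fields count less per citation. A minimal illustration in Python, with hypothetical numbers (the function name and data are mine, not taken from the paper, which also discusses journal-mean variants):

```python
def fractional_citation_score(citing_reference_counts):
    """Citing-side normalized score for one cited paper.

    citing_reference_counts: number of cited references in each
    paper that cites the paper under evaluation. Each citation is
    weighted 1/r, where r is the citing paper's reference count.
    """
    return sum(1.0 / r for r in citing_reference_counts if r > 0)

# Hypothetical example: a paper cited by three papers that contain
# 10, 20, and 50 cited references, respectively.
score = fractional_citation_score([10, 20, 50])
print(score)  # 0.1 + 0.05 + 0.02
```

Under this weighting, a citation from a paper with a short reference list contributes more than one from a paper with a long reference list, which is exactly how the field-dependent citing behavior is absorbed.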

Lutz

Sent from my iPad

On 09.08.2013, at 18:45, "Smalheiser, Neil" <Nsmalheiser at PSYCH.UIC.EDU> wrote:

Since Katy covered one aspect of this issue, let me raise a complementary aspect that I have not seen discussed yet in this forum.
When people DO cite references in a paper, they may do so for very different reasons, each with its own rationale and pattern of citing.

1.       Ideally, in my opinion, an author should accurately cite the previous works that influenced them in the research they are reporting. A research paper tells a story, and it is important to know which papers they read, and when, and how they were influenced. So if they were unaware of some relevant research at the time, it is not important (and would even be intellectually misleading) to cite it!

2.       Another reason is that authors omit citations on purpose: they wish to make their own contribution seem new and fresh, and even if they were aware of some prior relevant work, they may find an excuse not to cite it [e.g., it was done in Drosophila but my study is in rats].

3.       More often, authors attempt to identify all relevant prior research, in a prospective attempt to satisfy reviewers who are likely to give them a hard time if they don't. Some authors even do this out of scholarliness, though that is not a particularly valued attribute in experimental science. As review articles appear on a given topic, it often becomes acceptable to simply cite one or two reviews, which hides the impact of the primary papers (except for those most closely relevant to the present article, regardless of their impact on the field at large). This also means that papers will preferentially cite the most similar prior papers.

4.       Even more often, authors go out of their way to cite papers by potential reviewers or editorial board members of the journal that is considering the paper, or by people likely to be reviewing their grants.

5.       A subtle variation of this is that an author will want to cite papers that appeared in prestigious journals, and avoid papers that were published in obscure or questionable places, to make their own paper look classier and more likely to be reviewed favorably.

6.       Some papers, particularly methods papers or famous papers, are almost pop references that create a bond between author and reader. Citing the Watson-Crick double-helix paper (or the Mullis PCR method paper) is not just citing that paper; it is really a nod to a host of related connotations and historical associations. These papers are highly cited because they are celebrities (famous for being famous), which does reflect impact, but of a different sort.
So counting citations to measure impact is like characterizing a person's health by heart rate: it means something, and it is certainly important, but you need to know a lot more to interpret it properly.

Neil
From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Katy Borner
Sent: Friday, August 09, 2013 8:29 AM
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: Re: [SIGMETRICS] The Wisdom of Citing Scientists

Good discussion. Quick comment:

Work by Bollen et al. shows that science maps generated from download (clickstream) data have a substantially enlarged medical area. Medical papers, e.g., those freely available via Medline, are widely downloaded, read, and used by practitioners and doctors interested in improving health and saving lives. However, these practitioners might not necessarily produce papers with cited references of their own.

Ideally, 'research evaluation' should aim to capture output and outcomes.

Many of us spend a substantial amount of our time training others, developing educational materials, working in administration, or improving legal regulations. Research networking systems such as VIVO and others (see http://nrn.cns.iu.edu) provide access to more holistic data (papers, grants, courses; some systems are connected to even more detailed annual faculty report data) on scholars' roles in the S&T system--as researchers, mentors, and administrators.
k

  *   http://scimaps.org/maps/map/a_clickstream_map_of_83/
  *   Bollen, Johan, Lyudmila Balakireva, Luís Bettencourt, Ryan Chute, Aric Hagberg, Marko A. Rodriguez, and Herbert Van de Sompel. 2009. “Clickstream Data Yields High-Resolution Maps of Science.” PLoS ONE 4 (3): 1-11.

On 8/9/2013 3:22 AM, Bornmann, Lutz wrote:
The Wisdom of Citing Scientists
Lutz Bornmann, Werner Marx
(Submitted on 7 Aug 2013)

This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessments of many, which is why they can be ascribed a certain amount of accuracy. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say (strictly speaking) that it was obviously of little use for further research. Using a comparison with 'like' data, we try to show that cited reference analysis allows a more meaningful analysis of bibliometric data than times-cited analysis.

URL: http://arxiv.org/abs/1308.1554

---------------------------------------

Dr. Dr. habil. Lutz Bornmann
Division for Science and Innovation Studies
Administrative Headquarters of the Max Planck Society
Hofgartenstr. 8
80539 Munich
Tel.: +49 89 2108 1265
Mobil: +49 170 9183667
Email: bornmann at gv.mpg.de
WWW: www.lutz-bornmann.de
ResearcherID: http://www.researcherid.com/rid/A-3926-2008




--

Katy Borner

Victor H. Yngve Professor of Information Science

Director, CI for Network Science Center, http://cns.iu.edu

Curator, Mapping Science exhibit, http://scimaps.org



ILS, School of Informatics and Computing, Indiana University

Wells Library 021, 1320 E. Tenth Street, Bloomington, IN 47405, USA

Phone: (812) 855-3256  Fax: -6166