[Sigmetrics] SIGMETRICS Digest, Vol 23, Issue 8 - New paper (Bornmann, Lutz)

Isabelle Dorsch Isabelle.Dorsch at uni-duesseldorf.de
Fri Jun 30 05:25:13 EDT 2017


Dear Lutz Bornmann, 

This is, of course, an interesting research topic. I have compared
publication lists in databases (such as WoS and Scopus) with personal
publication lists compiled by the authors themselves:

https://link.springer.com/article/10.1007/s11192-017-2416-9

Relative Visibility of Authors' Publications in Different Information
Services

Publication hit lists of authors, institutes, scientific disciplines,
etc. within scientific databases like Web of Science or Scopus are often
used as a basis for scientometric analyses and evaluations of these
authors, institutes, etc. However, such information services do not
necessarily cover all publications of an author. The purpose of this
article is to introduce a re-interpreted scientometric indicator called
"visibility": the share of an author's publications covered by a certain
information service relative to the author's entire œuvre, based upon
his or her presumably complete personal publication list. To demonstrate
how the indicator works, scientific publications (from 2001 to 2015) of
the information scientists Blaise Cronin (N = 167) and Wolfgang G. Stock
(N = 152) were collected and compared with their publication counts in
the scientific information services ACM, ECONIS, Google Scholar, IEEE
Xplore, Infodata eDepot, LISTA, Scopus, and Web of Science, as well as
the social media services Mendeley and ResearchGate. For almost all
information services, the visibility amounts to less than 50%. The
introduced indicator gives a more realistic view of an author's
visibility in databases than the absolute number of hits in those
databases that is currently used.
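
To make the indicator concrete, here is a minimal sketch in Python of
how visibility can be computed. The publication lists and the values in
them are hypothetical, and matching by identifier is a simplification;
in practice the two lists must be deduplicated and matched on metadata
such as titles:

  def visibility(personal_list, service_hits):
      """Share of the (presumably complete) personal publication
      list that is covered by one information service, in percent."""
      personal = set(personal_list)
      covered = personal & set(service_hits)
      return 100.0 * len(covered) / len(personal)

  # Hypothetical example: six publications in the personal list,
  # three of them indexed by the service -> visibility of 50%.
  personal_list = ["doi:10.1/a", "doi:10.1/b", "doi:10.1/c",
                   "doi:10.1/d", "doi:10.1/e", "doi:10.1/f"]
  service_hits = ["doi:10.1/a", "doi:10.1/c", "doi:10.1/e"]
  print(visibility(personal_list, service_hits))  # 50.0

A value of 100% would mean the service covers the author's entire
œuvre; the figures reported above mean that, for almost all services,
less than half of the œuvre is covered.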

> It would definitely be interesting to study empirically the quality of
> available publication lists. However, it is best practice in
> bibliometrics that publication lists of single researchers that are
> used for research evaluation purposes are validated by the researchers
> themselves. Thus, I expect higher-quality lists from databases for
> which I know that researchers have produced/controlled their lists.

Kind regards,
Isabelle Dorsch 

On 2017-06-29 14:56, sigmetrics-request at asist.org wrote:

> Send SIGMETRICS mailing list submissions to
> sigmetrics at mail.asis.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mail.asis.org/mailman/listinfo/sigmetrics
> or, via email, send a message with subject or body 'help' to
> sigmetrics-request at mail.asis.org
> 
> You can reach the person managing the list at
> sigmetrics-owner at mail.asis.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of SIGMETRICS digest..."
> 
> Today's Topics:
> 
> 1. Re: New paper (William Gunn)
> 2. Re: New paper (Bornmann, Lutz)
> 3. Re: New paper (William Gunn)
> 4. "Classic papers" a step further in the bibliometric
> exploitation of Google Scholar (Emilio Delgado López-Cózar)
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Mon, 26 Jun 2017 11:37:16 -0700
> From: William Gunn <william.gunn at gmail.com>
> To: "Bornmann, Lutz" <lutz.bornmann at gv.mpg.de>
> Cc: "SCISIP at listserv.nsf.gov" <SCISIP at listserv.nsf.gov>,    Richard
> Williams <Richard.A.Williams.5 at nd.edu>,    "sigmetrics at mail.asis.org"
> <sigmetrics at mail.asis.org>
> Subject: Re: [Sigmetrics] New paper
> Message-ID:
> <CAAY7FqHwDPCUCe1ruYcLnn4F5zvDcMwEMk_Eeed0ZaF_NUj=TA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Please see my comments below.
> 
> On Sun, Jun 25, 2017 at 11:37 PM, Bornmann, Lutz <lutz.bornmann at gv.mpg.de>
> wrote:
> 
>> Comment by William: But I don't see any evidence for the assertion that
>> the lists will probably be more reliable. I'm asking because it seems
>> rather counterintuitive that an automatically generated list that can be
>> edited by an author would be better than a list manually created by an
>> author. Indeed, at Mendeley we have author profiles that are manually
>> created & we're moving to automatically adding publications to them, using
>> Scopus, because the lists are often incomplete.
>> 
>> Answer: The problem is that many Scopus profiles are not edited by the
>> authors. In my opinion, it would be helpful if Elsevier provided
>> information on whether a publication list has been manually (and
>> continuously) edited or not.
> Thanks for the response, but I'm asking what evidence there is that a
> collection of manually created profiles will be more accurate than an
> automatically generated one. Errors do exist in automatically generated
> profiles, but they also exist in manually created ones. The question is
> which has more errors per profile, and at the level of the entire
> collection, which are more complete and correct. It seems like you're
> assuming that manually created ones will be both more complete and correct,
> whereas at Mendeley we have evidence that that's not a valid assumption.
> Therefore, any evidence you have to justify your assumption would be
> appreciated.
> 
> ------------------------------
> 
> Message: 2
> Date: Tue, 27 Jun 2017 04:16:18 +0000
> From: "Bornmann, Lutz" <lutz.bornmann at gv.mpg.de>
> To: William Gunn <william.gunn at gmail.com>
> Cc: "SCISIP at listserv.nsf.gov" <SCISIP at listserv.nsf.gov>,    Richard
> Williams <Richard.A.Williams.5 at nd.edu>,    "sigmetrics at mail.asis.org"
> <sigmetrics at mail.asis.org>
> Subject: Re: [Sigmetrics] New paper
> Message-ID: <261BBCFC-D542-4865-89A0-0712D008E159 at gv.mpg.de>
> Content-Type: text/plain; charset="us-ascii"
> 
> It would definitely be interesting to study empirically the quality of
> available publication lists. However, it is best practice in
> bibliometrics that publication lists of single researchers that are
> used for research evaluation purposes are validated by the researchers
> themselves. Thus, I expect higher-quality lists from databases for
> which I know that researchers have produced/controlled their lists.
> 
> Sent from my iPad
> 
> ------------------------------
> 
> Message: 3
> Date: Mon, 26 Jun 2017 21:20:10 -0700
> From: William Gunn <william.gunn at gmail.com>
> To: "Bornmann, Lutz" <lutz.bornmann at gv.mpg.de>
> Cc: "SCISIP at listserv.nsf.gov" <SCISIP at listserv.nsf.gov>,    Richard
> Williams <Richard.A.Williams.5 at nd.edu>,    "sigmetrics at mail.asis.org"
> <sigmetrics at mail.asis.org>
> Subject: Re: [Sigmetrics] New paper
> Message-ID:
> <CAAY7FqEkNo4CucDjt5Jw2Yk_9inYzNkkjgY-Fkcx58hYVRjNmA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Curious how it became a best practice without empirical evidence to
> recommend it, but nevertheless, I think you've given me a great idea
> for a research project.
> 
> William Gunn
> +1 (650) 614-1749
> http://synthesis.williamgunn.org/about/
> 
> ------------------------------
> 
> Message: 4
> Date: Thu, 29 Jun 2017 14:56:01 +0200
> From: Emilio Delgado López-Cózar <edelgado at ugr.es>
> To: <sigmetrics at mail.asis.org>
> Subject: [Sigmetrics] "Classic papers" a step further in the
> bibliometric exploitation of Google Scholar
> Message-ID: <e2c628449809c182ba9ec526597ba55d at ugr.es>
> Content-Type: text/plain; charset="utf-8"
> 
> Dear colleagues,
> 
> Google Scholar has recently launched a new product called "Classic
> Papers". This product displays the top 10 most cited English-language
> articles published in 2006 in each of 252 subject categories assigned
> by Google Scholar; in total, 2,515 items are shown.
> 
> After giving a brief overview of Eugene Garfield's contributions to
> the identification and study of the most cited scientific articles,
> manifested in the creation of his Citation Classics, the note
> addresses the main characteristics and features of this new service,
> as well as its main strengths and weaknesses. You may access it from:
> 
> https://doi.org/10.13140/RG.2.2.35729.22880/1
> 
> I hope you find it of interest.
> 
> Kind regards
> 
> Emilio Delgado López-Cózar
> 
> Facultad de Comunicación y Documentación
> Universidad de Granada
> http://scholar.google.com/citations?hl=es&user=kyTHOh0AAAAJ
> https://www.researchgate.net/profile/Emilio_Delgado_Lopez-Cozar
> http://googlescholardigest.blogspot.com.es
> 
> Dubitando ad veritatem pervenimus (Cicerón, De officiis. A. 451...)
> Contra facta non argumenta
> A fructibus eorum cognoscitis eos (San Mateo 7, 16)
> 
> ------------------------------
> 
> End of SIGMETRICS Digest, Vol 23, Issue 8
> *****************************************

-- 
Isabelle Dorsch, B.A.
Dept. of Information Science
Heinrich Heine University Düsseldorf

Bldg 24.53, Level 01, Room 87
Universitätsstraße 1
D-40225 Düsseldorf, Germany
Tel. +49 211 81-10803

