[Sigmetrics] Bibliometrics in Google Scholar Citations, ResearcherID, ResearchGate, Mendeley, Twitter: The counting house, measuring those who count
Emilio Delgado López-Cózar
edelgado at ugr.es
Thu Jan 21 05:02:52 EST 2016
It is a pleasure for me to announce the publication of a new working
paper: “The counting house, measuring those who count: Presence of
Bibliometrics, Scientometrics, Informetrics, Webometrics and Altmetrics
in GSC (Google Scholar Citations), ResearcherID, ResearchGate, Mendeley,
& Twitter”
You may access it from the following link:
http://doi.org/10.13140/RG.2.1.4814.4402
This working paper offers a comparative analysis of 31 indicators
obtained from the GSC, ResearcherID, ResearchGate, Mendeley, & Twitter
profiles of 814 researchers working in Bibliometrics.
Of all the results discussed in the paper, I'd like to highlight the
following:
- We found that it is feasible to obtain an accurate representation of
the current state of the Bibliometrics community (its most influential
authors, documents, journals, and publishers) using data from GSC.
- GSC is the most widely used platform, followed at a distance by
ResearchGate (543 authors), which is currently growing at a vertiginous
pace. The number of Mendeley profiles is high, although this figure by
itself is misleading, since 17.1% of the profiles are essentially empty.
ResearcherID is affected by the same issue (34.45% of its profiles are
empty), as is Twitter (47% of the 240 authors with a Twitter profile
have published fewer than 100 tweets).
- All metrics provided by Google Scholar correlate strongly with one
another. GSC is also the platform whose indicators correlate most highly
with those from the other platforms (especially total citations and the
h-index), except for indicators related to online social networking
(most Twitter indicators, and the follower and following counts from
ResearchGate and Mendeley).
- The total number of citations received by the authors in our sample
according to GSC is on average 4.5 times higher than the number of
citations they receive according to ResearcherID, and 2.6 times higher
than the same figure in ResearchGate. These averages are based on the
median per-author ratio; using the mean yields even larger differences.
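To make the statistic concrete: the reported figures are medians of the per-author ratios between platforms. A minimal sketch of that computation (the citation counts below are invented for illustration, not data from the paper):

```python
import statistics

# Hypothetical citation counts for the same five authors on three platforms.
gsc = [120, 45, 300, 10, 78]   # Google Scholar Citations
rid = [25, 10, 70, 2, 18]      # ResearcherID
rg = [50, 15, 110, 4, 30]      # ResearchGate

def median_ratio(a, b):
    """Median of the per-author ratios a_i / b_i, skipping authors with no
    citations on platform b (to avoid division by zero)."""
    ratios = [x / y for x, y in zip(a, b) if y > 0]
    return statistics.median(ratios)

print(round(median_ratio(gsc, rid), 2))  # → 4.5
print(round(median_ratio(gsc, rg), 2))   # → 2.6
```

Taking the median of the per-author ratios, rather than the ratio of totals or means, keeps a handful of highly cited authors from dominating the comparison.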
- Indicators from ResearcherID correlate strongly among themselves, but
are somewhat separated from the other citation metrics (those from
Google Scholar and ResearchGate). This is probably explained by how
infrequently ResearcherID profiles are updated.
- Regarding ResearchGate, we find a clear separation between usage
metrics (views and downloads) and citation metrics (citations, Impact
Points). Indicators from ResearchGate achieve moderate to high
correlations among themselves, except for the networking indicators
(followers), which do not correlate well with the rest. The RG Score, a
proprietary indicator whose method of calculation has not been
disclosed, correlates well with the rest of the citation-based
indicators, especially those available in GSC.
- Regarding Mendeley, its “Readers” indicator and the total number of
publications achieve a high correlation (0.83). Additionally, the
Mendeley “Readers” indicator correlates with the usage metrics offered
by ResearchGate (especially the RG Score), and strongly correlates with
Google Scholar's total citations and h-index.
- All metrics related to the number of followers (all Twitter metrics
and their counterparts in ResearchGate and Mendeley) correlate among
themselves, and are separated from the citation metrics.
To summarize, we found two kinds of impact on the Web: first, all
metrics related to academic impact, and second, all metrics associated
with connectivity and popularity (followers). The first group can
further be divided into usage metrics (views and downloads) and citation
metrics. The correlation among them is very high, especially between all
metrics from Google Scholar, the RG Score (ResearchGate), and the
Readers indicator in Mendeley.
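The clustering described above rests on pairwise correlations between indicators. A rank correlation such as Spearman's is a common choice for skewed bibliometric data; the sketch below (with invented indicator values and a plain stdlib implementation, not the paper's actual analysis code) shows how citation indicators can correlate near-perfectly while follower counts stay decoupled:

```python
import statistics

def _ranks(xs):
    """Rank values ascending, assigning the average rank to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-author indicators: the two citation counts rise and
# fall together, while follower counts vary independently.
gsc_citations = [300, 120, 78, 45, 10]
rg_citations = [110, 50, 30, 15, 4]
followers = [12, 900, 5, 340, 60]

print(round(spearman(gsc_citations, rg_citations), 2))  # → 1.0
print(round(spearman(gsc_citations, followers), 2))     # near zero
```

Because Spearman operates on ranks, any monotone relationship scores 1.0 regardless of scale, which suits indicators measured in very different units (citations, downloads, followers).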
Lastly, we present a taxonomy of the errors that may affect the
reliability of the data contained in each of these platforms, with
special emphasis on GSC, since it has been our main source of data.
These errors alert us to the danger of blindly using any of these
platforms for the assessment of individuals, without verifying the
veracity and exhaustiveness of the data.
I hope this paper is of interest to you; as always, your feedback would
be much appreciated.
Thank you,
Emilio Delgado López-Cózar
EC3 Research Group: Evaluación de la Ciencia y de la Comunicación
Científica
Universidad de Granada
http://scholar.google.com/citations?hl=es&user=kyTHOh0AAAAJ
https://www.researchgate.net/profile/Emilio_Delgado_Lopez-Cozar
Dubitando ad veritatem pervenimus (Cicero, De officiis. A. 451...)
Contra data non argumenta
A fructibus eorum cognoscitis eos (Matthew 7:16)