From garfield at CODEX.CIS.UPENN.EDU Thu Nov 1 14:31:27 2007 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Thu, 1 Nov 2007 14:31:27 -0400 Subject: Haas, E; Wilson, GY; Cobb, CD; Hyle, AE; Jordan, K; Kearney, KS "Assessing influence on the field: An analysis of citations to educational administration quarterly, 1979-2003. Educ. Admin. Quarterly 43(4): 494-513, Oct. 2007 Message-ID: Eric Haas : eric.haas at uconn.edu TITLE: Assessing influence on the field: An analysis of citations to educational administration quarterly, 1979-2003 (Article, English) AUTHOR: Haas, E; Wilson, GY; Cobb, CD; Hyle, AE; Jordan, K; Kearney, KS SOURCE: EDUCATIONAL ADMINISTRATION QUARTERLY 43 (4). OCT 2007. p.494-513 CORWIN PRESS INC A SAGE PUBLICATIONS CO, THOUSAND OAKS ABSTRACT: Study Purpose: This article examines the influence of Educational Administration Quarterly (EAQ) on the scholarly literature in education during the 25-year period 1979 to 2003. This article continues part of the first critique of EAQ, conducted by Roald Campbell in 1979. Study Methods: Two citation measures are used in this study to assess EAQ influence: (a) citation frequency, the total citation counts to EAQ articles found in the Web of Science database, and (b) the impact factor, a ratio of citations to articles published that is calculated as part of the Journal Citation Reports. Study Findings: The findings point to three conclusions: (a) EAQ's substantive, ongoing influence on the scholarly education literature is limited to a small percentage of its published articles, which are cited predominantly by subsequent articles in EAQ; (b) this level of influence, though perhaps not the form, appears to be generally comparable to the level of other scholarly education journals with a solid academic reputation; and (c) EAQ appears to be statistically among the top tier of influential scholarly journals in education, but below the most influential. 
Overall, EAQ's influence on the scholarly education literature has improved since the first critique published in 1979. AUTHOR ADDRESS: E Haas, Univ Connecticut, Ctr Educ Policy Anal, Storrs, CT 06269 USA From loet at LEYDESDORFF.NET Fri Nov 2 04:35:41 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 2 Nov 2007 09:35:41 +0100 Subject: Communications Network Analysis Message-ID: Dear colleagues, As of October 1, Wouter de Nooy has joined us at the Amsterdam School of Communications Research (ASCoR) of the University of Amsterdam. As a light form of coordination between our activities we have opened an email list entitled Communications Network Analysis-University of Amsterdam, to which one can subscribe and unsubscribe at one's own discretion at http://listserv.surfnet.nl/archives/cna-uva.html. This list is not meant to disrupt any ongoing communication at the level of this list, but to serve us as a communication medium about local activities (for example, with PhD students) and issues which are specific to communication networks as potentially different from social networks. With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20-525 6598; fax: +31-20-525 3681 loet at leydesdorff.net From toshev at CHEM.UNI-SOFIA.BG Sun Nov 4 10:26:40 2007 From: toshev at CHEM.UNI-SOFIA.BG (B.V. Toshev) Date: Sun, 4 Nov 2007 10:26:40 -0500 Subject: Paper: Scientific Activity in Higher Education: Personal and Institutional Assessment Message-ID: B.V. Toshev. Scientific Activity in Higher Education: Personal and Institutional Assessment. BJSEP 1, 35-42 (2007) [In Bulgarian] Abstract. Education and research belong together - this expresses the very idea of the university. Research should provide new scientific results, and these should be published. The system of scholarly journals is organized in two levels. 
The first level includes both the primary research journals and scholarly journals with a more expanded audience. The second level includes the secondary research journals. A few Bulgarian journals are presented there. Citation analysis is important mainly because it is heavily used by science policy and research evaluation professionals. The most popular indicators in such considerations are the impact factor (IF), the immediacy index (II) and the response time (t1). The meaning and application of these parameters are explained. The incorrect use of the impact factor in the Bulgarian evaluation practice is mentioned. New indicators that would characterize quantitatively the scientific achievements of researchers are proposed: an efficiency e = nk, where n and k are the number of the author's publications and the number of their citations, respectively, and a personal impact factor PIF = q/m, where q is the number of citations in a given year to the author's m publications published in the two previous years. The problem of the assessment of higher education institutions is considered in detail. At least three indicators should be tracked over the years. These are: S = Q/P (where Q stands for the number of citations in a given year to the whole faculty of N members, which publishes P publications in the year in question), L/M and M/N, where M and L are the numbers of prospective and graduated students, respectively, for a given school year. Keywords: scholarly journals, scientometrics, personal assessment of researchers, institutional evaluation of higher education References: 12 Author's E-Mail: toshev at chem.uni-sofia.bg ============================ Bulgarian Journal of Science and Education Policy (BJSEP), Volume 1, 1-275 (2007) ISSN 1313-1958 Publisher: St. Kliment Ohridski University Press From van at EMSE.FR Mon Nov 5 12:04:53 2007 From: van at EMSE.FR (T VAN) Date: Mon, 5 Nov 2007 18:04:53 +0100 Subject: Softwares to find co-references of 2 articles? 
Message-ID: Hello everybody, I have to find the bibliographic coupling similarity between 2 articles by computing the number of co-references between them. The lists of references of the articles are available in text files. An exact-match approach (like the grep command in Unix) may miss many co-references (for example, if there is a very small difference between how the 2 articles write the same reference). Therefore I'm looking for software (or an algorithm) that can do this task better. Do you have any suggestions? Thank you very much, T. Van From loet at LEYDESDORFF.NET Mon Nov 5 14:12:26 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Mon, 5 Nov 2007 20:12:26 +0100 Subject: Softwares to find co-references of 2 articles? In-Reply-To: <472F4D35.5060808@emse.fr> Message-ID: Dear T. Van, You may wish to try at http://www.leydesdorff.net/software/bibcoupl/index.htm whether my program can do this job for you. Best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20-525 6598; fax: +31-20-525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of T VAN > Sent: Monday, November 05, 2007 6:05 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] Softwares to find co-references of 2 articles? > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Hello everybody, > > I have to find bibliographic coupling similarity between 2 > articles by > computing the number of co-references between them. The lists of > references of the articles are available in text files. An > exact match > approach (like grep command in Unix) may lose many co-references (for > example, if there is a very little difference between references of 2 > articles ). 
Therefore I'm looking for a software (or algorithm) that > can do this task better. > > Do you have any suggestion? > > Thank you very much, > T. Van > From eugene.garfield at THOMSON.COM Tue Nov 6 00:26:44 2007 From: eugene.garfield at THOMSON.COM (Eugene Garfield) Date: Tue, 6 Nov 2007 00:26:44 -0500 Subject: Softwares to find co-references of 2 articles? To detect plagiarism In-Reply-To: <005001c81fdf$d2422f20$6502a8c0@loet> Message-ID: If you look up each of the two articles in the Web of Science and store the complete Source record in a HistCite collection you would determine the degree of bibliographic coupling quite easily. Each of the co-cited references would have a citation frequency of 2. As you indicate, variations in the format of the cited references may be a problem, but the HistCite software simplifies the task of unifying the variants. You can test this out by going to www.histcite.com Plagiarism comes in a variety of forms, including citation amnesia, which I discussed many years ago in Current Contents: From Citation Amnesia to Bibliographic Plagiarism http://www.garfield.library.upenn.edu/essays/v4p503y1979-80.pdf A pure case of grand-larceny plagiarism is identified by a perfect match in the set of references cited in the original work as well as the plagiarized work. Best wishes. Eugene Garfield > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of T VAN > Sent: Monday, November 05, 2007 6:05 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] Softwares to find co-references of 2 articles? > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Hello everybody, > > I have to find bibliographic coupling similarity between 2 > articles by > computing the number of co-references between them. The lists of > references of the articles are available in text files. 
An > exact match > approach (like grep command in Unix) may lose many co-references (for > example, if there is a very little difference between references of 2 > articles ). Therefore I'm looking for a software (or algorithm) that > can do this task better. > > Do you have any suggestion? > > Thank you very much, > T. Van > No virus found in this incoming message. Checked by AVG Free Edition. Version: 7.5.503 / Virus Database: 269.15.17/1103 - Release Date: 11/1/2007 6:01 AM No virus found in this outgoing message. Checked by AVG Free Edition. Version: 7.5.503 / Virus Database: 269.15.17/1103 - Release Date: 11/1/2007 6:01 AM From zielinskic at WHO.INT Tue Nov 6 02:31:02 2007 From: zielinskic at WHO.INT (Zielinski, Christopher) Date: Tue, 6 Nov 2007 08:31:02 +0100 Subject: Softwares to find co-references of 2 articles? To detect plagiarism In-Reply-To: <311174B69873F148881A743FCF1EE53703D4A1C1@TSHUSPAPHIMBX02.ERF.THOMSON.COM> Message-ID: Gene, Plagiarism-indicating conceptual identities can be found by using conceptual search software such as Autonomy, Collexis and Semio. In unreported testing efforts, working with Collexis on abstracts of papers (not the full papers) included in MEDLINE, we found that matching/identities of concepts over 50% generally indicated plagiarism, while anything over 60% was most likely what you call "grand larceny" plagiarism. Best, Chris Chris Zielinski Consultant, World Health Organization Avenue Appia, CH-1211 Geneva, Switzerland Mobile: +41-795-183035 e-mail: zielinskic at who.int -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Eugene Garfield Sent: 06 November 2007 06:27 To: SIGMETRICS at listserv.utk.edu Subject: Re: [SIGMETRICS] Softwares to find co-references of 2 articles? 
To detect plagiarism 
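The tolerant reference matching this thread asks about can be sketched with Python's standard-library difflib; the 0.8 similarity threshold and the sample reference strings below are assumptions for illustration, not anything taken from HistCite or from Loet's bibcoupl program:

```python
from difflib import SequenceMatcher

def is_same_reference(a: str, b: str, threshold: float = 0.8) -> bool:
    """Treat two reference strings as the same work if they are
    sufficiently similar after normalising case and whitespace.
    The threshold is an arbitrary assumption to be tuned."""
    a = " ".join(a.lower().split())
    b = " ".join(b.lower().split())
    return SequenceMatcher(None, a, b).ratio() >= threshold

def coupling_strength(refs1, refs2) -> int:
    """Bibliographic coupling: count references of article 1 that
    approximately match some reference of article 2."""
    return sum(any(is_same_reference(r1, r2) for r2 in refs2) for r1 in refs1)

# Invented sample reference lists with one shared work in differing formats.
refs_a = ["Garfield E, Citation indexing, Wiley, 1979",
          "Small H, Co-citation in the scientific literature, JASIS 24, 1973"]
refs_b = ["Garfield, E. Citation Indexing. Wiley, 1979.",
          "Price D, Networks of scientific papers, Science 149, 1965"]
print(coupling_strength(refs_a, refs_b))  # the two Garfield variants match
```

A grep-style exact comparison would count zero co-references here; the similarity ratio absorbs the punctuation and capitalisation differences that the thread worries about.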
From harnad at ECS.SOTON.AC.UK Wed Nov 7 20:53:51 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Thu, 8 Nov 2007 01:53:51 +0000 Subject: UUK report looks at the use of bibliometrics In-Reply-To: <313201438A789D4A87C7EAE3BE51ED0D2934E1@merlin.ecs.soton.ac.uk> Message-ID: > From: UNIVERSITIES UK PRESSOFFICES > EMBARGO 00.01hrs 8 November 2007 > "This report will help Universities UK to formulate its position on the > development of the new framework for replacing the RAE after 2008." > Some of the points for consideration in the report include: > * Bibliometrics are probably the most useful of a > number of variables that could feasibly be used to measure research > performance. What metrics count as "bibliometrics"? Do downloads? hubs/authorities? Interdisciplinarity metrics? Endogamy/exogamy metrics? chronometrics, semiometrics? > * There is evidence that bibliometric indices do > correlate with other, quasi-independent measures of research quality - > such as RAE grades - across a range of fields in science and > engineering. Meaning that citation counts correlate with panel rankings in all disciplines tested so far. Correct. > * There is a range of bibliometric variables as > possible quality indicators. There are strong arguments against the use > of (i) output volume (ii) citation volume (iii) journal impact and (iv) > frequency of uncited papers. The "strong" arguments are against using any of these variables alone, or without testing and validation. They are not arguments against including them in the battery of candidate metrics to be tested, validated and weighted against the panel rankings, discipline by discipline, in a multiple regression equation. 
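The joint validation described above, weighting a battery of candidate metrics against panel rankings, can be sketched as an ordinary least-squares fit. All numbers and metric choices below are invented for illustration; nothing here comes from the UUK report or the RAE:

```python
import numpy as np

# Hypothetical data: rows are departments, columns are candidate metrics
# (e.g. citations per paper, downloads, an h-type index). All values invented.
metrics = np.array([
    [12.0, 340.0,  9.0],
    [ 5.0, 120.0,  4.0],
    [20.0, 800.0, 15.0],
    [ 8.0, 200.0,  6.0],
    [15.0, 500.0, 11.0],
])
panel_rank = np.array([5.2, 2.8, 7.9, 3.9, 6.4])  # invented panel-style scores

# Add an intercept column and fit beta weights by least squares.
X = np.column_stack([np.ones(len(metrics)), metrics])
beta, *_ = np.linalg.lstsq(X, panel_rank, rcond=None)

predicted = X @ beta
r = np.corrcoef(predicted, panel_rank)[0, 1]
print("beta weights:", beta.round(3))
print("correlation with panel scores:", round(r, 3))
```

In practice each discipline would get its own equation, fitted on far more departments than metrics and checked on held-out data, so that the beta weights generalise rather than merely reproduce one panel's rankings.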
> * 'Citations per paper' is a widely accepted index > in international evaluation. Highly-cited papers are recognised as > identifying exceptional research activity. Citations per paper is one (strong) candidate metric among many, all of which should be co-tested, via multiple regression analysis, against the parallel RAE panel rankings (and other validated or face-valid performance measures). > * Accuracy and appropriateness of citation counts > are a critical factor. Not clear what this means. ISI citation counts should be supplemented by other citation counts, such as Scopus, Google Scholar, Citeseer and Citebase: each can be a separate metric in the metric equation. Citations from and to books are especially important in some disciplines. > * There are differences in citation behaviour > among STEM and non-STEM as well as different subject disciplines. And probably among many other disciplines too. That is why each discipline's regression equation needs to be validated separately. This will yield a different constellation of metrics as well as of beta weights on the metrics, for different disciplines. > * Metrics do not take into account contextual > information about individuals, which may be relevant. What does this mean? Age, years since degree, discipline, etc. are all themselves metrics, and can be added to the metric equation. > They also do not > always take into account research from across a number of disciplines. Interdisciplinarity is a measurable metric. There are self-citations, co-author citations, small citation circles, specialty-wide citations, discipline-wide citations, and cross-disciplinary citations. These are all endogamy/exogamy metrics. They can be given different weights in fields where, say, interdisciplinarity is highly valued. > * The definition of the broad subject groups and > the assignment of staff and activity to them will need careful > consideration. Is this about RAE panels? 
Or about how to distribute researchers by discipline or other grouping? > * Bibliometric indicators will need to be linked > to other metrics on research funding and on research postgraduate > training. "Linked"? All metrics need to be considered jointly in a multiple regression equation with the panel rankings (and other validated or face-valid criterion metrics). > * There are potential behavioural effects of using > bibliometrics which may not be picked up for some years Yes, metrics will shape behaviour (just as panel ranking shaped behaviour), sometimes for the better, sometimes for the worse. Metrics can be abused -- but abuses can also be detected and named and shamed, so there are deterrents and correctives. > * There are data limitations where researchers' > outputs are not comprehensively catalogued in bibliometrics databases. The obvious solution for this is Open Access: All UK researchers should deposit *all* their research output in their Institutional Repositories (IRs). Where it is not possible to set access to a deposit as OA, access can be set as Closed Access, but the bibliographic metadata will be there. (The IRs will not only provide access to the texts and the metadata, but they will generate further metrics, such as download counts, chronometrics, etc.) > The report comes ahead of the HEFCE consultation on the future of > research assessment expected to be announced later this month. > Universities UK will consult members once this is published. Let's hope both UUK and HEFCE are still open-minded about ways to optimise the transition to metrics! Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. 
In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway. http://eprints.ecs.soton.ac.uk/7503/ Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton. http://eprints.ecs.soton.ac.uk/12130/ Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. http://eprints.ecs.soton.ac.uk/13804/ Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18. http://eprints.ecs.soton.ac.uk/14329/ Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3). http://eprints.ecs.soton.ac.uk/14418/ Stevan Harnad AMERICAN SCIENTIST OPEN ACCESS FORUM: http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/ UNIVERSITIES and RESEARCH FUNDERS: If you have adopted or plan to adopt a policy of providing Open Access to your own research article output, please describe your policy at: http://www.eprints.org/signup/sign.php http://openaccess.eprints.org/index.php?/archives/71-guid.html http://openaccess.eprints.org/index.php?/archives/136-guid.html OPEN-ACCESS-PROVISION POLICY: BOAI-1 ("Green"): Publish your article in a suitable toll-access journal http://romeo.eprints.org/ OR BOAI-2 ("Gold"): Publish your article in an open-access journal if/when a suitable one exists. http://www.doaj.org/ AND in BOTH cases self-archive a supplementary version of your article in your own institutional repository. http://www.eprints.org/self-faq/ http://archives.eprints.org/ http://openaccess.eprints.org/ > Notes > 1. 
The report, The use of bibliometrics to measure research > quality in UK higher educations, will be available to download from the > Universities UK website from 9am on Thursday November 8 2007 at > . > > 2. For further press enquiries, please contact the > Universities UK press office or email pressunit at universitiesuk.ac.uk > . > > 3. Universities UK is the major representative body and > membership organisation for the higher education sector. It represents > the UK's universities and some higher education colleges. > > Its 131 members http://www.UniversitiesUK.ac.uk/members/ are the > executive heads of these institutions. > > Universities UK works closely with policy makers and key > education stakeholders to advance the interests of universities and to > spread good practice throughout the higher education sector. > > Founded in 1918 and formerly known as the Committee for > Vice-Chancellors and Principals (CVCP), Universities UK will celebrate > its 90th anniversary in 2008. From Jonathan.adams at EVIDENCE.CO.UK Thu Nov 8 06:49:44 2007 From: Jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams) Date: Thu, 8 Nov 2007 11:49:44 -0000 Subject: UUK report looks at the use of bibliometrics Message-ID: Dear Stephen Thank you for your informed and interesting comments on our report to UUK, though I should say that some of your soundbites are addressed in the body of the report. I am sure UUK would appreciate receiving your extended commentary. Jonathan Adams Evidence Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From harnad at ECS.SOTON.AC.UK Fri Nov 9 06:11:01 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Fri, 9 Nov 2007 11:11:01 +0000 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... In-Reply-To: <313201438A789D4A87C7EAE3BE51ED0D293599@merlin.ecs.soton.ac.uk> Message-ID: Comment on: "Bibliometrics could distort research assessment" Guardian Education, Friday 9 November 2007 http://education.guardian.co.uk/RAE/story/0,,2207678,00.html Yes, any system (including democracy, health care, welfare, taxation, market economics, justice, education and the Internet) can be abused. But abuses can be detected, exposed and punished, and this is especially true in the case of scholarly/scientific research, where "peer review" does not stop with publication, but continues for as long as research findings are read and used. And it's truer still if it is all online and openly accessible. The researcher who thinks his research impact can be spuriously enhanced by producing many small, "salami-sliced" publications instead of fewer substantial ones will stand out against peers who publish fewer, more substantial papers. Paper lengths and numbers are metrics too, hence they too can be part of the metric equation. And if most or all peers do salami-slicing, then it becomes a scale factor that can be factored out (and the metric equation and its payoffs can be adjusted to discourage it). Citations inflated by self-citations or co-author group citations can also be detected and weighted accordingly. Robotically inflated download metrics are also detectable, nameable and shameable. Plagiarism is detectable too, when all full-text content is accessible online. 
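The kind of self-citation audit described above can be made concrete with a toy sketch: for each author, compute what fraction of the citations their papers receive comes from papers they themselves (co)wrote. The paper records and author names below are invented for illustration:

```python
# Toy citation graph: each paper lists its authors and the papers it cites.
# All records are invented for illustration.
papers = {
    "P1": {"authors": {"smith"}, "cites": ["P2", "P3"]},
    "P2": {"authors": {"smith"}, "cites": ["P1"]},
    "P3": {"authors": {"jones"}, "cites": ["P1"]},
    "P4": {"authors": {"jones"}, "cites": ["P3"]},
}

def self_citation_rate(author: str) -> float:
    """Fraction of citations to the author's papers that come from
    papers the same author (co)wrote."""
    own = {pid for pid, p in papers.items() if author in p["authors"]}
    total = self_cites = 0
    for pid, p in papers.items():
        for target in p["cites"]:
            if target in own:
                total += 1
                self_cites += 1 if author in p["authors"] else 0
    return self_cites / total if total else 0.0

print(round(self_citation_rate("smith"), 2))  # → 0.67: 2 of 3 citations are self-citations
```

An anomalously high rate does not prove abuse, but it flags exactly the endogamous citation patterns that can then be down-weighted, or named and shamed, in an open metric equation.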
The important thing is to get all these publications as well as their metrics out in the open for scrutiny by making them Open Access. Then peer and public scrutiny -- plus the analytic power of the algorithms and the Internet -- can collaborate to keep them honest. Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. http://eprints.ecs.soton.ac.uk/13804/ Stevan Harnad From notsjb at LSU.EDU Fri Nov 9 09:32:32 2007 From: notsjb at LSU.EDU (Stephen J Bensman) Date: Fri, 9 Nov 2007 08:32:32 -0600 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... In-Reply-To: A Message-ID: The Guardian article is interesting. 
It seems that the Brits have finally realized what the American Council on Education and the National Research Council have known for decades--that to rate research programs you have to define your disciplinary sets carefully and that you should use multiple measures--peer ratings, publication rates, citation rates, grants, awards, etc. Of these, peer ratings are still probably the most important, for the human mind can do what quantitative measures cannot do--encapsulate multiple factors into one number. Despite everything, it is still more of an art form than a science. This is particularly true in defining disciplinary sets, not only because of the inherent fuzziness of these sets but also because institutional parameters often do not match subject parameters. Stephen J. Bensman LSU Libraries Louisiana State University Baton Rouge, LA 70803 USA notsjb at lsu.edu -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stevan Harnad Sent: Friday, November 09, 2007 5:11 AM To: SIGMETRICS at listserv.utk.edu Subject: [SIGMETRICS] "Bibliometric Distortion": The Babblarazzi Are At It Again... Comment on: "Bibliometrics could distort research assessment" Guardian Education, Friday 9 November 2007 http://education.guardian.co.uk/RAE/story/0,,2207678,00.html From nouruzi at GMAIL.COM Sat Nov 10 08:53:20 2007 From: nouruzi at GMAIL.COM (Alireza Noruzi) Date: Sat, 10 Nov 2007 17:23:20 +0330 Subject: Webology: Volume 4, Number 3, 2007 Message-ID: Dear All, apologies for cross-posting. We are pleased to inform you that Vol. 4, No. 3 of Webology, an OPEN ACCESS journal, is published and is available ONLINE now. 
------------------ Webology: Volume 4, Number 3, September, 2007 TOC: http://www.webology.ir/2007/v4n3/toc.html This issue contains: Editorial - The International Scope of Webology -- Alireza Noruzi -- http://www.webology.ir/2007/v4n3/editorial13.html ----------------------------------------- Articles - Increase of Precision on the Top of the List of Retrieved Web Documents Using Global and Local Link Analysis -- Luiz Fernando de Barros Campos -- Keywords: Link analysis; HITS; PageRank; Space Vector Model; Search engines -- http://www.webology.ir/2007/v4n3/a44.html - International Actions against Cybercrime: Networking Legal Systems in the Networked Crime Scene -- Xingan Li -- Keywords: Cybercrime; Legal system; International harmonization -- http://www.webology.ir/2007/v4n3/a45.html - Cybercrime and the Law: An Islamic View -- Mansoor Al-A'ali -- Keywords: Computer Crime; Computer Crime Law; Texas Law; Islamic Law; Cybercrime -- http://www.webology.ir/2007/v4n3/a46.html ----------------------------------------- Book Reviews - The Information Literacy Cookbook: Ingredients, recipes and tips for success -- Jane Secker, Debbi Boden & Gwyneth Price (Eds.) -- Hamid R. Jamali -- http://www.webology.ir/2007/v4n3/bookreview7.html ----------------------------------------- Call for Papers: -- http://www.webology.ir/cfp.html ========================================= Best regards, Alireza Noruzi, PhD Editor-in-Chief of Webology www.webology.ir ~ The great aim of Open Access journals is knowledge sharing. ~ From loet at LEYDESDORFF.NET Sat Nov 10 16:45:29 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sat, 10 Nov 2007 22:45:29 +0100 Subject: Times Higher Education Supplement Message-ID: Dear colleagues, This week's issue brings the 2007 version of the rankings. 
Best wishes, Loet _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated. 385 pp.; US$ 18.95 The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.hartley at PSY.KEELE.AC.UK Mon Nov 12 07:16:48 2007 From: j.hartley at PSY.KEELE.AC.UK (James Hartley) Date: Mon, 12 Nov 2007 12:16:48 -0000 Subject: Study of clarity in abstracts Message-ID: Study of clarity in abstracts Dear Colleagues I am carrying out a study of the readability of abstracts and I would be most grateful if you would be willing to take part. To read more about the study, please go to http://www.keele.ac.uk/depts/ps/jim07/abs2007.htm The study should take about 10 minutes of your time. Many thanks. Prof. James Hartley School of Psychology Keele University Staffordshire ST5 5BG UK j.hartley at psy.keele.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From havemanf at CMS.HU-BERLIN.DE Tue Nov 13 07:04:40 2007 From: havemanf at CMS.HU-BERLIN.DE (Frank Havemann) Date: Tue, 13 Nov 2007 13:04:40 +0100 Subject: Looking for Gross & Gross 1927 Paper In-Reply-To: Message-ID: Does anyone have a scanned PDF version of this first paper about citation analysis? @article{gross1927cla, title={{College Libraries and Chemical Education}}, author={Gross, PLK and Gross, EM}, journal={Science}, volume={66}, number={1713}, pages={385--389}, year={1927} } Thank you Frank Havemann *************************** Dr. Frank Havemann Department of Library and Information Science Humboldt University Dorotheenstr. 
26 D-10099 Berlin Germany tel.: (0049) (030) 2093 4228 http://www.ib.hu-berlin.de/inf/havemann.html From havemanf at CMS.HU-BERLIN.DE Tue Nov 13 07:29:32 2007 From: havemanf at CMS.HU-BERLIN.DE (Frank Havemann) Date: Tue, 13 Nov 2007 13:29:32 +0100 Subject: Looking for Gross & Gross 1927 Paper In-Reply-To: <7.0.1.0.2.20071113132151.020967b8@unimore.it> Message-ID: Many Thanks! On Tuesday, 13 November 2007 at 13:23, Nicola De Bellis wrote: > Article in the attachment. > > Kind regards > Nicola De Bellis > > At 13.04 13/11/2007, you wrote: > >Adminstrative info for SIGMETRICS (for example unsubscribe): > >http://web.utk.edu/~gwhitney/sigmetrics.html > > > >Does anyone have a scanned PDF version of this first paper about citation > >analysis? > > > >@article{gross1927cla, > > title={{College Libraries and Chemical Education}}, > > author={Gross, PLK and Gross, EM}, > > journal={Science}, > > volume={66}, > > number={1713}, > > pages={385--389}, > > year={1927} > >} > > > > > >Thank you > >Frank Havemann > > > > > > > > > >*************************** > >Dr. Frank Havemann > >Department of Library and Information Science > >Humboldt University > >Dorotheenstr. 26 > >D-10099 Berlin > >Germany > > > >tel.: (0049) (030) 2093 4228 > >http://www.ib.hu-berlin.de/inf/havemann.html > > Nicola De Bellis > > Biblioteca Universitaria - Area Medica > Università degli Studi di Modena e Reggio Emilia > Via del Pozzo 71 > 41100 MODENA > Tel.: 0039 059-422.3140 > FAX: 0039 059-422.3151 From havemanf at CMS.HU-BERLIN.DE Tue Nov 13 07:45:13 2007 From: havemanf at CMS.HU-BERLIN.DE (Frank Havemann) Date: Tue, 13 Nov 2007 13:45:13 +0100 Subject: Looking for Gross & Gross 1927 Paper In-Reply-To: <1194957462.11280.47.camel@Xubuntu31> Message-ID: Dear Sebastian, many thanks! FH On Tuesday, 13 November 2007 at 13:37, you wrote: > Dear Frank, > > > Does anyone have a scanned PDF version of this first paper about citation > > analysis? > > Please find attached the PDF of Gross & Gross 1927. 
I have even OCRed > it, so you can copy and paste text out of it as well. > > Regards > Sebastian From isidro at CINDOC.CSIC.ES Wed Nov 14 07:26:33 2007 From: isidro at CINDOC.CSIC.ES (Isidro F. Aguillo) Date: Wed, 14 Nov 2007 13:26:33 +0100 Subject: Papers just published in Cybermetrics Message-ID: The volume 11 of the electronic journal Cybermetrics is now completed with the following three new items just published: *** Interdisciplinary relationships in the Spanish academic web space: A Webometric study through networks visualization José Luis Ortega, Isidro F. Aguillo The aim of this work is to describe the interdisciplinary research relationships among several Spanish university departments and research groups located in the Spanish web space. Of 2,390 web sites, 699 were selected, according to whether a site received a link from or linked to at least one of the others. The links between them were extracted with a commercial crawler in 2004 and then analysed to build a complex directed network of in-links and out-links. The results show that the Spanish academic web space is weakly interconnected both at the level of groups and departments, and that the relationships between disciplines can be appreciated through network graphs. The use of network graphs is a suitable technique to show the transversal relationships among disciplines and to detect incipient research fronts in the web space. The web presence of the Experimental and Technological Sciences is also higher than that of the Social Sciences and Humanities. http://www.cindoc.csic.es/cybermetrics/articles/v11i1p4.html *** MAXPROD - A New Index for assessment of the scientific output of an individual, and a comparison with the h-index Marek Kosmulski A new index termed Maxprod is defined as the highest value in the set of i × ci, where ci is the number of citations of the i-th most cited paper of an individual. 
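Kosmulski's Maxprod (the largest value of i times ci over an author's citation-ranked papers) translates directly into code. This sketch also computes the h-index for comparison; the citation counts are invented for illustration:

```python
def maxprod(citations):
    """Maxprod: the largest value of i * c_i, where c_i is the citation
    count of the i-th most cited paper (i counted from 1)."""
    ranked = sorted(citations, reverse=True)
    return max((i * c for i, c in enumerate(ranked, start=1)), default=0)

def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

# Hypothetical citation counts for one author's papers:
cites = [25, 8, 5, 3, 1]
print(maxprod(cites))       # max(1*25, 2*8, 3*5, 4*3, 5*1) = 25
print(h_index(cites) ** 2)  # h = 3, so h^2 = 9
```

This toy author has a skewed citation distribution, so Maxprod (25) exceeds h^2 (9) substantially -- exactly the "atypical distribution" case the abstract distinguishes from the usual near-equality of the two.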
For most chemists who have published about 100 papers, Maxprod is only marginally higher than h^2, where h is the Hirsch index. A substantial difference between Maxprod and h^2 is observed for atypical distributions of ci. http://www.cindoc.csic.es/cybermetrics/articles/v11i1p5.html *** The Challenge of Cybermetrics. Review of the Book: "The Knowledge-Based Economy: Modeled, Measured, Simulated" by Loet Leydesdorff Gaston Heimeriks http://www.cindoc.csic.es/cybermetrics/articles/v11i1r1.html -- ========================== Isidro F. Aguillo Cybermetrics Lab isidro @ cindoc.csic.es CINDOC - CSIC Joaquín Costa, 22 28002 Madrid. Spain 34-91-5635482 ext 313 ========================== From prabirgd11 at REDIFFMAIL.COM Thu Nov 15 09:19:00 2007 From: prabirgd11 at REDIFFMAIL.COM (Prabir G. Dastidar) Date: Thu, 15 Nov 2007 14:19:00 -0000 Subject: EMERGENCY !!! (NEED YOUR URGENT HELP) Message-ID: An embedded and charset-unspecified text was scrubbed... Name: not available URL: From einat at IL.IBM.COM Thu Nov 15 14:33:56 2007 From: einat at IL.IBM.COM (Einat Amitay) Date: Thu, 15 Nov 2007 21:33:56 +0200 Subject: CFP: TWeb special issue on Query Log Analysis Message-ID: ============================================================= Call for Papers Special issue on Query Log Analysis: Technology & Ethics ACM Transactions on the Web (TWEB) http://www.acm.org/tweb ============================================================= GUEST EDITORS Einat Amitay, IBM Research, Haifa Lab, Israel, e-mail : einat at il.ibm.com Andrei Broder, Yahoo! Research, USA, e-mail: broder at yahoo-inc.com The complete records of queries received by web search engines (Query Logs or QLs) are the fundamental evidence of their audience's search goals and the engines' ability to provide satisfactory answers. QLs include information such as queries submitted, reformulations, session boundaries, results actually explored (click-through data), time spent reading each result, and so on. 
Commercial search engines use QL analysis to extract patterns of individual and collective behavior and use this feedback for improving search performance and accuracy. At the same time, the queries made by a particular individual often reflect their current interests and preoccupations and reveal a surprising amount of highly personal information. Hence the use of QLs raises issues of privacy and data ownership, which in turn gives rise to technical problems of QL anonymization and data security, and legal and ethical problems regarding the use and retention of QLs. This special issue of ACM Transactions on the Web aims to gather a collection of high-quality contributions that reflect both technical and non-technical issues related to QL analysis. Particular areas of interest include, but are not limited to: Search & Ranking: Use of QLs for: ranking; query refinement and expansion; behavior prediction; implicit collaborative filtering; targeted advertising; document expansion; document clustering; metadata creation; etc; Data mining & prediction: QL trend extraction; "Buzz" mining; Product performance prediction; Correlation of QL data with external events; Patterns of information-seeking & interaction behaviors observed in QLs; Performance & Evaluation: Query caching based on QL patterns; Sampling QLs; QL-based search quality evaluation; QL-based models of relevance; Privacy & Policy: QL anonymization; Privacy-preserving tools (e.g. query blocking, masking and obfuscation); Privacy-preserving QL analysis; Legal, regulatory, and ethical issues in QL collection and use; Practical policies and practices of query logging; QL data retention issues. Prospective authors, please submit your paper according to the directions on the ACM TWEB Web site following the content and formatting guidelines available at http://www.acm.org/tweb/author.html . There you can also find detailed information about the ACM TWEB review process. 
When submitting your paper, please mention that it is to be considered for the special issue on Query Log Analysis: Technology & Ethics. In addition, please send a copy of your paper to einat at il.ibm.com IMPORTANT DATES Papers Due: December 17, 2007 Author notification: March 31, 2008 Revised versions of accepted papers due: June 16, 2008 (all accepted papers are expected to undergo a minor set of revisions) Camera-ready copies due: August 1, 2008 Issue published: November 2008 From loet at LEYDESDORFF.NET Fri Nov 16 02:34:18 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 16 Nov 2007 08:34:18 +0100 Subject: Dynamic journal-journal citation network Message-ID: Dear colleagues, Please find at http://www.leydesdorff.net/socnetw/index.htm an animation of the dynamic development of the citation impact environment of the journal Social Networks during the period 1994-2006. (You may need to install the Adobe SVG Viewer; the preferred browser is Internet Explorer.) Comments and suggestions for improvements are very welcome at this stage. This is just an example. With best wishes, Loet _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated. 385 pp.; US$ 18.95 The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jonathan.adams at EVIDENCE.CO.UK Fri Nov 16 06:29:18 2007 From: Jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams) Date: Fri, 16 Nov 2007 11:29:18 -0000 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... 
Message-ID: Stevan, Following on from your comments at the end of last week (below), I agree that it is possible tentatively to pick out 'over-production' of poor quality papers (although I am less optimistic about the comprehensive analytical detection of publication abuse you foresee). In contrast to over-production, do you think that an assessment system that looks at total output would run the risk of suppressing outputs that might be predicted to be cited less frequently? UK research assessment currently looks at four outputs per researcher, usually selected by the individual as their best research. The proposal is that post-2008 the metrics assessment would be of all output, creating a profile and then deriving a metric from that. Is there a risk that researchers, realising that outputs aimed at practitioners often appear in relatively lower impact journals, would then tend to reduce the number of papers they produced aimed at transferring knowledge from the research base and concentrate on outputs targeted at high-impact journals in the research-base core? They would expect by doing so to avoid dilution of their citation average. The net effect could be to reduce the UK's volume of less frequently cited papers, but also to reduce information flow to the people who turn research into practice. Jonathan Adams Director, Evidence Ltd + 44 113 384 5680 Comment on: "Bibliometrics could distort research assessment" Guardian Education, Friday 9 November 2007 http://education.guardian.co.uk/RAE/story/0,,2207678,00.html Yes, any system (including democracy, health care, welfare, taxation, market economics, justice, education and the Internet) can be abused. But abuses can be detected, exposed and punished, and this is especially true in the case of scholarly/scientific research, where "peer review" does not stop with publication, but continues for as long as research findings are read and used. And it's truer still if it is all online and openly accessible. 
The researcher who thinks his research impact can be spuriously enhanced by producing many small, "salami-sliced" publications instead of fewer substantial ones will stand out against peers who publish fewer, more substantial papers. Paper lengths and numbers are metrics too, hence they too can be part of the metric equation. And if most or all peers do salami-slicing, then it becomes a scale factor that can be factored out (and the metric equation and its payoffs can be adjusted to discourage it). Citations inflated by self-citations or co-author group citations can also be detected and weighted accordingly. Robotically inflated download metrics are also detectable, nameable and shameable. Plagiarism is detectable too, when all full-text content is accessible online. The important thing is to get all these publications as well as their metrics out in the open for scrutiny by making them Open Access. Then peer and public scrutiny -- plus the analytic power of the algorithms and the Internet -- can collaborate to keep them honest. From loet at LEYDESDORFF.NET Fri Nov 16 07:05:07 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 16 Nov 2007 13:05:07 +0100 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... In-Reply-To: <81A4EE9059B88C4CBCB7F80231E065463FAED7@evidence1.Evidence.local> Message-ID: > The proposal > is that post-2008 the metrics assessment would be of all output, > creating a profile and then deriving a metric derived from that. Dear Jonathan, How are you planning to do this? Interesting. Best wishes, Loet From Jonathan.adams at EVIDENCE.CO.UK Fri Nov 16 07:26:24 2007 From: Jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams) Date: Fri, 16 Nov 2007 12:26:24 -0000 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... Message-ID: I'm not planning this. I understand the recommendation is from our colleagues at Leiden, in a report to HEFCE that will be made public later this month. 
So far as outputs in Thomson-indexed journals go, I think it's feasible (and we have been analysing some scenarios using earlier data reconciliation and analyses we did for HEFCE) but I wouldn't recommend it because of the games playing that I suspect would ensue. Jonathan Adams Director, Evidence Ltd + 44 113 384 5680 -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff Sent: 16 November 2007 12:05 To: SIGMETRICS at listserv.utk.edu Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The Babblarazzi Are At It Again... > The proposal > is that post-2008 the metrics assessment would be of all output, > creating a profile and then deriving a metric derived from that. Dear Jonathan, How are you planning to do this? Interesting. Best wishes, Loet From loet at LEYDESDORFF.NET Fri Nov 16 07:40:36 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 16 Nov 2007 13:40:36 +0100 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... In-Reply-To: <81A4EE9059B88C4CBCB7F80231E065463FAEE5@evidence1.Evidence.local> Message-ID: If I remember correctly, the Leiden normalization implies that one compares the citation scores with the expected citation scores given the publication profile of a group. You are right that a game follows naturally: if one publishes in journals below one's level, one can expect to obtain a higher than expected citation score. Since all distributions are skewed, this effect would be reinforced. Hitherto, this has not been a major problem because the scores were not directly related as input to funding. With best wishes, Loet > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jonathan Adams > Sent: Friday, November 16, 2007 1:26 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The > Babblarazzi Are At It Again... 
> > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > I'm not planning this. I understand the recommendation is from our > colleagues at Leiden, in a report to HEFCE that will be made public > later this month. > So far as outputs in Thomson-indexed journals go, I think > it's feasible > (and we have been analysing some scenarios using earlier data > reconciliation and analyses we did for HEFCE) but I wouldn't recommend > it because of the games playing that I suspect would ensue. > > Jonathan Adams > > Director, Evidence Ltd > + 44 113 384 5680 > > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff > Sent: 16 November 2007 12:05 > To: SIGMETRICS at listserv.utk.edu > Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The > Babblarazzi Are > At It Again... > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > > The proposal > > is that post-2008 the metrics assessment would be of all output, > > creating a profile and then deriving a metric derived from that. > > Dear Jonathan, > > How are you planning to do this? Interesting. > > Best wishes, > > > Loet > From Jonathan.adams at EVIDENCE.CO.UK Fri Nov 16 08:20:36 2007 From: Jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams) Date: Fri, 16 Nov 2007 13:20:36 -0000 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... Message-ID: Exactly so. So long as the indicators remain separate from policy effects (such as funding) they remain sound. 
The problem arises where indicators become a target for the purpose of conducting policy, as Goodhart (1975) suggested in regard to UK economics: http://en.wikipedia.org/wiki/Goodhart%27s_law And Campbell (1976) in relation to social science http://en.wikipedia.org/wiki/Campbell%27s_Law Jonathan Adams Director, Evidence Ltd + 44 113 384 5680 -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff Sent: 16 November 2007 12:41 To: SIGMETRICS at listserv.utk.edu Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The Babblarazzi Are At It Again... If I remember correctly, the Leiden normalization implies that one compares the citation scores with the expected citation scores given the publication profile of a group. You are right that a game follows naturally: if one publishes in journals below one's level, one can expect to obtain a higher than expected citation score. Since all distributions are skewed, this effect would be reinforced. Hitherto, this has not been a major problem because the scores were not directly related as input to funding. With best wishes, Loet > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jonathan Adams > Sent: Friday, November 16, 2007 1:26 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The > Babblarazzi Are At It Again... > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > I'm not planning this. I understand the recommendation is from our > colleagues at Leiden, in a report to HEFCE that will be made public > later this month. 
> So far as outputs in Thomson-indexed journals go, I think > it's feasible > (and we have been analysing some scenarios using earlier data > reconciliation and analyses we did for HEFCE) but I wouldn't recommend > it because of the games playing that I suspect would ensue. > > Jonathan Adams > > Director, Evidence Ltd > + 44 113 384 5680 > > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff > Sent: 16 November 2007 12:05 > To: SIGMETRICS at listserv.utk.edu > Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The > Babblarazzi Are > At It Again... > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > > The proposal > > is that post-2008 the metrics assessment would be of all output, > > creating a profile and then deriving a metric derived from that. > > Dear Jonathan, > > How are you planning to do this? Interesting. > > Best wishes, > > > Loet > From loet at LEYDESDORFF.NET Fri Nov 16 09:00:59 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 16 Nov 2007 15:00:59 +0100 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... In-Reply-To: <81A4EE9059B88C4CBCB7F80231E065463FAEFC@evidence1.Evidence.local> Message-ID: In my opinion, it is based on a methodological confusion which we discussed with Stevan Harnad previously on this list. The idea of a metric is based on a multivariate model. The measurement then serves the estimation of the parameters. The model guides the prediction. In this case, the measurement is used as an independent predictor and the model is not specified. The political system can then play with the rankings, which can vary according to the weights which one attributes to the various parameters (e.g., in the normalization). Consider, for example, the recent university rankings in the Times Higher Education Supplement of last week. 
If one would propose to divide the various scores by the budgets in order to estimate the efficiency of output/input, the American and British universities which are now leading the ranking would probably be at the bottom. :-) Best wishes, Loet On 11/16/07, Jonathan Adams wrote: > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Exactly so. So long as the indicators remain separate from policy > effects (such as funding) they remain sound. > The problem arises where indicators become a target for the purpose of > conducting policy, as Goddard (1975) suggested in regard to UK > economics: > http://en.wikipedia.org/wiki/Goodhart%27s_law > And Campbell (1976) in relation to social science > http://en.wikipedia.org/wiki/Campbell%27s_Law > > Jonathan Adams > > Director, Evidence Ltd > + 44 113 384 5680 > > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff > Sent: 16 November 2007 12:41 > To: SIGMETRICS at listserv.utk.edu > Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The Babblarazzi Are > At It Again... > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > If I correctly remember, the Leiden normalization implies that one > compares > the citation scores with the expected citation scores given the > publication > profile of a group. You are right that a game follows naturally: if one > publishes in journals below one's level, one can expect to obtain a > higher > than expected citation score. Since all distributions are skewed, this > effect would be reinforced. > > Hitherto, this has not been a major problem because the scores where not > directly related as input to funding. 
> With best wishes,
> Loet
>
> > -----Original Message-----
> > From: Jonathan Adams
> > Sent: Friday, November 16, 2007 1:26 PM
> > Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The Babblarazzi Are At It Again...
> >
> > I'm not planning this. I understand the recommendation is from our
> > colleagues at Leiden, in a report to HEFCE that will be made public
> > later this month.
--
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/

From harnad at ECS.SOTON.AC.UK Fri Nov 16 09:17:09 2007
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Fri, 16 Nov 2007 14:17:09 +0000
Subject: Continuous multi-metric research assessment
In-Reply-To: <81A4EE9059B88C4CBCB7F80231E065463FAED7@evidence1.Evidence.local>
Message-ID:

On Fri, 16 Nov 2007, Jonathan Adams (Director, Evidence Ltd) wrote:

> Stevan
> Following on from your comments at the end of last week (below) I agree
> that it is possible tentatively to pick out 'over-production' of poor
> quality papers (although I am less optimistic about the comprehensive
> analytical detection of publication abuse you foresee).

Jonathan,

I think you may be greatly underestimating (1) the power of multivariate (as opposed to univariate) analysis, validation and weighting, and (2) the power of open access (i.e., online, public, pervasive, continuous, and dynamic) metrics.

You get a completely different sense of what is possible, and how, if you think in terms of:

(i) individual, isolated metrics, assessed at long intervals under closed scrutiny (like the current RAEs);

or if you think instead in terms of:

(ii) a large (possibly growing) battery of candidate metrics, assessed jointly and continuously rather than at long intervals, with the contribution of each metric to their joint predictive power initially validated against existing criteria that have been relied on before (such as the RAE panel rankings) and then updated dynamically, field by field, by adjusting the weights on each component metric -- and always under open scrutiny.
Not only can "overproduction" of lightweight papers be detected and weighted by simply profiling on the joint relation between (say) the article count, the article citation count, the journal citation average ("impact factor") and the journal download count -- but so can other anomalous or abusive profiles be detected, exposed, penalized and discouraged through weighting.

> By contrast to over-production, do you think that an assessment system
> that looks at total output would run the risk of suppressing outputs
> that might be predicted to be cited less frequently?

Not unless it is decided (for some unknown a-priori reason!) that a profile consisting of N highly cited papers plus M less cited papers is to be given a lower weight than a profile consisting of N highly cited papers plus 0 less cited papers!

> UK research assessment currently looks at four outputs per researcher,
> usually selected by the individual as their best research.

That, of course, was a foolish, arbitrary constraint all along: it was (well-meaningly) intended to minimise both salami-slicing and the number of papers the panel would have to read. But of course continuous OA metrics solve both problems, as they can detect and weight the salami-slicing profile, and panel-reading (after the validation phase) is no longer a factor, except as a periodic higher-level check on the continuous, dynamic weightings and profiles. (So let all papers be considered, continuously, and let 1000 metrics bloom, under open peer scrutiny, and panel monitoring and weight calibration!)

> The proposal is that post-2008 the metrics assessment would be of all
> output, creating a profile and then deriving a metric from that.

"A" metric? Or a battery of metrics? (The "h-index" and its ilk are all examples of a-priori, unvalidated, fixed, one-number metrics; what is needed is a rich multiple regression equation, with adjustable weights, validated initially against the 2001 and 2008 RAE panel rankings.
You can add prewired metrics like the h-index to the battery, but don't use them *instead* of a weighted, multimetric battery.)

> Is there a risk that researchers, realizing that outputs aimed at
> practitioners often appear in relatively lower impact journals, would
> then tend to reduce the number of papers they produced aimed at
> transferring knowledge from the research base and concentrate on outputs
> targeted at high-impact journals in the research-base core? They
> would expect by doing so to avoid dilution of their citation average.

This would be faulty reasoning on the part of researchers, if there were a continuous, multi-metric equation in place, with its weights being dynamically updated under peer scrutiny to detect and weight exactly this sort of practice!

If applications are valued in a field, add application metrics: Are certain journals more applications oriented? Crank up their weight! Is it better to partition citations into basic vs. applied journals, with differential weights for citations in the one and the other in certain fields? Do so. Don't just think of a univariate measure (citations, or h-index) and how authors might bias that measure by altering the kind of journals they publish in, or the number of articles they submit for assessment! Think multivariately, dynamically, and openly.

New applications metrics, besides journal types, might include downloads, or even (if possible) industrial IP downloads; patents are also metrics. Depending on the field, there will no doubt be other measurable, monitorable performance indicators for applications impact (and for teaching impact too!). It's not all about ways to bias one single citation metric, but about developing richer metrics. If the worry is about encouraging technology transfer and applications flow, find objective measures of them and plug them into the equation. Don't treat it as just a default bias, to be minimized by cutting down on metrics!
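For concreteness, the h-index mentioned above really is a fixed, one-number metric: a researcher has index h if h of his or her papers have at least h citations each. A minimal implementation:

```python
def h_index(citations):
    """h = the largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited first
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))  # -> 3 (one blockbuster does not raise h)
```

As the second example shows, the h-index compresses a whole citation profile into a single number, which is exactly the information loss the multi-metric battery is meant to avoid.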
> The net effect could be to reduce the UK's volume of less frequently
> cited papers, but also to reduce information flow to the people who
> turn research into practice.

This is again univariate thinking. Yes, citation counts are important, but there are citations and citations: basic citations, applied citations; basic publications, applied publications. Not only do fields have to compare like with like, but their preferred blends can be weighted and rewarded accordingly.

(This, by the way, is not "biasing", any more than mandating and rewarding publication itself is biasing: it is providing incentives for the kind of research performance we want, and that we want to reward. Continuous multivariate OA metrics allow preferred profiles to be rewarded and encouraged dynamically. Cheater detection allows self-citations, robotic or anomalous download inflation, salami-slicing, etc. to be detected, exposed and penalized. Metrics are not ends in themselves; they are merely objective performance correlates. They are easy to abuse singly, but much harder to abuse jointly, and in the open.)

The UK's RAE is unique; so is its new conversion to metrics. The UK is hence leading the world in research metrics. Don't think cravenly in terms of how the UK will stack up in terms of existing, unvalidated, univariate metrics. Think in terms of establishing metric standards for the entire world research community in the metric OA era!

Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In Jacobs, N. (Ed.), Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos. http://eprints.ecs.soton.ac.uk/12453/

Harnad, S.
(2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Torres-Salinas, D. and Moed, H. F. (Eds.), Proceedings of the 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. http://eprints.ecs.soton.ac.uk/13804/

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3). http://eprints.ecs.soton.ac.uk/14418/

Stevan Harnad

> Jonathan Adams
> Director, Evidence Ltd
> + 44 113 384 5680
>
> Comment on: "Bibliometrics could distort research assessment",
> Guardian Education, Friday 9 November 2007
> http://education.guardian.co.uk/RAE/story/0,,2207678,00.html
>
> Yes, any system (including democracy, health care, welfare, taxation,
> market economics, justice, education and the Internet) can be abused.
> But abuses can be detected, exposed and punished, and this is especially
> true in the case of scholarly/scientific research, where "peer review"
> does not stop with publication, but continues for as long as research
> findings are read and used. And it's truer still if it is all online and
> openly accessible.
>
> The researcher who thinks his research impact can be spuriously enhanced
> by producing many small, "salami-sliced" publications instead of fewer
> substantial ones will stand out against peers who publish fewer, more
> substantial papers. Paper lengths and numbers are metrics too; hence
> they too can be part of the metric equation. And if most or all peers do
> salami-slicing, then it becomes a scale factor that can be factored out
> (and the metric equation and its payoffs can be adjusted to discourage
> it).
>
> Citations inflated by self-citations or co-author group citations can
> also be detected and weighted accordingly. Robotically inflated download
> metrics are also detectable, nameable and shameable.
> Plagiarism is detectable too, when all full-text content is accessible
> online.
>
> The important thing is to get all these publications as well as their
> metrics out in the open for scrutiny by making them Open Access. Then
> peer and public scrutiny -- plus the analytic power of the algorithms
> and the Internet -- can collaborate to keep them honest.

From Jonathan.adams at EVIDENCE.CO.UK Fri Nov 16 09:28:19 2007
From: Jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams)
Date: Fri, 16 Nov 2007 14:28:19 -0000
Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again...
Message-ID:

I recall the earlier debate. I hear what you're saying, but disagree with the disconnect you imply ('the model is not specified'). An expert group knows very well the relationship running from measurement, via weighting, to model. Researchers are such an expert group, as are financial market makers.

But your point on THES rankings is well taken. Excellence does not come cheap. The amount that Harvard earned on its endowment fund last year ($5.7 billion growth) is equivalent to 50 per cent of the entire HEFCE grant to all English universities in the same period!

Sincere regards,

Jonathan Adams
Director, Evidence Ltd
+ 44 113 384 5680

From Jonathan.adams at EVIDENCE.CO.UK Fri Nov 16 09:33:31 2007
From: Jonathan.adams at EVIDENCE.CO.UK (Jonathan Adams)
Date: Fri, 16 Nov 2007 14:33:31 -0000
Subject: Continuous multi-metric research assessment
Message-ID:

Stevan

I think your enthusiasm is great, and long may it continue, but I am less certain about the transparency in your metrics utopia. There will of course be multiple metrics in the algorithm, but ultimately they condense around the funding allocated. So, at the point where citation metrics are combined with different kinds of variable such as funding, they have to condense to a single number to be weighted against the other factors.

But I take your other points. A long and winding road lies ahead.

Regards

Jonathan Adams
Director, Evidence Ltd
+ 44 113 384 5680

From harnad at ECS.SOTON.AC.UK Fri Nov 16 10:56:54 2007
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Fri, 16 Nov 2007 10:56:54 -0500
Subject: Continuous multi-metric research assessment
In-Reply-To: <81A4EE9059B88C4CBCB7F80231E065463FAF0B@evidence1.Evidence.local>
Message-ID:

On 16-Nov-07, at 9:33 AM, Jonathan Adams wrote:

> Stevan
> I think your enthusiasm is great, and long may it continue, but I am
> less certain about the transparency in your metrics utopia.
> There will of course be multiple metrics in the algorithm but
> ultimately they condense around the funding allocated. So, at the
> point where citation metrics are combined with different kinds of
> variable such as funding, they have to condense to a single number to
> be weighted against the other factors.

Jonathan:

"Condense on funding"? I am not sure what that means. If you have N metric predictors, one of which is prior funding, you first initialize them by multiply-regressing them on the criterion (the panel rankings), to validate them. That gives you initial beta weights on each of the N metrics. Each beta weight indicates what percentage of the variation in the criterion (the panel rankings) each metric can predict. Some metrics will have higher beta weights, some lower, some none.

Then comes the adjustment of the weights. If it should turn out that in some fields the prior-funding metric has a heavy beta weight, we may want to reduce that, so as not to allow the RAE rank to become just a multiplier on the prior-funding level. This would preserve the Dual Funding system. Otherwise, an RAE rank dominated by the prior-funding rank would simply "condense" the dual funding system into a single funding system.

Having initialized the beta weights by validating them against the panel rankings, some fields may want to calibrate them further: not just to down-weight prior funding but (to use your example) to up-weight applications journal publications, rather than allow the high-citation basic journal articles to dominate. The initial panel rankings are the launching point, but after that, the peer panels' role in continuous assessment should be to fine-tune the weights on each variable, according to the peers' criteria for the field.
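The initialization step described here can be sketched as an ordinary least-squares fit; the metric values and panel scores below are synthetic, purely to show the mechanics:

```python
# Sketch of validating a battery of metrics against panel rankings:
# regress the panel score on the candidate metrics; the fitted
# coefficients play the role of the initial weights. All data invented.
import numpy as np

rng = np.random.default_rng(0)

# Candidate metrics for 50 departments: e.g. citations, downloads, h-index
X = rng.random((50, 3))

# Pretend the panel ranking actually depended only on the first two
# metrics, plus a little noise:
panel_score = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.standard_normal(50)

# Least-squares fit recovers one weight (beta) per metric
weights, *_ = np.linalg.lstsq(X, panel_score, rcond=None)
print(weights.round(2))  # approximately [0.7, 0.3, 0.0]: metric 3 adds nothing
```

In practice one would standardize the variables and cross-validate rather than fit once, but the principle is the same: the fitted weights report how much each metric contributes to predicting the panel ranking, and those weights can then be recalibrated by the panels.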
This is not done on an ad hoc basis, to favor one institution or author over another (as institutions and authors are sometimes wont to do, self-servingly) but in order to generate weightings that are rational and equitable according to the peer panel's judgment of the needs of the field. In the old, non-metric RAE, the peer panels did all the ranking; in the new metric ranking, they simply fine-tune the metrically generated rankings, by adjusting the weights. And yes, the fact that the assessment will be open, continuous and multi-metric will not only be a source of information to all, but it will expose and protect against abuse; and it will allow the assessment system to be flexible and adaptive, based on objective data patterns, dynamically tuned, rather than rigid and a-prioristic. Best wishes, Stevan Harnad > Director, Evidence Ltd > + 44 113 384 5680 > > > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stevan Harnad > Sent: 16 November 2007 14:17 > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] Continuous multi-metric research assessment > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > On Fri, 16 Nov 2007, Jonathan Adams (Director, Evidence Ltd) wrote: > >> Stevan >> Following on from your comments at the end of last week (below) I > agree >> that it is possible tentatively to pick out 'over-production' of poor >> quality papers (although I am less optimistic about the comprehensive >> analytical detection of publication abuse you foresee). > > Jonathan, > > I think you may be greatly underestimating (1) the power of > multivariate > (as opposed to univariate) analysis, validation and weighting as well > as (2) the power of open access (i.e., online, public, pervasive, > continuous, and dynamic) metrics. 
> You get a completely different sense of what is possible, and how, if
> you think in terms of:
>
> (i) individual, isolated metrics, assessed at long intervals under
> closed scrutiny (like the current RAEs)
>
> or if you think instead in terms of:
>
> (ii) a large (possibly growing) battery of candidate metrics,
> assessed jointly and continuously rather than at long intervals, with
> the contribution of each metric to their joint predictive power
> initially validated against existing criteria that have been relied
> on before (such as the RAE panel rankings) and then updated
> dynamically, field by field, by adjusting the weights on each
> component metric -- and always under open scrutiny.
>
> Not only can "overproduction" of lightweight papers be detected and
> weighted by simply profiling on the joint relation between (say) the
> article count, the article citation count, the journal citation
> average ("impact factor") and the journal download count -- but so
> can other anomalous or abusive profiles be detected, exposed, and
> penalized and discouraged through weighting.
>
>> By contrast to over-production, do you think that an assessment
>> system that looks at total output would run the risk of suppressing
>> outputs that might be predicted to be cited less frequently?
>
> Not unless it is decided (for some unknown a-priori reason!) that a
> profile consisting of N highly cited papers plus M less cited papers
> is to be given a lower weight than a profile consisting of N highly
> cited papers plus 0 less cited papers!
>
>> UK research assessment currently looks at four outputs per
>> researcher, usually selected by the individual as their best
>> research.
>
> That, of course, was a foolish, arbitrary constraint all along: It
> was (well-meaningly) intended to minimise both salami-slicing and the
> number of papers the panel would have to read.
> But of course continuous OA metrics solve both problems, as they can
> detect and weight the salami-slicing profile, and panel-reading
> (after the validation phase) is no longer a factor, except as a
> periodical higher-level check on the continuous, dynamic weightings
> and profiles. (So let all papers be considered, continuously, and let
> 1000 metrics bloom, under open peer scrutiny, and panel monitoring
> and weight calibration!)
>
>> The proposal is that post-2008 the metrics assessment would be of
>> all output, creating a profile and then deriving a metric from that.
>
> "A" metric? Or a battery of metrics? (The "h-index" and its ilk are
> all examples of a-priori, unvalidated, fixed, 1-number metrics; what
> is needed is a rich multiple regression equation, with adjustable
> weights, validated initially against the 2001 and 2008 RAE panel
> rankings. You can add prewired metrics like the h-index to the
> battery, but don't use them *instead* of a weighted, multimetric
> battery.)
>
>> Is there a risk that researchers, realizing that outputs aimed at
>> practitioners often appear in relatively lower impact journals,
>> would then tend to reduce the number of papers they produced aimed
>> at transferring knowledge from the research base and concentrate on
>> outputs targeted at high-impact journals in the research-base core?
>> They would expect by doing so to avoid dilution of their citation
>> average.
>
> This would be faulty reasoning on the part of researchers, if there
> were a continuous, multi-metric equation in place, with its weights
> being dynamically updated under peer scrutiny to detect and weight
> exactly this sort of practice!
>
> If applications are valued in a field, add application metrics: Are
> certain journals more applications oriented? Crank up their weight!
> Is it better to partition citations into basic vs. applied journals,
> with differential weights for citations in the one and the other in
> certain fields?
> Do so. Don't just think of a univariate measure (citations, or
> h-index) and how authors might bias that measure by altering the kind
> of journals they publish in, or the number of articles they submit
> for assessment! Think multivariately, dynamically, and openly.
>
> New applications metrics, besides journal types, might include
> downloads, or even (if possible) industrial IP downloads; patents are
> also metrics. Depending on the field, there will no doubt be other
> measurable, monitorable performance indicators for applications
> impact (and for teaching impact too!).
>
> It's not all about ways to bias one single citation metric, but about
> developing richer metrics. If the worry is about encouraging
> technology transfer and applications flow, find objective measures of
> it and plug them into the equation. Don't treat it as just a default
> bias, to be minimized by cutting down on metrics!
>
>> The net effect could be to reduce the UK's volume of less frequently
>> cited papers, but also to reduce information flow to the people who
>> turn research into practice.
>
> This is again univariate thinking. Yes, citation counts are
> important, but there are citations and citations. Basic citations,
> applied citations. Basic publications, applied publications. Not only
> do fields have to compare like with like, but their preferred blends
> can be weighted and rewarded accordingly.
>
> (This, by the way, is not "biasing", any more than mandating and
> rewarding publication itself is biasing: it is providing incentives
> for the kind of research performance we want, and that we want to
> reward. Continuous multivariate OA metrics allow preferred profiles
> to be rewarded and encouraged dynamically. Cheater detection allows
> self-citations, robotic or anomalous download inflation,
> salami-slicing, etc. to be detected, exposed and penalized. Metrics
> are not ends in themselves, they are merely objective performance
> correlates.
> They are easy to abuse singly, but much harder to abuse jointly, and
> in the open.)
>
> The UK's RAE is unique; so is its new conversion to metrics. The UK
> is hence leading the world in research metrics. Don't think cravenly
> in terms of how the UK will stack up in terms of existing,
> unvalidated, univariate metrics. Think in terms of establishing
> metric standards for the entire world research community in the
> metric OA era!
>
> Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated
> online RAE CVs Linked to University Eprint Archives: Improving the
> UK Research Assessment Exercise whilst making it cheaper and easier.
> Ariadne 35. http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm
>
> Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open
> Research Web: A Preview of the Optimal and the Inevitable, in
> Jacobs, N., Eds. Open Access: Key Strategic, Technical and Economic
> Aspects, chapter 21. Chandos. http://eprints.ecs.soton.ac.uk/12453/
>
> Harnad, S. (2007) Open Access Scientometrics and the UK Research
> Assessment Exercise. In Proceedings of 11th Annual Meeting of the
> International Society for Scientometrics and Informetrics 11(1),
> pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
> http://eprints.ecs.soton.ac.uk/13804/
>
> Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan,
> A. (2007) Incentivizing the Open Access Research Web:
> Publication-Archiving, Data-Archiving and Scientometrics. CTWatch
> Quarterly 3(3). http://eprints.ecs.soton.ac.uk/14418/
>
> Stevan Harnad
>
>> Jonathan Adams
>>
>> Director, Evidence Ltd
>> + 44 113 384 5680
>>
>> Comment on: "Bibliometrics could distort research assessment"
>> Guardian Education, Friday 9 November 2007
>> http://education.guardian.co.uk/RAE/story/0,,2207678,00.html
>>
>> Yes, any system (including democracy, health care, welfare,
>> taxation, market economics, justice, education and the Internet)
>> can be abused.
>> But abuses can be detected, exposed and punished, and this is
>> especially true in the case of scholarly/scientific research, where
>> "peer review" does not stop with publication, but continues for as
>> long as research findings are read and used. And it's truer still
>> if it is all online and openly accessible.
>>
>> The researcher who thinks his research impact can be spuriously
>> enhanced by producing many small, "salami-sliced" publications
>> instead of fewer substantial ones will stand out against peers who
>> publish fewer, more substantial papers. Paper lengths and numbers
>> are metrics too, hence they too can be part of the metric equation.
>> And if most or all peers do salami-slicing, then it becomes a scale
>> factor that can be factored out (and the metric equation and its
>> payoffs can be adjusted to discourage it).
>>
>> Citations inflated by self-citations or co-author group citations
>> can also be detected and weighted accordingly. Robotically inflated
>> download metrics are also detectable, nameable and shameable.
>> Plagiarism is detectable too, when all full-text content is
>> accessible online.
>>
>> The important thing is to get all these publications as well as
>> their metrics out in the open for scrutiny by making them Open
>> Access. Then peer and public scrutiny -- plus the analytic power of
>> the algorithms and the Internet -- can collaborate to keep them
>> honest.

From harnad at ECS.SOTON.AC.UK Fri Nov 16 09:27:08 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Fri, 16 Nov 2007 09:27:08 -0500 Subject: Continuous multi-metric research assessment In-Reply-To: <003801c8284d$e39d7880$6502a8c0@loet> Message-ID:

There is no contradiction or conflict between generating open, continuous metrics and using them to measure and reward research performance, continuously. Yes, every formula can be abused.
But abuses can be detected -- especially in the form of anomalous profiles within a multivariate formula. A univariate metric is far easier to abuse than a profile of inter-metric relations, both within a single author and across authors in a field. Abuses can be penalized and formulas can be adjusted. And open scrutiny is itself a deterrent to cheating and manipulation, especially for academics. -- SH

On 16-Nov-07, at 7:40 AM, Loet Leydesdorff wrote:

> Administrative info for SIGMETRICS (for example unsubscribe):
> http://web.utk.edu/~gwhitney/sigmetrics.html
>
> If I correctly remember, the Leiden normalization implies that one
> compares the citation scores with the expected citation scores given
> the publication profile of a group. You are right that a game follows
> naturally: if one publishes in journals below one's level, one can
> expect to obtain a higher than expected citation score. Since all
> distributions are skewed, this effect would be reinforced.
>
> Hitherto, this has not been a major problem because the scores were
> not directly related as input to funding.
>
> With best wishes,
>
> Loet
>
>> -----Original Message-----
>> From: ASIS&T Special Interest Group on Metrics
>> [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Jonathan Adams
>> Sent: Friday, November 16, 2007 1:26 PM
>> To: SIGMETRICS at LISTSERV.UTK.EDU
>> Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The
>> Babblarazzi Are At It Again...
>>
>> Administrative info for SIGMETRICS (for example unsubscribe):
>> http://web.utk.edu/~gwhitney/sigmetrics.html
>>
>> I'm not planning this. I understand the recommendation is from our
>> colleagues at Leiden, in a report to HEFCE that will be made public
>> later this month.
>> So far as outputs in Thomson-indexed journals go, I think it's
>> feasible (and we have been analysing some scenarios using earlier
>> data reconciliation and analyses we did for HEFCE) but I wouldn't
>> recommend it because of the game-playing that I suspect would ensue.
>>
>> Jonathan Adams
>>
>> Director, Evidence Ltd
>> + 44 113 384 5680
>>
>> -----Original Message-----
>> From: ASIS&T Special Interest Group on Metrics
>> [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Loet Leydesdorff
>> Sent: 16 November 2007 12:05
>> To: SIGMETRICS at listserv.utk.edu
>> Subject: Re: [SIGMETRICS] "Bibliometric Distortion": The
>> Babblarazzi Are At It Again...
>>
>> Administrative info for SIGMETRICS (for example unsubscribe):
>> http://web.utk.edu/~gwhitney/sigmetrics.html
>>
>>> The proposal is that post-2008 the metrics assessment would be of
>>> all output, creating a profile and then deriving a metric from
>>> that.
>>
>> Dear Jonathan,
>>
>> How are you planning to do this? Interesting.
>>
>> Best wishes,
>>
>> Loet

From loet at LEYDESDORFF.NET Fri Nov 16 13:47:41 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 16 Nov 2007 19:47:41 +0100 Subject: "Bibliometric Distortion": The Babblarazzi Are At It Again... In-Reply-To: <81A4EE9059B88C4CBCB7F80231E065463FAF08@evidence1.Evidence.local> Message-ID:

> I recall the earlier debate. I hear what you're saying, but disagree
> with the disconnect you imply ('the model is not specified'). An
> expert group knows very well the relationship between measurement via
> weighting to model. Researchers are such an expert group, as are
> financial market makers.
> But your point on THES rankings is well taken. Excellence does not
> come cheap. The amount that Harvard earned on its endowment fund last
> year ($5.7 Billion growth) is equivalent to 50 per cent of the entire
> HEFCE grant to all English universities in the same period!
> Sincere regards,
>
> Jonathan Adams

Yes, everybody entertains a model. However, there is a difference between models which facilitate and legitimate managerial and political decision-making and predictive models. How much of the variance in publication rates can be explained in terms of past citation rates and how much in terms of prior funding? Stevan Harnad proposes to make the ranking the predicted variable. :-) There is literature about sexism and nepotism in peer review (Wenneras and Wold, 1997). The advantage of this model would be that the predicting variables can be manipulated by the academics to a certain extent.

Best wishes,

Loet

________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/

From Jessica.Shepherd at GUARDIAN.CO.UK Sat Nov 17 01:01:14 2007 From: Jessica.Shepherd at GUARDIAN.CO.UK (Jessica Shepherd) Date: Sat, 17 Nov 2007 06:01:14 +0000 Subject: Jessica Shepherd/Guardian/GNL is out of the office. Message-ID: I will be out of the office starting 17/11/2007 and will not return until 24/11/2007. I will be in China from November 17 until November 24. I will be checking my emails. For any urgent messages, please contact Sharon Bainbridge on 020 7239 9943 or Stephanie Kerstein on 020 7239 9559. Many thanks. Jessica ------------------------------------------------------------------ Visit Guardian Unlimited - the UK's most popular newspaper website http://guardian.co.uk http://observer.co.uk ------------------------------------------------------------------ The Newspaper Marketing Agency Opening Up Newspapers http://www.nmauk.co.uk ------------------------------------------------------------------ This e-mail and all attachments are confidential and may also be privileged.
If you are not the named recipient, please notify the sender and delete the e-mail and all attachments immediately. Do not disclose the contents to another person. You may not use the information for any purpose, or store, or copy, it in any way. Guardian News & Media Limited is not liable for any computer viruses or other material transmitted with or as part of this e-mail. You should employ virus checking software. Guardian News & Media Limited A member of Guardian Media Group PLC Registered Office Number 1 Scott Place, Manchester M3 3GG Registered in England Number 908396 From harnad at ECS.SOTON.AC.UK Sun Nov 18 08:45:39 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Sun, 18 Nov 2007 13:45:39 +0000 Subject: Reminder: 9th Dec deadline for Open Repositories 2008 CFP (fwd) Message-ID: ** Apologies for Cross-Posting ** ---------- Forwarded message ---------- Date: Sun, 18 Nov 2007 09:46:27 +0000 From: Leslie Carr To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM AT LISTSERVER.SIGMAXI.ORG Subject: Reminder: 9th Dec deadline for Open Repositories 2008 CFP OPEN REPOSITORIES 2008: Deadline 9th Dec 2007 for Papers & Panels (Calls for Posters and User Group Participation to follow later) http://www.openrepositories.org/2008 We invite developers, researchers and practitioners to submit papers describing novel experiences or developments in the construction and use of digital repositories. Submissions of UP TO 4 pages in length are requested for review. See the CFP page at the conference site for submission instructions. Submissions for panel discussions are also requested. Repositories are being deployed in a variety of settings (research, scholarship, learning, science, cultural heritage) and across a range of scales (subject, national, regional, institutional, project, lab, personal). 
The aim of this conference is to address the technical, managerial, practical and theoretical issues that arise from diverse applications of repositories in the increasingly pervasive information environment. A programme of papers, panel discussions, poster presentations, workshops, tutorials and developer coding sessions will bring together all the key stakeholders in the field. Open source software community meetings for the major platforms (EPrints, DSpace and Fedora) will also provide opportunities to advance and co-ordinate the development of repository installations across the world. IMPORTANT DATES AND CONTACT INFO Paper Submission Deadline: Friday 7th December 2007 Notification of Acceptance: Monday January 21st 2008 Submission of DSpace/EPrints/Fedora User Group Presentations: TBA Submission of Posters: Monday 4th February 2008 Conference: April 1-4, 2008. University of Southampton, UK. Enquiries to: Program Committee Chair (e.lyon AT ukoln.ac.uk) or General Chair (lac AT ecs.soton.ac.uk) CONFERENCE THEMES ==================== The themes of the conference include (but are not limited to) the following: TRANSFORMATIONAL CHANGE IN THE KNOWLEDGE WORKPLACE - Embedding repositories in business processes and individual workflow. - Change Management - Advocacy and Culture Change - Policy development and policy lag. PROFESSIONALISM AND PRACTICE - Professional Development - Workforce Capacity - Skills and Training - Roles and Responsibilities SUSTAINABILITY - Economic sustainability and new business models, - Technical sustainability of a repository over time, including platform change and migration. - Technical sustainability of holdings over time. Preservation. Audit, certification. Trust. Assessment tools. - Managing sustainability failure - when a repository outlives its organisation or its organisational commitment. 
LEGAL ISSUES - Embargoes - Licensing and Digital Rights Management - Mandates - Overcoming legislative barriers - Contractual relationships - facilitating and monitoring - International and cross-border issues SUCCESSFUL INTEROPERABILITY - Content standards - discipline-specific vs general - Metadata standards and application profiles - Quality standards and quality control processes - Achieving interchange in multi-disciplinary or multi-institutional environments - Semantic web and linked data - Identifier management for data and real world resources - Access and authentication MODELS, ARCHITECTURES AND FRAMEWORKS - Beyond OAIS - Federations - Institutional Models - uber- or multi-repository environments - Adapting to changing e-infrastructure: SOA, services, cloud computing - Scalability VALUE CHAINS and SCHOLARLY COMMUNICATIONS - Multi-stakeholder value: preservation, open access, research, management, administration - Multi-agenda, multi-function, multi-purpose repositories - Usefulness and usability - Reference, reuse, reanalysis and repurposing of content - Citation of data / learning objects - Changes in scholarly practice - New benchmarks for scholarly success - Repository metrics - Bibliometrics: usage and impact SERVICES BUILT ON REPOSITORIES - OAI services - User-oriented services - Mashups - Social networking - Commentary / tagging - Searching / information discovery - Alerting - Mining - Visualisation - Integration with Second Life and Virtual environments USE CASES FOR REPOSITORIES - E-research/E-science (e.g., data and publication; collaborative services) - E-scholarship - Institutional repositories - Discipline-oriented repositories - Open Access - Scholarly Publishing - Digital Library - Cultural Heritage - Scientific repositories / data repositories - Interdisciplinary, cross-disciplinary and cross-sectoral repositories From krobin at JHMI.EDU Mon Nov 19 14:11:35 2007 From: krobin at JHMI.EDU (Karen Robinson) Date: Mon, 19 Nov 2007 14:11:35
-0500 Subject: Frederick Sachs "Is the NIH budget saturated? Why hasn't more funding meant more publications?" In-Reply-To: <001301c82881$2cdecbb0$6502a8c0@loet> Message-ID: FYI Article I came across in browsing... http://www.the-scientist.com/news/home/53580/ -- Karen A. Robinson Internal Medicine and Health Sciences Informatics, Medicine Johns Hopkins University 1830 East Monument Street, Room 8069 Baltimore, MD 21287 410-502-9216 (voice) 410-955-0825 (fax) krobin at jhmi.edu From A.Chiner-Arias at WARWICK.AC.UK Tue Nov 20 05:56:27 2007 From: A.Chiner-Arias at WARWICK.AC.UK (Chiner Arias, Alejandro) Date: Tue, 20 Nov 2007 10:56:27 -0000 Subject: University Institutional Repository impact on citation of journal articles Message-ID: Does article self-archiving in an Institutional Repository increase citation of the articles that are later published in peer-reviewed scholarly journals? The literature I am trying to find should provide empirical evidence to answer this question and should be specifically about self-archiving in Institutional Repositories. I am aware of the following bibliography and I know there are plenty of studies about the citation impact of Open Access in general, including OA journals and cross-institutional or subject repositories like arXiv. I am also aware of studies about the impact of OAI searchable archiving. All of which I find cogent and I do not need to be persuaded. http://opcit.eprints.org/oacitation-biblio.html Unfortunately the above is not enough for my work. I need something specifically about Institutional Repositories, understood as a university's green OA archive for the research by its academic staff. Please can I ask the list if you know of any studies along these lines? Many thanks for your help. Alejandro ___________________________________ Alejandro Chiner, Service Innovation Officer, University of Warwick Library Research & Innovation Unit, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom.
Tel: +(44/0) 24 765 23251, Fax: +(44/0) 24 765 24211, a.chiner-arias at warwick.ac.uk http://www.warwick.ac.uk/go/riu ___________________________________ From bgsloan2 at YAHOO.COM Tue Nov 20 11:25:01 2007 From: bgsloan2 at YAHOO.COM (B.G. Sloan) Date: Tue, 20 Nov 2007 08:25:01 -0800 Subject: Qualitative citation analysis? Message-ID: A discussion on the liblicense list reminded me of something I asked about a couple of years ago in another forum...just curious if anyone on SIGMETRICS can point to some recent relevant studies... Most of the citation analysis studies I see nowadays involve quantitative analyses for the most part. Just wondering if many people are into studying citations from a qualitative standpoint? For example, in a lot of studies a citation is a citation is a citation, with little concern for how a given paper was cited qualitatively within the context of the citing paper. For example, an author could cite a paper very positively, or the citation could be pretty much value-neutral, or the citation could be negative. But in a quantitative analysis these various types of citations pretty much all carry the same weight. When I looked into this several years ago, a number of people alerted me to some qualitative citation studies. The interesting thing is that most of these studies were maybe 20 years old, at least. It almost seemed like people got away from doing qualitative citation analyses as it got easier to do quantitative analyses, i.e., as databases such as the ISI indices became available in electronic form. Anyway, I am interested in hearing about relatively recent qualitative citation analysis. Thanks, Bernie Sloan --------------------------------- Get easy, one-click access to your favorites. Make Yahoo! your homepage.
From kmedina at UIUC.EDU Tue Nov 20 15:05:04 2007 From: kmedina at UIUC.EDU (Karen Medina) Date: Tue, 20 Nov 2007 14:05:04 -0600 Subject: Qualitative citation analysis? Message-ID: Bernie wrote: >I am interested in hearing about relatively > recent qualitative citation analysis. Dear Bernie, One paper you might be interested in is a study of how Gerard Salton was mis-cited and mis-interpreted for years (Dubin (2004). The Most Influential Paper Gerard Salton Never Wrote. Library Trends. http://www.ideals.uiuc.edu/bitstream/2142/1697/2/Dubin748764.pdf ) It is an interesting question you ask, and I think it has a complex answer. As a new person to bibliometrics, let me try to join the conversation early so that I can be corrected. First, I think I'll ask what it is that qualitative analysis adds to citation studies? The greatest thing I think it adds is context that can be used for document retrieval systems (I think several people have mentioned this, Garfield for instance). Today, if you look at ISI, CiteSeer, and many indexing and abstracting software packages, you will notice that more and more of them are retrieving the context of the citation and presenting it to the user. So, in a way, systems are allowing the user to do the qualitative analyses that interest you. I'll take on your point that the negative citations are different than positive citations. If what we are measuring is an impact on a field, then an author or paper that is negatively cited is still impacting the field. We have learned that negative citations tend to take more text space in the citing document. Perhaps we have more to learn about negative citations, but we have to critically evaluate what we want to measure.
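The citation-context retrieval described here can be sketched in miniature. This toy example is purely illustrative (the cue-word lists and the bracketed-citation convention are invented, and bear no relation to how ISI or CiteSeer actually work): it pulls out the sentence surrounding each citation marker and assigns a crude polarity label.

```python
import re

# Invented cue-word lists for a naive polarity judgement.
POSITIVE = {"seminal", "influential", "confirms", "extends"}
NEGATIVE = {"fails", "refutes", "contradicts", "disputes"}

def citation_contexts(text, marker=r"\[\d+\]"):
    """Yield (sentence, label) for each sentence containing a citation marker."""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if re.search(marker, sentence):
            words = {w.lower().strip(".,") for w in sentence.split()}
            if words & POSITIVE:
                label = "positive"
            elif words & NEGATIVE:
                label = "negative"
            else:
                label = "neutral"
            yield sentence, label

sample = ("Salton's model remains influential [1]. "
          "Later work disputes the attribution [2]. "
          "We follow the notation of [3].")
for sentence, label in citation_contexts(sample):
    print(label, "->", sentence)
```

Real citation-context classifiers need far richer evidence than cue words, which is part of why qualitative analysis takes so much work to do well.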
Henry Small's 1978 paper, Cited Documents as Concept Symbols, summed up what had been happening with qualitative studies -- that some people were interested in the motivation for citing, while others were wanting to give some value judgement to the citation (calling some citations perfunctory, others organic). Motivation is really hard to judge. Some thought that an outsider was a better judge -- more objective. Some thought that an expert in the field was a better judge of motivation. But it seemed that each study developed a different classification scheme. Personally, I think the motivation behind citation behavior can best be judged by the author(s) of the citing document. Self-citation and crony citations are not as wide-spread as some people thought, but the scientific community as a whole is aware of how the practice of such citations could inflate prestige temporarily. You mention that quantitative studies treat all citations as equal. Well, if we are measuring impact on a field or prestige, to some degree, a citation is a citation is a citation. Citation context does have a lot of potential, but it takes a lot of work to analyze well. Systems are making it easier, and there are papers out there that are reporting on it. But as systems make qualitative studies easier, perhaps, they are decreasing the need for us to do such studies. They are already implementing what we would be proving. -karen medina student

---- Original message ----
>Date: Tue, 20 Nov 2007 08:25:01 -0800
>From: "B.G. Sloan"
>Subject: [SIGMETRICS] Qualitative citation analysis?
>To: SIGMETRICS at LISTSERV.UTK.EDU

From lutz.bornmann at GESS.ETHZ.CH Wed Nov 21 02:51:26 2007 From: lutz.bornmann at GESS.ETHZ.CH (Bornmann Lutz) Date: Wed, 21 Nov 2007 08:51:26 +0100 Subject: AW: [SIGMETRICS] Qualitative citation analysis? Message-ID: Dear Bernie, Our paper entitled "What do citation counts measure?" might be of interest to you. It is a review of studies on citing behavior that has been accepted for publication in the Journal of Documentation. You can download the paper from my personal homepage: www.lutz-bornmann.de/Publications.htm Kind regards Lutz ----------------------------------------------------------------------------- Dr. Lutz Bornmann ETH Zurich, D-GESS Professorship for Social Psychology and Research on Higher Education Zaehringerstr.
24 / ZAE CH-8092 Zurich Phone: 0041 44 632 48 25 Fax: 0041 44 632 12 83 http://www.psh.ethz.ch/index_EN bornmann at gess.ethz.ch Download of publications: www.lutz-bornmann.de/Publications.htm ________________________________ Von: ASIS&T Special Interest Group on Metrics im Auftrag von Karen Medina Gesendet: Di 20.11.2007 21:05 An: SIGMETRICS at LISTSERV.UTK.EDU Betreff: Re: [SIGMETRICS] Qualitative citation analysis? Bernie wrote: >I am interested in hearing about relatively > recent qualitative citation analysis. Dear Bernie, One paper you might be interested in is a study of how Gerard Salton was mis-cited and mis-interpreted for years (Dubin, (2004). The Most Influential Paper Gerard Salton Never Wrote. Library Trends. http://www.ideals.uiuc.edu/bitstream/2142/1697/2/Dubin748764.pdf ) It is an interesting question you ask, and I think it has a complex answer. As a new person to bibliometrics, let me try to join the conversation early so that I can be corrected. First, I think I'll ask what it is that qualitative analysis adds to citation studies? The greatest thing I think it adds is context that can be used for document retrieval systems (I think several people have mentioned this, Garfield for instance). Today, if you look at ISI, CiteSeer, and many Indexing and Abstracting software, you will notice that more and more of them are retrieving the context of the citation and presenting it to the user. So, in a way, systems are allowing the user to do the qualitative analyses that interest you. I'll take on your point that the negative citations are different than positive citations. If what we are measuring is a impact on a field, then an author or paper that is negatively cited is still impacting the field. We have learned that negative citations tend to take more text space in the citing document. Perhaps we have more to learn about negative citations, but we have to critically evaluate what we want to measure. 
Henry Small's 1978 paper, Cited Documents as Concept Symbols, summed up what had been happening with qualitative studies -- that some people were interested in the motivation for citing, while others wanted to give some value judgement to the citation (calling some citations perfunctory, others organic). Motivation is really hard to judge. Some thought that an outsider was a better judge -- more objective. Some thought that an expert in the field was a better judge of motivation. But it seemed that each study developed a different classification scheme. Personally, I think the motivation behind citation behavior can best be judged by the author(s) of the citing document. Self-citation and crony citations are not as widespread as some people thought, but the scientific community as a whole is aware of how the practice of such citations could inflate prestige temporarily. You mention that quantitative studies treat all citations as equal. Well, if we are measuring impact on a field or prestige, to some degree, a citation is a citation is a citation. Citation context does have a lot of potential, but it takes a lot of work to analyze well. Systems are making it easier, and there are papers out there that are reporting on it. But as systems make qualitative studies easier, perhaps they are decreasing the need for us to do such studies. They are already implementing what we would be proving. -karen medina student ---- Original message ---- >Date: Tue, 20 Nov 2007 08:25:01 -0800 >From: "B.G. Sloan" >Subject: [SIGMETRICS] Qualitative citation analysis? >To: SIGMETRICS at LISTSERV.UTK.EDU > > A discussion on the liblicense list reminded me of > something I asked about a couple of years ago in > another forum...just curious if anyone on SIGMETRICS > can point to some recent relevant studies... > Most of the citation analysis studies I see nowadays > involve quantitative analyses for the most part.
From havemanf at CMS.HU-BERLIN.DE Wed Nov 21 07:07:47 2007 From: havemanf at CMS.HU-BERLIN.DE (Frank Havemann) Date: Wed, 21 Nov 2007 13:07:47 +0100 Subject: Looking for Shockley 1957 paper Message-ID: Does anyone have a scanned PDF version of this Nobel prize winner's paper about publication analysis? @article{shockley1957siv, title={{On the Statistics of Individual Variations of Productivity in Research Laboratories}}, author={Shockley, W.}, journal={Proceedings of the IRE}, volume={45}, number={3}, pages={279--290}, year={1957} } Thank you Frank Havemann *************************** Dr. Frank Havemann Department of Library and Information Science Humboldt University Dorotheenstr.
26 D-10099 Berlin Germany tel.: (0049) (030) 2093 4228 http://www.ib.hu-berlin.de/inf/havemann.html From j.hartley at PSY.KEELE.AC.UK Wed Nov 21 10:38:20 2007 From: j.hartley at PSY.KEELE.AC.UK (James Hartley) Date: Wed, 21 Nov 2007 15:38:20 -0000 Subject: Thanks Message-ID: Many thanks to those members of the list who completed my questionnaire for me on the readability of abstracts. Your help is much appreciated. If anyone else would like to have a go the site is still running at http://www.keele.ac.uk/depts/ps/jim07/abs2007.htm Thanks James Hartley School of Psychology Keele University Staffordshire ST5 5BG UK j.hartley at psy.keele.ac.uk http://www.keele.ac.uk/depts/ps/jhabiog.htm -------------- next part -------------- An HTML attachment was scrubbed... URL: From whitehd at DREXEL.EDU Wed Nov 21 12:59:30 2007 From: whitehd at DREXEL.EDU (Howard White) Date: Wed, 21 Nov 2007 12:59:30 -0500 Subject: Qualitative citation analysis? In-Reply-To: <984508.34177.qm@web57115.mail.re3.yahoo.com> Message-ID: Hi, Bernie and other contributors, I use algorithms and statistics in my citation research, but I think of it as basically qualitative in nature. A lot of it expands the insights of pioneers such as Gene Garfield and Henry Small. I've put a selection of my articles from the last 10 years below. For qualitative purposes, I'd particularly recommend "Authors as Citers over Time," "Citation Analysis and Discourse Analysis Revisited," "Reward, Persuasion, and the Sokal Hoax," and "Toward Ego-Centered Citation Analysis, " all in readily available sources. --Howard White White, Howard D. (2007). Combining Bibliometrics, Information Retrieval, and Relevance Theory: Part 1. First Examples of a Synthesis. Journal of the American Society for Information Science and Technology 58: 536-559 White, Howard D. (2007). Combining Bibliometrics, Information Retrieval, and Relevance Theory: Part 2. Implications for Information Science. 
Journal of the American Society for Information Science and Technology 58: 583-605. White, Howard D. (2005). On Extending Informetrics: An Opinion Paper. Proceedings of ISSI 2005, the 10th International Conference of the International Society for Scientometrics and Informetrics. Stockholm, Sweden: Karolinska University Press. Vol. 2: 442-449. White, Howard D. (2004). Reward, Persuasion, and the Sokal Hoax: A Study in Citation Identities. Scientometrics 60: 93-120. White, Howard D. (2004). Citation Analysis and Discourse Analysis Revisited. Applied Linguistics 25: 89-116. White, Howard D., Barry Wellman, and Nancy Nazer. (2004). Does Citation Reflect Social Structure? Longitudinal Evidence from the "Globenet" Interdisciplinary Research Group. Journal of the American Society for Information Science and Technology 55: 111-126. White, Howard D., Xia Lin, Jan W. Buzydlowski, Chaomei Chen. (2004). User-Controlled Mapping of Significant Literatures. Proceedings of the National Academy of Sciences 101 (suppl. 1), April 6, 2004. 5297-5302. White, Howard D. (2003). Citation Communities. In Encyclopedia of Community; From the Village to the Virtual World, Karen Christensen and David Levinson, eds. Thousand Oaks, CA: Sage. v.1: 141-143. White, Howard D. (2003). Pathfinder Networks and Author Cocitation Analysis: A Remapping of Paradigmatic Information Scientists. Journal of the American Society for Information Science and Technology 54: 423-434. White, Howard D. (2001). Author-Centered Bibliometrics through CAMEOs: Characterizations Automatically Made and Edited Online. Scientometrics 51: 607-637. White, Howard D. (2001). Authors as Citers over Time. Journal of the American Society for Information Science 52: 87-108. White, Howard D., Xia Lin, Jan Buzydlowski. (2001). The Endless Gallery: Visualizing Authors' Citation Images in the Humanities. Proceedings of the Annual Meeting of the American Society of Information Science and Technology, v. 38. Medford, NJ: Information Today.
182-189. White, Howard D. (2000). Toward Ego-Centered Citation Analysis. In The Web of Knowledge: A Festschrift in Honor of Eugene Garfield, Blaise Cronin and Helen Barsky Atkins, eds. Medford, NJ: Information Today (ASIS Monograph Series). 475-496. White, Howard D., and Katherine W. McCain. (1998). Visualizing a Discipline: An Author Co-citation Analysis of Information Science, 1972-1995. Journal of the American Society for Information Science 49: 327-355. [Winner of Best JASIST Paper Award for 1998.] From rhill at ASIS.ORG Wed Nov 21 13:06:29 2007 From: rhill at ASIS.ORG (Richard Hill) Date: Wed, 21 Nov 2007 13:06:29 -0500 Subject: Interdisciplinary Faculty Appointment in Healthcare Informatics Message-ID: [Posted by request. Dick Hill] Drexel University Interdisciplinary Faculty Appointment in Healthcare Informatics Drexel University's College of Information Science and Technology, College of Nursing and Health Professions, School of Public Health, and College of Medicine are seeking a tenure-track faculty member, at any level, for a joint appointment. We are seeking candidates in the broad area of healthcare informatics. As part of a university-wide healthcare informatics initiative, we are especially interested in applicants who either complement or strengthen existing interdisciplinary activities in this area. We are particularly interested in applicants who may have expertise in one or more of the following areas: health literacy, electronic medical records or health information management. Responsibilities include a contribution to teaching in one or more of the Colleges and Schools, development of a strong program of fundable research, and collaboration with colleagues across Drexel University to develop interdisciplinary education and research programs in healthcare informatics.
Qualifications include an earned doctorate in Information Science, Nursing, Public Health, Medicine or a closely related field with a demonstrated commitment to research and teaching. Candidates for a senior position should have an established research record and success in obtaining external research funding. Formal experience in the health sciences industry is a plus. An understanding of community-based medicine is also of value. Salary and rank will be based on qualifications and experience. The position is available immediately, and the search will continue until the position is filled. Drexel is a privately endowed technology university founded in 1891. With approximately 16,000 students, it has one of the largest undergraduate cooperative education programs in the nation, with formal relationships in place with over 1,500 local, national and multinational companies. Drexel is located on Philadelphia's Avenue of Technology in University City and at the hub of the academic, cultural and historical resources of the nation's fifth largest metropolitan region. Philadelphia is also the midpoint of a mid-Atlantic technology corridor that stretches from New York City (100 miles north) to Washington, D.C. (135 miles south). Send letter of application, curriculum vita and contact information for three references to: Dr. Denise E. Agosto, Chair IST Search Committee The iSchool at Drexel Drexel University 3141 Chestnut Street Philadelphia, PA 19104 E-mail: faculty-search at ischool.drexel.edu _____ Richard B. Hill Executive Director American Society for Information Science and Technology 1320 Fenwick Lane, Suite 510 Silver Spring, MD 20910 Fax: (301) 495-0810 Voice: (301) 495-0900 From bgsloan2 at YAHOO.COM Wed Nov 21 14:54:24 2007 From: bgsloan2 at YAHOO.COM (B.G. Sloan) Date: Wed, 21 Nov 2007 11:54:24 -0800 Subject: AW: [SIGMETRICS] Qualitative citation analysis? In-Reply-To: <75BF4A0763D78D42A50F3A15584302A3013642CF@EX0.d.ethz.ch> Message-ID: Thanks! 
Bornmann Lutz wrote: Dear Bernie, Our paper entitled "What do citation counts measure?" might be of interest to you. It is a review of studies on citing behavior that is accepted for publication in the Journal of Documentation. You can download the paper from my personal homepage: www.lutz-bornmann.de/Publications.htm Kind regards Lutz
-------------- next part -------------- An HTML attachment was scrubbed... URL: From harnad at ECS.SOTON.AC.UK Thu Nov 22 11:57:38 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Thu, 22 Nov 2007 16:57:38 +0000 Subject: UK Research Evaluation Framework: Validate Metrics Against Panel Rankings Message-ID: ** Cross-Posted** Fully Hyperlinked version of this posting: http://openaccess.eprints.org/index.php?/archives/333-guid.html SUMMARY: Three things need to be remedied in the UK's proposed HEFCE/RAE Research Evaluation Framework: http://www.hefce.ac.uk/pubs/hefce/2007/07_34/ (1) Ensure as broad, rich, diverse and forward-looking a battery of candidate metrics as possible -- especially online metrics -- in all disciplines. (2) Make sure to cross-validate them against the panel rankings in the last parallel panel/metric RAE in 2008. The initialized weights can then be fine-tuned and optimized by peer panels in ensuing years. (3) Stress that it is important -- indeed imperative -- that all University Institutional Repositories (IRs) now get serious about systematically archiving all their research output assets (especially publications) so they can be counted and assessed (as well as accessed!), along with their IR metrics (downloads, links, growth/decay rates, harvested citation counts, etc.). If these three things are systematically done -- (1) comprehensive metrics, (2) cross-validation and calibration of weightings, and (3) a systematic distributed IR database from which to harvest them -- continuous scientometric assessment of research will be well on its way worldwide, making research progress and impact more measurable and creditable, while at the same time accelerating and enhancing it. Once one sees the whole report, it turns out that the HEFCE/RAE Research Evaluation Framework is far better, far more flexible, and far more comprehensive than is reflected in either the press release or the Executive Summary. 
It appears that there is indeed the intention to use many more metrics than the three named in the executive summary (citations, funding, students), that the metrics will be weighted field by field, and that there is considerable open-mindedness about further metrics and about corrections and fine-tuning with time. Even for the humanities and social sciences, where "light touch" panel review will be retained for the time being, metrics too will be tried and tested. This is all very good, and an excellent example for other nations, such as Australia (also considering national research assessment with its Research Quality Framework), the US (not very advanced yet, but no doubt listening) and the rest of Europe (also listening, and planning measures of its own, such as EurOpenScholar). There is still one prominent omission, however, and it is a crucial one: The UK is conducting one last parallel metrics/panel RAE in 2008. That is the last and best chance to test and validate the candidate metrics -- as rich and diverse a battery of them as possible -- against the panel rankings. In all other fields of metrics -- biometrics, psychometrics, even weather forecasting metrics -- before deployment the metric predictors first need to be tested and shown to be valid, which means showing that they do indeed predict what they were intended to predict. That means they must correlate with a "criterion" metric that has already been validated, or that has "face-validity" of some kind. The RAE has been using the panel rankings for two decades now (at a great cost in wasted time and effort to the entire UK research community -- time and effort that could instead have been used to conduct the research that the RAE was evaluating: this is what the metric RAE is primarily intended to remedy).
But if the panel rankings have been unquestioningly relied upon for 2 decades already, then they are a natural criterion against which the new battery of metrics can be validated, initializing the weights of each metric within a joint battery, as a function of what percentage of the variation in the panel rankings each metric can predict. This is called "multiple regression" analysis: N "predictors" are jointly correlated with one (or more) "criterion" (in this case the panel rankings, but other validated or face-valid criteria could also be added, if there were any). The result is a set of "beta" weights on each of the metrics, reflecting their individual predictive power, in predicting the criterion (panel rankings). The weights will of course differ from discipline to discipline. Now these beta weights can be taken as an initialization of the metric battery. With time, "super-light" panel oversight can be used to fine-tune and optimize those weightings (and new metrics can always be added too), to correct errors and anomalies and make them reflect the values of each discipline. (The weights can also be systematically varied to use the metrics to re-rank in terms of different blends of criteria that might be relevant for different decisions: RAE top-sliced funding is one sort of decision, but one might sometimes want to rank in terms of contributions to education, to industry, to internationality, to interdisciplinarity. Metrics can be calibrated continuously and can generate different "views" depending on what is being evaluated. But, unlike the much abused "university league table," which ranks on one metric at a time (and often a subjective, opinion-based one rather than an objective one), the RAE metrics could generate different views simply by changing the weights on some selected metrics, while retaining the other metrics as the baseline context and frame of reference.)
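The multiple-regression initialization described above can be sketched in a few lines. The data here are entirely synthetic (the departments, the three candidate metrics, and the "true" weights are invented), so this only illustrates the mechanics: regress the criterion (panel rankings) on the candidate metrics and read off the beta weights.

```python
# Synthetic illustration of initializing a metric battery's weights by
# regressing panel rankings on candidate metrics (ordinary least squares).
import numpy as np

rng = np.random.default_rng(0)
n_depts = 40
# Three hypothetical metrics per department, e.g. citations per paper,
# downloads, prior funding (standardized, invented values).
X = rng.normal(size=(n_depts, 3))
# Pretend the panels' rankings were driven mostly by the first two metrics.
true_beta = np.array([0.8, 0.5, 0.1])
panel_score = X @ true_beta + rng.normal(scale=0.2, size=n_depts)

# OLS fit: the recovered beta weights estimate each metric's predictive share
# of the variation in the panel rankings.
X_design = np.column_stack([np.ones(n_depts), X])  # intercept column
beta, *_ = np.linalg.lstsq(X_design, panel_score, rcond=None)
print(np.round(beta[1:], 1))  # should land near [0.8, 0.5, 0.1]
```

In the real exercise the criterion would be the 2008 panel rankings, the regression would be run per discipline, and the recovered weights would then be fine-tuned by panel oversight rather than frozen.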
To accomplish all that, however, the metric battery needs to be rich and diverse, and the weight of each metric in the battery has to be initialised in a joint multiple regression on the panel rankings. It is very much to be hoped that HEFCE will commission this all-important validation exercise on the invaluable and unprecedented database they will have with the unique, one-time parallel panel/ranking RAE in 2008. That is the main point. There are also some less central points: The report says -- a priori -- that REF will not consider journal impact factors (average citations per journal), nor author impact (average citations per author): only average citations per paper, per department. This is a mistake. In a metric battery, these other metrics can be included, to test whether they make any independent contribution to the predictivity of the battery. The same applies to author publication counts, number of publishing years, number of co-authors -- even to impact before the evaluation period. (Possibly included vs. non-included staff research output could be treated in a similar way, with number and proportion of staff included also being metrics.) The large battery of jointly validated and weighted metrics will make it possible to correct the potential bias from relying too heavily on prior funding, even if it is highly correlated with the panel rankings, in order to avoid a self-fulfilling prophecy which would simply collapse the Dual RAE/RCUK funding system into just a multiplier on prior RCUK funding. Self-citations should not be simply excluded: they should be included independently in the metric battery, for validation. So should measures of the size of the citation circle (endogamy) and degree of interdisciplinarity. Nor should the metric battery omit the newest and some of the most important metrics of all, the online, web-based ones: downloads of papers, links, growth rates, decay rates, hub/authority scores. 
All of these will be provided by the UK's growing network of UK Institutional Repositories. These will be the record-keepers -- for both the papers and their usage metrics -- and the access-providers, thereby maximizing their usage metrics. REF should put much, much more emphasis on ensuring that the UK network of Institutional Repositories systematically and comprehensively records its research output and its metric performance indicators. But overall, thumbs up for a promising initiative that is likely to serve as a useful model for the rest of the research world in the online era. References Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway. http://eprints.ecs.soton.ac.uk/7503/ Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton. http://eprints.ecs.soton.ac.uk/12130/ Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. In Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. http://eprints.ecs.soton.ac.uk/13804/ Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18. http://eprints.ecs.soton.ac.uk/14329/ Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3). 
http://eprints.ecs.soton.ac.uk/14418/ Fully Hyperlinked version of this posting: http://openaccess.eprints.org/index.php?/archives/333-guid.html Stevan Harnad AMERICAN SCIENTIST OPEN ACCESS FORUM: http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/ UNIVERSITIES and RESEARCH FUNDERS: If you have adopted or plan to adopt a policy of providing Open Access to your own research article output, please describe your policy at: http://www.eprints.org/signup/sign.php http://openaccess.eprints.org/index.php?/archives/71-guid.html http://openaccess.eprints.org/index.php?/archives/136-guid.html OPEN-ACCESS-PROVISION POLICY: BOAI-1 ("Green"): Publish your article in a suitable toll-access journal http://romeo.eprints.org/ OR BOAI-2 ("Gold"): Publish your article in an open-access journal if/when a suitable one exists. http://www.doaj.org/ AND in BOTH cases self-archive a supplementary version of your article in your own institutional repository. http://www.eprints.org/self-faq/ http://archives.eprints.org/ http://openaccess.eprints.org/ From harnad at ECS.SOTON.AC.UK Thu Nov 22 16:15:36 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Thu, 22 Nov 2007 21:15:36 +0000 Subject: University Institutional Repository impact on citation of journal articles In-Reply-To: <6C724FA6DC66CF4C971F385DBDE88722905569@ELDER.ads.warwick.ac.uk> Message-ID: On Tue, 20 Nov 2007, Chiner Arias, Alejandro wrote: > Does article self-archiving in an Institutional Repository increase > citation of the articles that are later published in peer-reviewed > scholarly journals? Yes: Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST), 57 (8). pp. 1060-1072.
http://eprints.ecs.soton.ac.uk/10713/ See also the work of Kurtz et al., and of Moed, on the "Early Advantage," in the OpCit Bibliography that you cite below. But there is an ambiguity in your question: A paper is just a paper or preprint until it is accepted for publication, and a postprint or article only after that. It is not clear whether your question is about whether preprint self-archiving increases later article citations, or whether you mean early postprint self-archiving (before the published version is available). (In all cases, OA increases impact.) > The literature I am trying to find should provide empirical evidence to > answer this question and should be specifically about self-archiving in > Institutional Repositories. Yes, but self-archiving *what*? > I am aware of the following bibliography and I know there are plenty of > studies about the citation impact of Open Access in general, including > OA journals and cross-institutional or subject repositories like arXiv. But repositories contain both preprints and postprints. > I am also aware of studies about the impact of OAI searchable archiving. > All of which I find cogent and I do not need to be persuaded. > http://opcit.eprints.org/oacitation-biblio.html > > Unfortunately the above is not enough for my work. I need something > specifically about Institutional Repositories, understood as a > university's green OA archive for the research by its academic staff. Again, some confusion. IRs are IRs. Inasmuch as they contain postprints, they are green OA archives. Preprints are optional, but they too enhance impact.
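For readers new to the Brody, Harnad & Carr (2006) result cited above, the shape of the test can be caricatured in a few lines: rank articles by early download counts, rank them by later citation counts, and ask how well the first ordering predicts the second. The numbers below are fabricated purely for illustration; the actual study used real usage logs and citation data.

```python
# Fabricated data: does an article's early download rank predict its later
# citation rank? A Spearman-style rank correlation (no ties in this toy set).
import numpy as np

early_downloads = np.array([120, 15, 300, 45, 80, 10, 220, 60])
later_citations = np.array([ 14,  2,  30,  8,  5,  1,  21,  6])

def ranks(a):
    # position of each element in the sorted order (0 = smallest)
    return np.argsort(np.argsort(a))

rho = np.corrcoef(ranks(early_downloads), ranks(later_citations))[0, 1]
print(round(rho, 2))  # strong but imperfect rank agreement
```

A high rank correlation of this kind is what makes early usage statistics useful as a predictor of later citation impact.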
Hope this helps, Stevan Harnad > Please can I ask from the list if you know of any studies along these > lines? > > Many thanks for your help. > > Alejandro > > ___________________________________ > Alejandro Chiner, Service Innovation Officer, > University of Warwick Library Research & Innovation Unit, > Gibbet Hill Road, Coventry CV4 7AL, United Kingdom. Tel: +(44/0) 24 765 > 23251, Fax: +(44/0) 24 765 24211, > a.chiner-arias -- warwick.ac.uk http://www.warwick.ac.uk/go/riu > ___________________________________ > > From A.Chiner-Arias at WARWICK.AC.UK Fri Nov 23 05:59:18 2007 From: A.Chiner-Arias at WARWICK.AC.UK (Chiner Arias, Alejandro) Date: Fri, 23 Nov 2007 10:59:18 -0000 Subject: University Institutional Repository impact on citation of journal articles In-Reply-To: A Message-ID: Stevan My interest is on any study that specifically concentrates on "Institutional Repository" self-archiving.
It could be an added bonus if the study makes a distinction between pre-prints and post-prints, publisher's proof copy or publisher's published version, or between immediate OA and embargo with metadata exposure only. For the purpose of IR advocacy, studies on "Open Access" advantage do little to persuade those who are already using central repositories outside the institution. Thank you very much for pinpointing those references. Alejandro ___________________________________ Alejandro Chiner, Service Innovation Officer, University of Warwick Library Research & Innovation Unit, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom. Tel: +(44/0) 24 765 23251, Fax: +(44/0) 24 765 24211, a.chiner-arias at warwick.ac.uk http://www.warwick.ac.uk/go/riu ___________________________________ From harnad at ECS.SOTON.AC.UK Fri Nov 23 08:39:50 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Fri, 23 Nov 2007 13:39:50 +0000 Subject: University Institutional Repository impact on citation of journal articles In-Reply-To: <6C724FA6DC66CF4C971F385DBDE88722949CCF@ELDER.ads.warwick.ac.uk> Message-ID: On Fri, 23 Nov 2007, Chiner Arias, Alejandro wrote: > My interest is on any study that specifically concentrates on > "Institutional Repository" self-archiving.
It could be an added bonus > if the study is making a distinction between pre-prints and post-prints, > publisher's proof copy or publisher's published, or between immediate > OA and embargo with metadata exposure only. > > For the purpose of IR advocacy, studies on "Open Access" advantage do > little to persuade those who are already using central repositories > outside the institution. Ah, now I understand. Such a study is possible, if done by hand (harvesting OA articles by robot, then hand-sifting them into (1) IR, (2) CR, and (3) ordinary website content, and then comparing the citation counts of each with the citation counts of matched non-OA articles in the same journal issue). I do not believe, however, that it would be worth the effort such a study would entail. It is unlikely that the existence or size of the OA advantage will co-vary much with the form of deposit. More important, even if it does, it is extremely unlikely that that would be because of something *intrinsic* about the kind of repository: It would simply reflect the accidental historic OA content situation today, with most articles (85%) not yet being made OA by their authors in *any* of the three ways, but with one CR (Arxiv) having the (incidental) advantage of a historic (15-year) head-start in self-archiving in its subject-matter (physics), and one other CR (PubMed Central) having the (incidental) advantage of being coupled with a widely-used subject-specific *non-OA* index (PubMed) that covers all and only its subject matter (biomedicine).
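The hand-sifted comparison described above could be scored along these lines. The sketch below is purely illustrative: the records, citation counts, deposit-locus labels, and the `oa_advantage` helper are all invented to show the matched-pairs logic, and are not drawn from any actual study:

```python
from statistics import mean

# Hypothetical records: each OA article is paired with a matched
# non-OA article from the same journal issue (the control).
pairs = [
    # (deposit_locus, oa_citations, matched_non_oa_citations)
    ("IR",      12, 7),
    ("IR",       5, 6),
    ("CR",      20, 9),
    ("CR",      15, 10),
    ("website",  8, 5),
]

def oa_advantage(pairs, locus):
    """Mean OA/non-OA citation ratio for one deposit locus."""
    ratios = [oa / non_oa for loc, oa, non_oa in pairs if loc == locus]
    return mean(ratios) if ratios else None

for locus in ("IR", "CR", "website"):
    print(locus, round(oa_advantage(pairs, locus), 2))
```

Comparing each OA article only against a non-OA article from the same journal issue is what controls for journal- and time-related differences in citability; the hard part, as the message says, is the hand-sifting by deposit locus, not the arithmetic.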
It is quite possible that articles self-archived in those two CRs (and *only* those two CRs, as there are no other such special cases) will have somewhat higher citation counts than articles self-archived in IRs or on ordinary websites today, because those two CRs (Arxiv and PubMed Central) have strong direct user traffic of their own, in one case because it is a CR of very long standing with a much larger than baseline share of the OA content (Arxiv) and in the other because it is associated with a particularly strong and heavily used host (PubMed Central and PubMed), whereas the distributed IR harvesters (OAIster -- as well as Google and Google Scholar) are all still struggling with very low-percentage OA content overall (15%), mixed across all subjects. The two high-percentage CRs, being restricted to a well-stocked subject area, currently have the advantage that you can not *only* search in them for a specific item or author, known in advance -- in that capability they have no advantage at all over IRs or websites -- but you can also search for keywords within a subject area, where those two CRs do not have the liability of bringing in a lot of irrelevant noise, or drawing a near-blank, as searching over all IRs or websites with OAIster or Google does -- *today*. The emphasis is on *today*, because it should be obvious that the advantage of Arxiv is not its centrality but the fact that it hosts most of the relatively high percentage of OA content in its subject. PubMed Central does not yet have much OA content, but it has the advantage of being associated with PubMed (which has *all* the content of its subject, mostly non-OA), and restricts search to that content alone. So there are two completely independent issues here: (1) percentage OA content and (2) subject-specific search. I think it is obvious that OA content is the decisive factor.
For if the content in all subjects were already 100% OA *and* in IRs, it would be a relatively simple matter to optimize OAIster and Google Scholar, the OA IR harvesters, to search over all and only a given subject matter -- especially with the help of the IRs' OAI metadata tags, which include the department of the author (and sometimes even, unnecessarily, subject-descriptor tags). In other words, restricting content by subject is a minor harvesting/tagging issue, not a deposit-locus issue, whereas generating OA content in the first place is the major problem (and major obstacle to access, usage and citations). But from what I've said so far, it still sounds as if it makes no difference whether the OA content is deposited in a CR, an IR or a website, just as long as it's OA, and there for the harvesting. This focus on the locus of the article misses the most fundamental point, which is the *source* of the article: For all articles have *authors* and (just about) all authors have institutions. So it is 85% of authors who are not yet self-archiving, but (just about) every one of those authors also has an institution that is likewise losing a good deal from the fact that they are not self-archiving and -- most important -- is in a position to *mandate* that their institutional authors self-archive in their own institutional repository, in a position to monitor that they do so, and in a position to reward compliance with the usual rewards for enhanced research impact. Institutions tile all of research output space. CRs do not; CRs make much more sense as harvesters. Moreover, neither CRs nor "subjects" (disciplines) are entities -- the way their authors' institutional employers are -- with any means to require or reward self-archiving. CRs rely entirely on authors' spontaneous inclination to self-archive -- and that, apart from the prominent but lonely exception of (certain parts of) physics, keeps hovering at 15%.
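The point that restricting search by subject is a harvesting/tagging issue rather than a deposit-locus issue can be made concrete: once harvested records carry a subject tag in their metadata, subject-restricted search is just a filter on the harvester side, regardless of where each record was deposited. A toy sketch (the record structure and tag values are invented, not a real OAI-PMH feed):

```python
# Hypothetical harvested records, each carrying OAI-style metadata
# tags; note the deposit locus is irrelevant to the subject filter.
records = [
    {"title": "Quark models",  "subject": "physics",     "locus": "IR"},
    {"title": "Gene pathways", "subject": "biomedicine", "locus": "IR"},
    {"title": "String theory", "subject": "physics",     "locus": "CR"},
]

def search_by_subject(records, subject):
    """Restrict a cross-repository search to one subject area."""
    return [r["title"] for r in records if r["subject"] == subject]

print(search_by_subject(records, "physics"))  # -> ['Quark models', 'String theory']
```

The filter never looks at the `locus` field, which is the argument in miniature: with adequate tagging, a harvester over distributed IRs can offer the same subject-scoped search that Arxiv or PubMed Central offers natively.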
Self-archiving mandates are at long last starting to be adopted, by both institutions and funders, but here too we have to be careful to think through the strategic question of the *locus* that the mandate should dictate for the self-archiving. It's fairly obvious that it makes no sense for institutions to mandate that their own authors deposit in CRs rather than in their own IRs: Apart from the fact that CRs do not yet exist in most fields, institutions are not in a position to monitor compliance for all possible CRs, nor do they stand to benefit nearly as much, in terms of institutional visibility and record-keeping, if they mandate CR deposit willy-nilly rather than local deposit in their own IRs. From an institutional point of view, a local IR deposit mandate makes most sense, leaving CRs to be the harvesters they ought to be, rather than the loci of deposit. What about the funder's standpoint? The biggest funder mandate momentum today is in biomedicine, inspired by Harold Varmus and PubMed Central, so most of the biomedical funder mandates have stipulated depositing in PubMed Central. This will soon substantially increase the content of PubMed Central, and the percentage OA in biomedicine. But there are several problems: (1) Not all biomedical research is funded by the mandating funders and, (2) not all research is funded at all, and, most important, (3) not all research is biomedical. Unlike institutions, funders in general (and biomedical research funders in particular) *do not tile all of research space*. Nor are they one single entity. Moreover, all the advantages -- for funders -- that accrue from mandating OA would remain if funders mandated each researcher's own *IR* as the locus of the deposit! That way funding mandates could reinforce institutional mandates, helping to tile all of research output space; and, if the funders desire, the content can be harvested into designated CRs as they see fit. 
The web is not a central locus, it is a distributed network of local websites. Nor is Google a central locus: it is a harvester of distributed content. This distributed-content/central-harvesting+search principle seems to have evolved naturally on the web. There is every reason for OA IRs and CRs to build upon it, rather than unimaginatively regressing to a time when central benefits could only be had if content had a central locus. So this is all a lengthy way of explaining why any incidental citation count advantages that some CRs might enjoy over IRs today would not at all mean what we might naively be tempted to interpret them to mean: that it is better to deposit in a CR than an IR. Rather, they mean that one CR (Arxiv) happens to have a 15-year head start and another (PubMed Central) may soon have a funder kick-start -- but any resultant citation advantages of CRs are just search advantages that are just as possible with distributed archiving, harvesting, and adapted search engines (e.g. citebase), and not at all intrinsic to deposit locus. The optimal locus for deposit is the IR. Both institutions and funders should mandate IR deposit. And CRs should be harvesters, not primary deposit loci. Coda: The query was about whether the OA citation advantage is greater for articles in OA CRs, OA IRs or OA websites, but one might just as well have asked about articles in OA journals! Eysenbach found that the OA citation advantage was greater for PNAS articles archived on the PNAS journal website than for those self-archived in the author's IR or website. This too is merely an artifact of the fact that so little content on the web is OA today, whereas the PNAS website is a high-profile locus for direct visits and local search. 
But this local advantage would vanish if all articles were OA (somewhere on the Web), as then it would make as little sense to seek an article by directly visiting the PNAS website as by directly visiting any particular IR: Search and harvesting is a global matter, over distributed content; only deposit is a local matter. And the optimal locus for OA content, the one that scales to all of research space, is each author's own OAI-compliant IR.
Stevan Harnad From harnad at ECS.SOTON.AC.UK Fri Nov 23 11:06:27 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Fri, 23 Nov 2007 16:06:27 +0000 Subject: UK Research Evaluation Framework: Validate Metrics Against Panel Rankings In-Reply-To: <803355070-1195819062-cardhu_decombobulator_blackberry.rim.net-1336798552-@bxe012.bisx.produk.on.blackberry> Message-ID: I think in their spirited responses to my posting, my computer-science colleagues have been addressing a number of separate questions as if they were all one question: (1) Does the conversion from panel-based RAE to metric RAE cost more money, time and effort than the current RAE system? Answer: No. Definitely much less. (2) Could RAE's needs be served by simply harvesting the content that is already on the web (whether in IRs or on any arbitrary website)? Answer: Definitely not. Most of the target content is not on the web at all yet. (3) Is the purpose of the RAE to facilitate web search today? Answer: No.
The purpose is to assess and rank UK research output. (4) Is the purpose of IRs to facilitate web search today? Answer: No. Their purpose is to generate web content and to display and audit institutional research output. (5) Is the purpose of metrics to facilitate web search today? Answer: No. Their purpose is to make the RAE less costly and cumbersome, and perhaps fairer and more accurate. (6) Is the problem of unique person identification on the web an RAE/IR/metric issue? Answer: No, but IRs accommodating RAE metric requirements could help solve it. Now, on to specific answers. First, the excerpt that triggered the tumult: Stevan Harnad (Southampton): [excerpt from http://openaccess.eprints.org/index.php?/archives/333-guid.html] "...[I]t is important -- indeed imperative -- that all University Institutional Repositories (IRs) now get serious about systematically archiving all their research output assets (especially publications) so they can be counted and assessed (as well as accessed!), along with their IR metrics (downloads, links, growth/decay rates, harvested citation counts, etc.)." > Nigel Smart (Bristol): Yeah, let's reinvent the wheel and spend > loads of taxpayers' money building a system which already exists. > Has anyone heard of Google Scholar? Perhaps it would be easier > for UUK to license the software off Google? Is the system that already exists the one that is going to do the UK's Research Assessment Exercise in place of the present one? Is Google Scholar that system? Are all the publications of all UK researchers -- and all the publications that cite them -- in Google Scholar today? No? Then maybe it would be a good idea if the assessment requirements of RAE metrics required universities to require their researchers to deposit all their publications in their IRs. That might even encourage everyone else to do it too. Then Google Scholar would have all it needs to do the rest -- for citations.
(The other metrics will require more input data, and usage stats.) > Yorick Wilks (Sheffield): Correct point, and please note the > connection to my point on person-ambiguity: readers should ask > themselves how many pages of Google Scholar they have to go > down to find SOMEONE ELSE'S papers! Computer scientists are more conscientious than most in self-archiving their publications *somewhere* on the web, *somehow*. But not all (perhaps not even most) computer scientists are self-archiving yet, and most other disciplines are even further behind. So Google Scholar (and the web) are the wrong place to go today if you want to find most papers. The idea is to change that. (And person-ambiguity is a problem, but certainly not the main problem: absence of the target content is the main problem.) Institutions have the advantage that they can mandate a systematic self-archiving policy for all their researchers. And the RAE -- especially the metric RAE -- has the advantage that it gives institutions a strong motivation to do it. And OAI-compliant, RAE-metrics-compliant IRs will help provide the disambiguating tags too. Then you *will* be able to find everyone's papers via Google Scholar (etc.). > Hamish Cunningham (Sheffield): I missed the preceding posts so > sorry if I'm out of context, but the person ambiguity thing > that Yorick refers to is key, and Google Scholar doesn't solve > it. In experiments we've run here on various ways to harvest > accurate bibliographies by far the best performance is from > institutional pages, and increasing the quality and quantity > of these would be a great help. Note the huge amount of work > that's been done collating RAE lists - if these were all in our > databases already... No wheels need inventing, as the software > for institutional repositories is available already. Missing the preceding posts seems to have been an advantage. The foregoing comment was spot-on!
> Ralph Martin (Cardiff): It wouldn't be hard for us personally > to identify papers in Google Scholar, and claim "this is me". Each > person could then send in links pointing to GS for their 4 most > cited papers (or whatever other number was desired), together > with GS's citation counts on a certain date for said papers. > A fairly trivial piece of software could then analyse these > numbers however thought fit (together with spot checks on the > claims if they don't trust us). Yes, that would all work splendidly -- if all UK research output were already on the web, hence harvested by Google Scholar. Alas, it is not. And that's the problem. > RM: Yes, more complex metrics might > be more accurate, but they would cost an awful lot more. Cost more than what? The current profligate, panel-based RAE? > RM: Yes, > adding more factors might improve the results, but pattern > classifiers can also degrade if too many indicators are used. The idea is not to overconstrain the metric equation a priori but to *validate* it (rather than simply cherry-pick a few metrics a priori). So a rich, diverse battery should first be tested, discipline by discipline, by regressing it against the parallel panel rankings for each discipline, to initialize the beta weights on each metric. Some may well turn out to be zero or near zero in some disciplines, so they may elect to drop them. But the cure for overconstraining data is not to make arbitrary a priori choices when it is unnecessary. Once initialized, the weights can be calibrated and optimized. > RM: Yes, odd people might have anomalous citation counts - but we > are not using these here to judge individuals, rather averaging > over a whole department, or university even. And that is what the multiple regression does.
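The validation step described here (regress a battery of candidate metrics against the panel rankings to initialize the beta weights) is an ordinary least-squares fit. A minimal sketch follows; every number is invented for illustration, and a real exercise would of course run per discipline on RAE-scale data rather than four toy departments:

```python
import numpy as np

# Rows: departments; columns: candidate metrics (e.g. citation
# counts, downloads, link counts) -- all values invented.
metrics = np.array([
    [120.0, 3000.0, 15.0],
    [ 80.0, 1500.0,  9.0],
    [200.0, 5000.0, 22.0],
    [ 50.0,  900.0,  5.0],
])
panel_rank = np.array([2.0, 3.0, 1.0, 4.0])  # panel's rankings, invented

# Add an intercept column and solve for the beta weights by least
# squares; a weight near zero suggests that metric adds little in
# this discipline and is a candidate for dropping.
X = np.hstack([metrics, np.ones((len(metrics), 1))])
betas, *_ = np.linalg.lstsq(X, panel_rank, rcond=None)
print(np.round(betas, 3))
```

Once initialized this way, the weights can be recalibrated in later assessment cycles, which is exactly the calibrate-and-optimize loop the message proposes.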
> RM: Like Nigel, I am > really disappointed that many people want to make the process > much more complex and waste so much public money on this - far > more than will ever be saved by redirecting marginal money more > accurately. Who is proposing to make the RAE more complicated, wasteful and expensive than it is? The metric proposal is in order to achieve the exact opposite! > Nigel Smart (Bristol): A rather cool (read addictive) thing we > did was download "Publish or Perish", which uses Google Scholar > I think. Play the following game... Rank your colleagues in > order of what you think they should be in terms of brilliance. > Then determine their H-index (or whatever) from the tool. > Compare the two rankings. To my amazement the two are amazingly > close. On the other hand we are quite well served by Google > in our dept. Try typing the keyword "nigel" into Google. You > get me as the third most important nigel in the whole world. > How sad is that? The H-index is an a priori weighted formula. It may or may not be optimal. Intuitive personal ranking of a few colleagues is not the way to test this metric, or any other: Multiple regression of a full complement of candidate metrics against the peer panel rankings over a pandisciplinary database of the scale of the UK RAE is. > Geraint A. Wiggins (Goldsmiths): For anyone who cares, I've > written a little ditty in php that queries Google Scholar for > the first 100 hits it finds and then tots up the citations. If > you have a cruel and unusual name like mine, it's accurate; if > your name is "John Smith", then it'll over-count for obvious > reasons. Fill in your first name (not initials) and surnames > in the obvious places in the URL: > http://www.doc.gold.ac.uk/~mas02gw/cite.php?First=****&Sur=**** It > would be trivial to make this focus on CS if anyone wants it - > let me know. It currently doesn't do that because I publish in > music and psychology too.
Interestingly, sometimes Google > produces different numbers on successive queries - I've not had > time to try to understand why. But the second shot (ie if you > refresh after the first time you query) seems to be consistent. Focusing it on CS or any other discipline would be in vain if the target content is not yet in Google Scholar. > Emanuele Trucco (Dundee): Optimising existing processes instead > of just throwing money at starting from scratch is something > desperately needed - and not only in this case, but as a mental > framework to teach in schools. Any fool can start new things, > but the really needed part is taking them to successful completion > (forgive me for not remembering the paternity of this quote). Who is starting from scratch or throwing money? The RAE is already there, and has been incomparably more expensive and time-consuming in the form of panel submissions and review. The metric RAE saves most of that time and money. Moreover, IRs cost a pittance per university, and depositing costs only a few keystrokes. So what is the financial fuss about? > Awais Rashid (Lancaster): There is also the Publish or Perish > tool that provides interesting data and statistics in the same > vein: But it suffers from the same underlying problem: The absence of the target content. > Dave Cliff (Bristol): One problem with google scholar (and hence > also PublishOrPerish) is that it extracts author names from > pdf/ps files but is not clever enough to understand the ligature > symbols that latex substitutes in for certain pairs of letters. > So there are a whole bunch of citations to my papers that appear > to be due to some bloke called "Dave Cli" because of the ff > ligature. Luckily PublishOrPerish lets you do conjunctive > searches, but you have to know about this problem beforehand > to be able to know what other names to add as OR terms in the > search. Other than that, PublishOrPerish is a very cool interface > to google scholar (www.harzing.com).
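For reference, the H-index that Publish or Perish computes from Google Scholar data is easy to state and compute: an author has index h if h of their papers have at least h citations each. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    # Walk down the sorted list: the i-th paper (1-based) supports
    # h = i only if it has at least i citations.
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Invented citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```

The "Dave Cli" ligature problem described above corrupts the *input* to this computation, which is why the thread keeps returning to data quality (coverage and name disambiguation) rather than to the formula itself.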
No matter how clever a harvester or search engine, it cannot harvest or search on what is not there. Stevan Harnad From felix at UGR.ES Sat Nov 24 13:04:22 2007 From: felix at UGR.ES (=?iso-8859-1?Q?F=E9lix_de_Moya_Aneg=F3n?=) Date: Sat, 24 Nov 2007 19:04:22 +0100 Subject: SJR Portal Message-ID: Dear colleague, We are very glad to announce the launch of the SJR (SCImago Journal & Country Rank) portal. The SJR portal is based on Scopus data and includes the SCImago Journal Rank indicator. The portal provides rankings by subject area or subject category, showing the visibility of journals and countries through scientific indicators like SJR, H-index, total docs., total refs., total cites, citable docs., cites per doc., self-citation, etc., since 1996. These indicators have been calculated from the information exported from the Scopus database in March 2007 and will be updated periodically. For this reason, some of the figures shown in the SJR portal and in Scopus may not match.
At this moment the coverage period of the country and journal indicators is 1996 to 2006. The platform is freely available at: http://www.scimagojr.com Any comments or suggestions are welcome. Best wishes ******************************* Félix de Moya Anegón http://www.ugr.es/~felix/ Grupo SCIMAGO http://www.scimago.es http://www.atlasofscience.net Universidad de Granada ******************************* From harnad at ECS.SOTON.AC.UK Sat Nov 24 18:49:01 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Sat, 24 Nov 2007 23:49:01 +0000 Subject: Victory for Labour, Research Metrics and Open Access in Australia Message-ID: ---------- Forwarded message ---------- Date: Sun, 25 Nov 2007 10:09:27 +1100 From: Arthur Sale To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG Subject: Australia votes Yesterday, Australia held a Federal Election. The Australian Labor Party (the previous opposition) have clearly won, with Kevin Rudd becoming the Prime Minister-elect. What has this to do with the [American Scientist Open Access Forum]? Well, the policy of the ALP is that the plans for the Research Quality Framework (the RQF - our research assessment exercise) will be immediately scrapped, and it will be replaced by a cheaper and metrics-based assessment, presumably a year or two later. At first sight this is a setback for open access in Australia, because institutional repositories are not essential for a metrics-based research assessment. They just help improve the metrics. However, the situation may be turned to advantage, and there are several major pluses. (1) Previous RQF grants should have ensured that every university in Australia now has a repository. Just mostly empty, or mostly dark, or both. (2) The advisers in the Department of Education, Science & Technology (DEST) haven't changed. The Accessibility Framework (ie open access) is still in place as a goal.
(3) A new metric-based evaluation could and should be steered to be a multi-metric based one. The ALP has already stated that it will be discipline-dependent. (4) If the Rudd government is serious about efficiency in higher education, they could simply instruct DEST to require universities to put all their currently reported publications in a repository (ID/OA policy), from which the annual reports would be automatically derived. In addition all the desired publication metrics would also be derived, at any time. The Accessibility Framework would be achieved. It should now be crystal clear to every university in Australia that citations and other measures will be key in the future. It should be equally clear that they should do everything possible to increase their performance on these measures. Any university that fails to immediately implement an ID/OA mandate (Immediate Deposit, Open Access when possible) in its institutional repository is simply deciding to opt out of research competition, or mistakenly thinks that it knows better. Although I suppose there is still the weak excuse that it is all too hard to understand or think about. Here is the edited text of a press release by the shadow minister before the election. The boldface over some paragraphs is mine. Arthur Sale Professor of Computer Science University of Tasmania [BEGINS] Senator Kim Carr Labor Senator for Victoria Shadow Minister for Industry, Innovation, Science and Research Thursday, 15 November 2007 (58/07) Building a strong future for Australian research Federal Labor's key research initiatives, announced during yesterday's Campaign Launch, highlight our commitment to a research revolution. [snip] A Rudd Labor Government will be committed to rebuilding the national innovation system and, over time, doubling the amount invested in R&D in Australia. * Labor will bring responsibility for innovation, industry, science and research into a single Commonwealth Department. 
* Labor will develop a set of national innovation priorities to sit over the national research priorities. Together, these will provide a framework for a national innovation system, ensuring that the objectives of research programs and other innovation initiatives are complementary. * Labor will abolish the Howard Government's flawed Research Quality Framework, and replace it with a new, streamlined, transparent, internationally verifiable system of research quality assessment, based on quality measures appropriate to each discipline. These measures will be developed in close consultation with the research community. Labor will also address the inadequacies in current and proposed models of research citation. Labor's model will recognise the contribution of Australian researchers to Australia and the world. [snip] * Labor recognises the importance of basic research in the creation of new knowledge, and also the value and breadth of Australian research effort across the humanities, creative arts and social sciences as well as scientific and technological disciplines. The Howard Government has allocated $87 million for the implementation of the RQF. Labor will seek to redirect the residual funds to encourage genuine industry collaboration in research. [snip] From agrimwade at HISTCITE.COM Tue Nov 27 11:57:10 2007 From: agrimwade at HISTCITE.COM (Alexander Grimwade) Date: Tue, 27 Nov 2007 11:57:10 -0500 Subject: HistCite software is now available Message-ID: HistCite, a new software package developed by Eugene Garfield, is now available. You can download a fully functional, 30-day free trial version of the software from http://www.histcite.com. HistCite is designed to help research professionals make better use of their literature searches. HistCite lets you organize and edit the results of searches and create analyses to give unique views of the structure, history, and relationships within a data collection. 
Most importantly, HistCite can create historiographs showing the key papers and timeline of a research field. Please feel free to send any comments or questions to support at histcite.com. --------------------------- Alexander M Grimwade Ph. D. HISTCITE SOFTWARE LLC P. O. Box 2423 Bala-Cynwyd PA 19004 USA agrimwade at histcite.com (484) 270 8471 www.histcite.com From andrea.scharnhorst at VKS.KNAW.NL Wed Nov 28 08:17:06 2007 From: andrea.scharnhorst at VKS.KNAW.NL (Andrea Scharnhorst) Date: Wed, 28 Nov 2007 14:17:06 +0100 Subject: Workshop on Evolution and Physics, Germany, January 2008 Message-ID: Announcement of an interdisciplinary workshop on "Evolution and Physics" and call for participation (DEADLINE SOON!) Invitation to participate in the 401st Wilhelm and Else Heraeus Seminar Evolution and Physics - Concepts, Models and Applications Place: Physikzentrum Bad Honnef, 21.01.2008 - 23.01.2008 Evolutionary thinking has been a constitutive part of physics from the very beginning. This workshop brings together different traditions of evolutionary thinking in physics and beyond. It presents areas of application, both already explored and not yet explored, for this view of evolution in complex natural and social systems, including information systems. Speakers: Peter Allen, Marcel Ausloos, Rob Axtell, David Blaschke, Katy Börner, Stefan Bornholdt, Christian Van den Broeck, S. Cebrat, Werner Ebeling, Rainer Feistel, Piotr Fronczak, Charles van den Heuvel, Janusz Hołyst, Renaud Lambiotte, Loet Leydesdorff, Thorsten Pöschel, Antonio Politi, Araceli Proto, Paolo Saviotti, Andrea Scharnhorst, Johannes J. Schneider, Lutz Schimansky-Geier, Peter Schuster, Frank Schweitzer, Gerald Silverberg, D. Stauffer, D. R??, Mike Thelwall, Gérard Weisbuch If you want to participate in the workshop, please send a title and short abstract for a poster (approx. 200 words, including your contact details) to Andrea Scharnhorst [andrea.scharnhorst AT vks.knaw.nl]. 
The deadline for applications is December 4, 2007. Please note that we have only a limited number of places. The fee for the workshop (including accommodation and meals) is 150 EUR. For more information please consult: http://www.virtualknowledgestudio.nl/staff/andrea-scharnhorst/heraeus.php or contact Dr. Scharnhorst. Sincerely yours Andrea Dr. Andrea Scharnhorst The Virtual Knowledge Studio for the Humanities and Social Sciences Royal Netherlands Academy of Arts and Sciences (KNAW) address: Cruquiusweg 31, 1019 AT Amsterdam, The Netherlands office: +31 20 850 0276 fax: +31 20 850 0271 e: andrea.scharnhorst at vks.knaw.nl From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 15:19:28 2007 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Wed, 28 Nov 2007 15:19:28 -0500 Subject: Taylor B, Wylie E, Dempster M, et al., Systematically retrieving research: A case study evaluating seven databases, RESEARCH ON SOCIAL WORK PRACTICE 17 (6): 697-706 NOV 2007 Message-ID: Email Addresses: bj.taylor at ulster.ac.uk Title: Systematically retrieving research: A case study evaluating seven databases (Article, English) Author(s): Taylor, B; Wylie, E; Dempster, M; Donnelly, M Source: RESEARCH ON SOCIAL WORK PRACTICE 17 (6). NOV 2007. p.697-706 SAGE PUBLICATIONS INC, THOUSAND OAKS Document Type: Article Language: English Cited References: 40 Times Cited: 0 ABSTRACT: Developing the scientific underpinnings of social welfare requires effective and efficient methods of retrieving relevant items from the increasing volume of research. Method: We compared seven databases by running the nearest equivalent search on each. The search topic was chosen for relevance to social work practice with older people. 
Results: Highest sensitivity was achieved by Medline (52%), Social Sciences Citation Index (46%) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (30%). Highest precision was achieved by AgeInfo (76%), PsycInfo (51%) and Social Services Abstracts (41%). Each database retrieved unique relevant articles. Conclusions: Comprehensive searching requires the development of information management skills. The social work profession would benefit from having a dedicated international database with the capability and facilities of major databases such as Medline, CINAHL, and PsycInfo. Addresses: Univ Ulster, Dept Social Work, Newtownabbey BT37 0QB, North Ireland; Queens Univ Belfast, Belfast BT7 1NN, Antrim, North Ireland; Causeway Hlth & Social Serv Trust, Ballymoney, North Ireland Email Addresses: bj.taylor at ulster.ac.uk Publisher: SAGE PUBLICATIONS INC., 2455 TELLER RD, THOUSAND OAKS, CA 91320 USA Subject Category: Social Work ISSN: 1049-7315 Cited References: ADAMS CE, 1994, PSYCHOL MED, V24, P741 AVENELL A, 2001, AM J CLIN NUTR, V73, P505 BOOTH A, 2000, B MED LIBR ASSOC, V88, P239 BRETTLE AJ, 2001, B MED LIBR ASSOC, V89, P353 CHALMERS I, 1994, SYSTEMATIC REV CLARKE M, 1998, JAMA-J AM MED ASSOC, V280, P280 COOK DJ, 1997, ANN INTERN MED, V126, P376 COUNSELL C, 1997, ANN INTERN MED, V127, P380 DEMPSTER M, 2003, A Z SOCIAL RES DICKERSIN K, 1994, BRIT MED J, V309, P1286 EGGER M, 2003, HEALTH TECHNOL ASSES, V7, P1 FISHER M, 2006, KNOWLEDGE WORKS SOCI GAMBRILL E, 2006, RES SOCIAL WORK PRAC, V16, P338 GLASS GV, 1976, EDUC RES, V5, P3 GLASS GV, 1981, META ANAL SOCIAL RES GREENHALGH T, 2005, BRIT MED J, V331, P1064 HAY PJ, 1996, HLTH LIB REV, V13, P91 HAYNES RB, 2005, BRIT MED J, V330, P1179 HIGGINS K, 1998, MAKING RES WORK PROM HOPEWELL S, 2002, STAT MED, V21, P1625 HUNT M, 1997, STORY META ANAL LIGHT RJ, 1971, HARVARD EDUC REV, V41, P429 MACDONALD G, 2001, EFFECTIVE INTERVENTI MATTHEWS EJ, 1999, HLTH LIB REV, V16, P112 MEADE MO, 1997, ANN INTERN MED, 
V127, P531 PETROSINO A, 2000, EVALUATION RES ED, V14, P206 POLLIO DE, 2006, RES SOCIAL WORK PRAC, V16, P224 POPAY J, 2003, 3 SCIE REID WJ, 1997, SOC SERV REV, V71, P200 SANDELOWSKI M, 1997, RES NURS HEALTH, V20, P365 SCHUERMAN J, 2002, RES SOCIAL WORK PRAC, V12, P309 SMITH ML, 1980, EVALUATION ED INT RE, V4, P22 SNOWBALL R, 1997, HLTH LIB REV, V14, P167 STEVENSON HW, 2000, J PSYCHOL CHINESE SO, V1, P1 STEVINSON C, 2004, COMPLEMENT THER MED, V12, P228 TAYLOR BJ, 2003, A Z SOCIAL RES TAYLOR BJ, 2003, BRIT J SOC WORK, V33, P423 TAYLOR BJ, 2006, EVALUATION AGEINFO D TAYLOR BJ, 2007, BRIT J SOC WORK, V37, P335 WHITE VJ, 2001, J INFORM SCI, V27, P357 From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 15:22:09 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 28 Nov 2007 15:22:09 -0500 Subject: White HD, Griffith BC "Author Cocitation- a literature measure of intellectual structure" JASIST 32(3): 163-171, 1981 Message-ID: E-Mail: whitehd at drexel.edu THE AUTHOR HAS PERMITTED ACCESS TO FULL TEXT OF THIS ARTICLE AT: http://garfield.library.upenn.edu/hwhite/whitejasist1981.pdf TITLE :AUTHOR COCITATION - A LITERATURE MEASURE OF INTELLECTUAL STRUCTURE Author(s): WHITE HD, GRIFFITH BC Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE Volume: 32 Issue: 3 Pages: 163-171 Published: 1981 Times Cited: 151 References: 10 Document Type: Article Language: English Addresses: WHITE, HD (reprint author), DREXEL UNIV, SCH LIB & INFORMAT SCI, PHILADELPHIA, PA 19104 USA Publisher: JOHN WILEY & SONS INC, 605 THIRD AVE, NEW YORK, NY 10158-0012 IDS Number: LM437 ISSN: 0002-8231 From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 15:25:40 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 28 Nov 2007 15:25:40 -0500 Subject: White HD, McCain KW, "Visualizing a discipline: An author co-citation analysis of information science, 1972-1995" JASIST 49(4):327-355, April 1998 Message-ID: E-Mail: whitehd at 
drexel.edu kate.mccain at cis.drexel.edu THE AUTHOR HAS PERMITTED ACCESS TO FULL TEXT OF THIS ARTICLE AT: http://garfield.library.upenn.edu/hwhite/whitejasist1998.pdf Title: Visualizing a discipline: An author co-citation analysis of information science, 1972-1995 Author(s): White HD, McCain KW Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 49(4): 327-355, April 1998 Times Cited: 135 References: 59 Abstract: This study presents an extensive domain analysis of a discipline, information science, in terms of its authors. Names of those most frequently cited in 12 key journals from 1972 through 1995 were retrieved from Social Scisearch via DIALOG. The top 120 were submitted to author co-citation analyses, yielding automatic classifications relevant to histories of the field. Tables and graphics reveal: (1) the disciplinary and institutional affiliations of contributors to information science; (2) the specialty structure of the discipline over 24 years; (3) authors' memberships in 1 or more specialties; (4) inertia and change in authors' positions on two-dimensional subject maps over three 8-year subperiods, 1972-1979, 1980-1987, and 1988-1995; (5) the 2 major subdisciplines of information science and their evolving memberships; (6) "canonical" authors who are in the top 100 in all three subperiods; (7) changes in authors' eminence and influence over the subperiods, as shown by mean co-citation counts; (8) authors with marked changes in their mapped positions over the subperiods; (9) the axes on which authors are mapped, with interpretations; (10) evidence of a paradigm shift in information science in the 1980s; and (11) evidence on the general nature and state of integration of information science. Statistical routines include ALSCAL, INDSCAL, factor analysis, and cluster analysis with SPSS; maps and other graphics were made with DeltaGraph. Theory and methodology are sufficiently detailed to be usable by other researchers. 
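The pair-counting step at the core of author co-citation analysis is easy to sketch. In the minimal Python illustration below, the citing papers and author names are invented for the example (the actual study retrieved counts from Social Scisearch via DIALOG); each pair's total across citing papers is one cell of the raw co-citation matrix that is then submitted to scaling, factor, and cluster analysis:

```python
from itertools import combinations
from collections import Counter

# Hypothetical citing papers, each represented by the set of earlier
# authors it cites (names invented for the example).
citing_papers = [
    {"Salton", "SparckJones", "Robertson"},
    {"Salton", "SparckJones", "Garfield"},
    {"Garfield", "Small", "Price"},
    {"Garfield", "Small", "Price", "Salton"},
]

# Two authors are co-cited each time the same paper cites them both.
cocitation = Counter()
for cited_authors in citing_papers:
    for a, b in combinations(sorted(cited_authors), 2):
        cocitation[(a, b)] += 1

# The most frequently co-cited pairs signal closely related oeuvres.
for pair, count in sorted(cocitation.items(), key=lambda kv: -kv[1]):
    print(pair, count)
```

Real analyses replace the toy lists with pair counts harvested from a citation index and feed the resulting symmetric matrix to MDS or clustering, as the abstract describes.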
Document Type: Article Language: English Addresses: White, HD (reprint author), Drexel Univ, Coll Informat Sci & Technol, 3141 Chestnut St, Philadelphia, PA 19104 USA Drexel Univ, Coll Informat Sci & Technol, Philadelphia, PA 19104 USA Publisher: JOHN WILEY & SONS INC, 605 THIRD AVE, NEW YORK, NY 10158-0012 USA IDS Number: ZB384 ISSN: 0002-8231 From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 15:29:37 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 28 Nov 2007 15:29:37 -0500 Subject: White HD, Combining bibliometrics, information retrieval, and relevance theory, Part 1: First examples of a synthesis, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 (4): 536-559 FEB 15 2007 Message-ID: E-mail Address: whitehd at drexel.edu Author(s): White, HD (White, Howard D.) Title: Combining bibliometrics, information retrieval, and relevance theory, Part 1: First examples of a synthesis Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, 58 (4): 536-559 FEB 15 2007 Language: English Document Type: Article Cited Reference Count: 77 Times Cited: 1 Abstract: In Sperber and Wilson's relevance theory (RT), the ratio Cognitive Effects/Processing Effort defines the relevance of a communication. The tf*idf formula from information retrieval is used to operationalize this ratio for any item co-occurring with a user-supplied seed term in bibliometric distributions. The tf weight of the item predicts its effect on the user in the context of the seed term, and its idf weight predicts the user's processing effort in relating the item to the seed term. The idf measure, also known as statistical specificity, is shown to have unsuspected applications in quantifying interrelated concepts such as topical and nontopical relevance, levels of user expertise, and levels of authority. A new kind of visualization, the pennant diagram, illustrates these claims. 
The bibliometric distributions visualized are the works cocited with a seed work (Moby Dick), the authors cocited with a seed author (White HD, for maximum interpretability), and the books and articles cocited with a seed article (S.P. Harter's "Psychological Relevance and Information Science," which introduced RT to information scientists in 1992). Pennant diagrams use bibliometric data and information retrieval techniques on the system side to mimic a relevance-theoretic model of cognition on the user side. Relevance theory may thus influence the design of new visual information retrieval interfaces. Generally, when information retrieval and bibliometrics are interpreted in light of RT, the implications are rich: A single sociocognitive theory may serve to integrate research on literature-based systems with research on their users, areas now largely separate. Addresses: Drexel Univ, Coll Informat Sci & Technol, Philadelphia, PA 19104 USA Reprint Address: White, HD, Drexel Univ, Coll Informat Sci & Technol, Philadelphia, PA 19104 USA. E-mail Address: whitehd at drexel.edu Publisher: JOHN WILEY & SONS INC, 111 RIVER ST, HOBOKEN, NJ 07030 US ISSN: 1532-2882 Subject Category: Computer Science, Information Systems; Information Science & Library Science Cited References: *RED ROCK SOFTW, 2005, DELT VERS 5 6 COMP S. BEAN CA, 2001, RELATIONSHIPS ORG KN, P115. BELEW RK, 2000, FINDING OUT COGNITIV. BLAIR DC, 1992, COMPUT J, V35, P200. BLAKEMORE D, 1992, UNDERSTANDING UTTERA. BORLUND P, 2003, J AM SOC INF SCI TEC, V54, P913. BRADFORD SC, 1950, DOCUMENTATION. BROOKES BC, 1973, LIBR TRENDS, V22, P18. BROOKES BC, 1980, CANADIAN J INFORMATI, V5, P199. BROOKES BC, 1980, J AM SOC INFORM SCI, V31, P248. BROWN G, 1983, DISCOURSE ANAL. BUCKLAND MK, 1969, J DOC, V25, P52. BUDD JM, 2004, LIBR TRENDS, V52, P447. CASE DO, 2005, THEORIES INFORM BEHA, P289. CHEN Z, 2005, P 38 HAW INT C SYST. COSIJN E, 2000, INFORM PROCESSING MA, V31, P191. COTTRILL CA, 1989, KNOWLEDGE, V11, P181. 
CRANE D, 1972, INVISIBLE COLL DIFFU. DEBEAUGRANDE R, 1981, INTRO TEXT LINGUISTI. ELLIS D, 1998, INFORM SERVICES USE, V18, P225. FURNER J, 2002, J AM SOC INF SCI TEC, V53, P747. GARFIELD E, 1979, CITATION INDEXING IT. GARFIELD E, 2003, J AM SOC INF SCI TEC, V54, P400. GOATLEY A, 1997, LANGUAGE METAPHORS. GREEN R, 1995, J AM SOC INFORM SCI, V46, P646. GREISDORF H, 2000, INFORMING SCI, V3, P67. GROSSMAN DA, 1998, INFORM RETRIEVAL ALG. HALLIDAY MAK, 1976, COHESION ENGLISH. HARDY AP, 1982, INFORM PROCESS MANAG, V18, P289. HARTER SP, 1992, J AM SOC INFORM SCI, V43, P602. HJORLAND B, 2000, J AM SOC INFORM SCI, V51, P209. JONES KS, 1997, READINGS INFORM RETR, P305. JURAFSKY D, 2000, SPEECH LANGUAGE PROC. KARKI R, 1996, J INFORM SCI, V22, P323. KOESTLER A, 1964, ACT CREATION STUDY C. MANN T, 1993, LIB RES MODELS GUIDE. MANNING CD, 2000, FDN STAT NATURAL LAN. MIZZARO S, 1997, J AM SOC INFORM SCI, V48, P810. MORRIS SA, 2003, J AM SOC INF SCI TEC, V54, P413. NELSON MJ, 1985, J AM SOC INFORM SCI, V36, P283. POOLE HL, 1985, THEORIES MIDDLE RANG. POSNER RA, 2001, PUBLIC INTELLECTUALS. ROBERTSON S, 2004, J DOC, V60, P503. RUTHVEN I, 1996, P 9 FLOR ART INT RES, P380. SARACEVIC T, 1975, J AM SOC INFORM SCI, V26, P321. SARACEVIC T, 1996, INFORMATION SCI INTE, P201. SARACEVIC T, 1997, SIGIR FORUM, V31, P16. SCHAMBER L, 1994, ANNU REV INFORM SCI, V29, P3. SMALL HG, 1978, SOC STUD SCI, V8, P327. SPARCKJONES K, 1972, J DOC, V28, P11. SPARCKJONES K, 2004, J DOC, V60, P521. SPERBER D, 1986, RELEVANCE COMMUNICAT. SPERBER D, 1995, RELEVANCE COMMUNICAT. SPERBER D, 1996, BEHAV BRAIN SCI, V19, P530. SWANSON DR, 1977, LIBRARY Q, V47, P128. WALTON DN, 1989, INFORMAL LOGIC HDB C. WHITE HD, 1981, J AM SOC INFORM SCI, V32, P163. WHITE HD, 1989, ANNU REV INFORM SCI, V24, P119. WHITE HD, 1994, HDB RES SYNTHESIS, P41. WHITE HD, 1997, ANNU REV INFORM SCI, V32, P99. WHITE HD, 2000, WEB KNOWLEDGE FESTSC, P475. WHITE HD, 2001, J AM SOC INF SCI TEC, V52, P87. WHITE HD, 2001, SCIENTOMETRICS, V51, P607. 
WHITE HD, 2002, CHI 2002 DISC ARCH W. WHITE HD, 2003, J AM SOC INF SCI TEC, V54, P423. WHITE HD, 2004, P NATL ACAD SCI U S1, V101, P5297. WHITE HD, 2004, SCIENTOMETRICS, V60, P93. WHITE HD, 2007, J AM SOC INFORM SCI, V56, P583. WILDEMUTH BM, 2004, J AM SOC INF SCI TEC, V55, P246. WILLIAMS ST, 1963, 8 AM AUTHORS REV RES, P207. WILSON D, 1986, ENCONTRO LINGUISTAS, P19. WILSON D, 1994, LANGUAGE UNDERSTANDI, P35. WILSON D, 2002, LINGUISTICS, V14, P249. WILSON P, 1968, 2 KINDS POWER ESSAY. YANG KD, 2005, ANNU REV INFORM SCI, V39, P33. YUS F, 1998, J PRAGMATICS, V30, P305. YUS F, 2006, RELEVANCE THEORY ONL. Publisher: JOHN WILEY & SONS INC, 111 RIVER ST, HOBOKEN, NJ 07030 US ISSN: 1532-2882 Subject Category: Computer Science, Information Systems; Information Science & Library Science From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 15:32:48 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 28 Nov 2007 15:32:48 -0500 Subject: White HD, Combining bibliometrics, information retrieval, and relevance theory, Part 2: Some implications for information science, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 (4): 583-605 FEB 15 2007 Message-ID: E-mail Addresses: whitehd at drexel.edu Title: Combining bibliometrics, information retrieval, and relevance theory, Part 2: Some implications for information science Author(s): White HD (White, Howard D.) Source: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 58 (4): 583-605 FEB 15 2007 Document Type: Review Language: English Cited References: 110 Times Cited: 0 Abstract: When bibliometric data are converted to term frequency (tf) and inverse document frequency (idf) values, plotted as pennant diagrams, and interpreted according to Sperber and Wilson's relevance theory (RT), the results evoke major variables of information science (IS). 
These include topicality, in the sense of intercohesion and intercoherence among texts; cognitive effects of texts in response to people's questions; people's levels of expertise as a precondition for cognitive effects; processing effort as textual or other messages are received; specificity of terms as it affects processing effort; relevance, defined in RT as the effects/effort ratio; and authority of texts and their authors. While such concerns figure automatically in dialogues between people, they become problematic when people create or use or judge literature-based information systems. The difficulty of achieving worthwhile cognitive effects and acceptable processing effort in human-system dialogues explains why relevance is the central concern of IS. Moreover, since relevant communication with both systems and unfamiliar people is uncertain, speakers tend to seek cognitive effects that cost them the least effort. Yet hearers need greater effort, often greater specificity, from speakers if their responses are to be highly relevant in their turn. This theme of mismatch manifests itself in vague reference questions, underdeveloped online searches, uncreative judging in retrieval evaluation trials, and perfunctory indexing. Another effect of least effort is a bias toward topical relevance over other kinds. RT can explain these outcomes as well as more adaptive ones. Pennant diagrams, applied here to a literature search and a Bradford-style journal analysis, can model them. Given RT and the right context, bibliometrics may predict psychometrics. 
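The effects/effort ratio described in these two abstracts can be made concrete with a small sketch. All numbers below are invented for illustration: following the Part 1 abstract, the tf weight of an item co-occurring with a seed term stands in for its cognitive effects, and the idf weight (statistical specificity) for the user's processing effort, giving each item a coordinate pair as on the axes of a pennant diagram:

```python
import math

# Invented collection statistics for items co-occurring with a seed term.
N = 10_000  # total documents in the collection (assumed)
items = {
    # item: (co-occurrences with the seed term, overall document frequency)
    "Moby Dick": (120, 300),
    "relevance theory": (45, 150),
    "whaling": (200, 5000),
}

# tf weight ~ cognitive effects in the context of the seed term;
# idf weight ~ processing effort needed to relate item and seed.
coordinates = {}
for item, (tf, df) in items.items():
    effects = 1 + math.log10(tf)   # log-scaled term frequency
    effort = math.log10(N / df)    # inverse document frequency (specificity)
    coordinates[item] = (effects, effort)

for item, (x, y) in coordinates.items():
    print(f"{item}: tf weight {x:.2f}, idf weight {y:.2f}")
```

A frequent but unspecific item ("whaling" here) scores high on effects and low on specificity, while a rarer, more specific item plots in the opposite corner; this is the separation the pennant diagram exploits.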
Addresses: White HD (reprint author), Drexel Univ, Coll Informat Sci & Technol, Philadelphia, PA 19104 USA Drexel Univ, Coll Informat Sci & Technol, Philadelphia, PA 19104 USA E-mail Addresses: whitehd at drexel.edu Publisher: JOHN WILEY & SONS INC, 111 RIVER ST, HOBOKEN, NJ 07030 USA Subject Category: Computer Science, Information Systems; Information Science & Library Science ISSN: 1532-2882 Cited References: BARRY CL, 1994, J AM SOC INFORM SCI, V45, P149. BARRY CL, 1998, INFORM PROCESS MANAG, V34, P219. BATEMAN J, 1998, P 61 ASIS ANN M MEDF, P23. BATES MJ, 1993, LIBR QUART, V63, P1. BATES MJ, 1998, J AM SOC INFORM SCI, V49, P1185. BATES MJ, 2002, EMERGING FRAMEWORKS, P137. BATESON G, 1972, STEPS ECOLOGY MIND. BELKIN NJ, 2005, THEORIES INFORM BEHA, P44. BLAIR DC, 1990, LANGUAGE REPRESENTAT. BLAIR DC, 1992, COMPUT J, V35, P200. BLAIR DC, 2003, ANNU REV INFORM SCI, V37, P3. BLAKEMORE D, 1992, UNDERSTANDING UTTERA. BORGMAN CL, 1989, INFORM PROCESS MANAG, V25, P237. BORKO H, 1968, AM DOC, V19, P3. BORLUND P, 2003, INFORM RES, V8. BORLUND P, 2003, J AM SOC INF SCI TEC, V54, P913. BOULDING K, 1956, IMAGE KNOWLEDGE LIFE. BRADFORD SC, 1934, ENGINEERING-LONDON, V137, P85. BROOKES BC, 1980, CANADIAN J INFORMATI, V5, P199. BROOKES BC, 1980, J AM SOC INFORM SCI, V31, P248. BROOKS BC, 1975, 530 FID VINITI, P115. BUCKLAND MK, 1991, J AM SOC INFORM SCI, V42, P351. CHEN Z, 2005, P 38 HAW INT C SYST. CORNWELL P, 2002, PORTRAIT KILLER JACK. COSIJN E, 2000, INFORM PROCESSING MA, V31, P191. CUADRA CA, 1967, J DOC, V23, P291. DRABENSTOTT KM, 1991, 9 RASD, P59. DRABENSTOTT KM, 1994, USING SUBJECT HEADIN. DRABENSTOTT KM, 2003, J AM SOC INF SCI TEC, V54, P836. EICHMAN TL, 1978, RQ, V17, P212. FAIRTHORNE RA, 1969, J DOC, V25, P319. FALLOWS J, 2005, NY TIMES 0612, BU3. FIEDLER LA, 1966, LOVE DEATH AM NOVEL. FRANZEN J, 1992, STRONG MOTION. FROEHLICH TJ, 1994, J AM SOC INFORM SCI, V45, P124. FURNER J, 2004, LIBR TRENDS, V52, P427. GOATLEY A, 1997, LANGUAGE METAPHORS. 
GREEN R, 1995, J AM SOC INFORM SCI, V46, P646. GRICE HP, 1975, SYNTAX SEMANTICS, V3, P41. HARDY AP, 1982, INFORM PROCESS MANAG, V18, P289. HARTER SP, 1992, J AM SOC INFORM SCI, V43, P602. HARTER SP, 1996, J AM SOC INFORM SCI, V47, P37. HAYKIN DJ, 1951, SUBJECT HEADINGS PRA. HIRSH SG, 2004, YOUTH INFORM SEEKING, P241. HJORLAND B, 1995, J AM SOC INFORM SCI, V46, P400. HOLSCHER C, 2000, P 9 INT WWW C. HORN LR, 1984, MEANING FORM USE CON, P11. INGWERSEN P, 1992, INFORM RETRIEVAL INT. JANES JW, 1994, J AM SOC INFORM SCI, V45, P160. JANSEN BJ, 2000, INFORM PROCESS MANAG, V36, P207. JANSEN BJ, 2000, INFORM RES, V6. JANSEN BJ, 2001, J AM SOC INF SCI TEC, V52, P235. JONES KS, 1981, INFORMATION RETRIEVA, P256. JONES KS, 1991, J AM SOC INFORM SCI, V42, P558. JONES KS, 1997, READINGS INFORM RETR, P305. KLINKENBORG V, 2003, NY TIMES 1112, A20. KOESTLER A, 1964, ACT CREATION STUDY C. LANCASTER FW, 1968, INFORM RETRIEVAL SYS. LEVINE MM, 1977, J AM SOC INFORM SCI, V28, P101. MACLEOD D, 2006, GUARDIAN UNLIMI 0326. MANN T, 1993, LIB RES MODELS GUIDE. MARON ME, 1982, P 5 ANN ACM C RES DE, P98. MAY KO, 1968, ISIS, V69, P363. MCCAIN KW, 1989, J AM SOC INFORM SCI, V40, P110. MCEWAN I, 1998, AMSTERDAM. MIZZARO S, 1997, J AM SOC INFORM SCI, V48, P810. MONTGOMERY C, 1962, AM DOC, V13, P359. OPPENHEIM C, 1997, J DOC, V53, P477. PAISLEY WJ, 1968, WE MAY THINK INFORM. PALMER CL, 1999, J AM SOC INFORM SCI, V50, P1139. PAO ML, 1989, J AM SOC INFORM SCI, V40, P226. POWERS R, 1991, GOLD BUG VARIATIONS. PRATT AD, 1982, INFORM IMAGE. RAMOS FY, 1998, J PRAGMATICS, V30, P305. REITZ JM, 2004, INFORM SCI DICT LIB. RUTHVEN I, 2003, KNOWL ENG REV, V18, P95. SALTON G, 1975, DYNAMIC INFORM LIB P. SARACEVIC T, 1975, J AM SOC INFORM SCI, V26, P321. SARACEVIC T, 1991, P ASIS ANNU MEET, V28, P82. SARACEVIC T, 1996, INFORMATION SCI INTE, P201. SCHAMBER L, 1990, INFORM PROCESS MANAG, V26, P755. SCHAMBER L, 1994, ANNU REV INFORM SCI, V29, P3. SCHANK R, 1997, HALS LEGACY, P171. SMITH A, 2002, CORRELATION RAE RATI. 
SPERBER D, 1986, RELEVANCE COMMUNICAT. SPERBER D, 1995, RELEVANCE COMMUNICAT. SPERBER D, 1996, BEHAV BRAIN SCI, V19, P530. SPINK A, 2001, J AM SOC INF SCI TEC, V52, P226. SWANSON DR, 1977, LIBRARY Q, V47, P128. SWANSON DR, 1986, LIBR QUART, V56, P389. SWANSON DR, 1997, ARTIF INTELL, V91, P183. SWANSON DR, 1999, LIBR TRENDS, V48, P48. TAGUESUTCLIFFE J, 1995, MEASURING INFORM INF. TAYLOR RS, 1968, COLL RES LIBR, V29, P178. WARNER J, 2003, B AM SOC INFORM INF, V30, P26. WHITE HD, 1981, ONLINE REV, V5, P47. WHITE HD, 1990, P 11 NAT ONL M, P453. WHITE HD, 1992, INFORMATION SPECIALI, P249. WHITE HD, 2002, CHI 2002 DISC ARCH W. WHITE HD, 2004, SCIENTOMETRICS, V60, P93. WHITE HD, 2005, P 10 INT C INT SOC S, V2, P442. WHITE HD, 2007, J AM SOC INF SCI TEC, V58, P536. WILSON D, 2002, U COLL LONDON WORKIN, V14, P249. WILSON P, 1968, 2 KINDS POWER ESSAY. WILSON P, 1973, INFORMATION STORAGE, V9, P457. WILSON P, 1978, DREXEL LIBRARY Q, V14, P10. WILSON P, 1986, RQ, V25, P468. WOLFRAM D, 2003, APPL INFORMETRICS IN. YANG KD, 2005, ANNU REV INFORM SCI, V39, P33. ZIPF GK, 1949, HUMAN BEHAV PRINCIPL. From Christina.Pikas at JHUAPL.EDU Wed Nov 28 16:10:08 2007 From: Christina.Pikas at JHUAPL.EDU (Pikas, Christina K.) 
Date: Wed, 28 Nov 2007 16:10:08 -0500 Subject: Taylor B, Wylie E, Dempster M, et al., Systematically retrieving research: A case study evaluating seven databases, RESEARCH ON SOCIAL WORK PRACTICE 17 (6): 697-706 NOV 2007 In-Reply-To: A Message-ID: Essentially they decided that they need a librarian, too ;) Christina -----Original Message----- From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Eugene Garfield Sent: Wednesday, November 28, 2007 3:19 PM To: SIGMETRICS at listserv.utk.edu Subject: [SIGMETRICS] Taylor B, Wylie E, Dempster M, et al., Systematically retrieving research: A case study evaluating seven databases, RESEARCH ON SOCIAL WORK PRACTICE 17 (6): 697-706 NOV 2007 From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 16:11:47 2007 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Wed, 28 Nov 2007 16:11:47 -0500 Subject: Pfeiffer T, Hoffmann R "Temporal patterns of genes in scientific publications" Proceedings of the National Academy of Sciences of the United States of America 104(29): 12052-12056, July 17, 2007 Message-ID: E-mail Addresses: pfeiffer at fas.harvard.edu Title: Temporal patterns of genes in scientific publications Author(s): Pfeiffer T (Pfeiffer, Thomas), Hoffmann R (Hoffmann, Robert) Source: PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 104 (29): 12052-12056 JUL 17 2007 Document Type: Article Language: English Cited References: 30 Times Cited: 0 Abstract: Publications in scientific journals contain a considerable fraction of our scientific knowledge. Analyzing data from publication databases helps us understand how this knowledge is obtained and how it changes over time. In this study, we present a mathematical model for the temporal dynamics of data on the scientific content of publications. Our data set consists of references to thousands of genes in the >15 million publications listed in PubMed. We show that the observed dynamics may result from a simple process: Researchers predominantly publish on genes that already appear in many publications. This might be a rewarding strategy for researchers, because there is a positive correlation between the frequency of a gene in scientific publications and the journal impact of the publications. By comparing the empirical data with model predictions, we are able to detect unusual publication patterns that often correspond to major achievements in the field. 
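The "publish on what is already much published on" process that the abstract describes is a rich-get-richer (preferential attachment) dynamic. A toy simulation shows how it concentrates the literature on a few genes; every parameter here is illustrative, not taken from the paper:

```python
import random

random.seed(42)

# Minimal rich-get-richer sketch: each new publication picks a gene with
# probability proportional to how often that gene has been written about,
# plus a small constant so unstudied genes can still be discovered.
NUM_GENES = 50        # illustrative, not from the paper
NUM_PAPERS = 2000
BASE_WEIGHT = 1.0     # baseline chance of picking a so-far-unstudied gene

counts = [0] * NUM_GENES
for _ in range(NUM_PAPERS):
    weights = [BASE_WEIGHT + c for c in counts]
    gene = random.choices(range(NUM_GENES), weights=weights)[0]
    counts[gene] += 1

# A few genes end up dominating the literature, as in the empirical data.
counts.sort(reverse=True)
print("most-studied genes: ", counts[:5])
print("least-studied genes:", counts[-5:])
```

Under this rule the distribution of publications per gene becomes highly skewed, which is the kind of pattern the authors compare against PubMed data to flag genes whose publication history departs from the simple model.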
We identify interactions between yeast genes from PubMed and show that the frequency differences of genes in publications lead to a biased picture of the resulting interaction network. Addresses: Pfeiffer T (reprint author), Harvard Univ, Program Evolut Dynam, One Brattle Square, Cambridge, MA 02138 USA Harvard Univ, Program Evolut Dynam, Cambridge, MA 02138 USA MIT, Comp Sci & Artificial Intelligence Lab, Cambridge, MA 02139 USA E-mail Addresses: pfeiffer at fas.harvard.edu Publisher: NATL ACAD SCIENCES, 2101 CONSTITUTION AVE NW, WASHINGTON, DC 20418 USA Subject Category: Multidisciplinary Sciences IDS Number: 192KA ISSN: 0027-8424 CITED REFERENCES: BARABASI AL Emergence of scaling in random networks SCIENCE 286 : 509 1999 BLACKMORE S MEME MACHINE : 1999 BLACKMORE S SCI AM 283 : 52 2000 BORNER K The simultaneous evolution of author and paper networks PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 101 : 5266 2004 BOYD R CULTURE EVOLUTIONARY : 1988 BOYD R Meme theory oversimplifies how culture changes SCIENTIFIC AMERICAN 283 : 70 2000 CAVALLISFORZA LL CULTURAL TRANSMISSIO : 1981 CHEN CM Searching for intellectual turning points: Progressive knowledge domain visualization PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 101 : 5303 2004 DAWKINS R SELFISH GENE : 1976 DUGATKIN LA Animals imitate, too SCIENTIFIC AMERICAN 283 : 67 2000 GARFIELD E CITATION ANALYSIS AS A TOOL IN JOURNAL EVALUATION - JOURNALS CAN BE RANKED BY FREQUENCY AND IMPACT OF CITATIONS FOR SCIENCE POLICY STUDIES SCIENCE 178 : 471 1972 GAVIN AC Functional organization of the yeast proteome by systematic analysis of protein complexes NATURE 415 : 141 2002 GUIMERA R Team assembly mechanisms determine collaboration network structure and team performance SCIENCE 308 : 697 2005 HO Y Systematic identification of protein complexes in Saccharomyces cerevisiae by mass spectrometry NATURE 415 : 180 2002 HOFFMANN R Implementing the iHOP concept for 
navigation of biomedical literature BIOINFORMATICS 21 : 252 2005 HOFFMANN R A gene network for navigating the literature NATURE GENETICS 36 : 664 2004 HOFFMANN R Life cycles of successful genes TRENDS IN GENETICS 19 : 79 2003 HOFFMANN R Protein interaction: same network, different hubs TRENDS IN GENETICS 19 : 681 2003 IOANNIDIS JPA Why most published research findings are false PLOS MEDICINE 2 : 696 2005 Art. No. e124 ITO T A comprehensive two-hybrid analysis to explore the yeast protein interactome PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 98 : 4569 2001 KUHN TS STRUCTURE SCI REVOLU : 1962 NEWMAN MEJ The structure of scientific collaboration networks PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA 98 : 404 2001 PLOTKIN H People do more than imitate SCIENTIFIC AMERICAN 283 : 72 2000 POPPER K OBJECTIVE KNOWLEDGE : 1972 PRICE DJD NETWORKS OF SCIENTIFIC PAPERS SCIENCE 149 : 510 1965 REDNER S Citation statistics from 110 years of Physical Review PHYSICS TODAY 58 : 49 2005 REDNER S Aggregation kinetics of popularity PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS 306 : 402 2002 SIMKIN MV COMPLEX SYSTEMS 14 : 269 2003 UETZ P A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae NATURE 403 : 623 2000 VONMERING C Comparative assessment of large-scale data sets of protein-protein interactions NATURE 417 : 399 2002 From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 16:22:02 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 28 Nov 2007 16:22:02 -0500 Subject: Scully, C "The positive and negative impacts, and dangers of the impact factor" Community Dental Health 24(3):130-134, September 2007 Message-ID: E-mail Address: c.scully at eastman.ucl.ac.uk Title: The positive and negative impacts, and dangers of the impact factor Authors: Scully, C (Crispian) Source: COMMUNITY DENTAL HEALTH 24 (3): 130-134 SEP 2007 Language: English Document
Type: Editorial Material Cited Reference Count: 25 Times Cited: 0 Abstract: The journal impact factor (IF) is widely used but surrounded by considerable controversy. It is important to restrict it to its appropriate uses. The IF can reasonably be useful for evaluating a journal, but even then can be influenced by many factors such as the number of review papers, letters or other types of material published, variations between disciplines, and various biases. The extent to which the IF is appropriate for evaluating the quality of an individual, department or institution, however, is highly debatable. Reprint Address: Scully, C, UCL, Eastman Dent Inst, 256 Grays Inn Rd, London WC1X 8LD, England. Research Institution addresses: UCL, Eastman Dent Inst, London WC1X 8LD, England E-mail Address: c.scully at eastman.ucl.ac.uk Cited References: 2001, NATURE, V409, P745. 2005, NATURE, V435, P1003. ADAM D, 2002, NATURE, V415, P726. BLOCH S, 2001, AUST NZ J PSYCHIAT, V35, P563. BOLLEN J, SCIENTOMETRICS, V69. CHEW FS, 1988, AM J ROENTGENOL, V150, P31. EGGHE L, 2006, SCIENTOMETRICS, V69, P131. GARFIELD E, 1970, NATURE, V227, P669. GARFIELD E, 1972, CURR CONTENTS, V1, P270. GARFIELD E, 1972, SCIENCE, V178, P471. GARFIELD E, 1986, ANN INTERN MED, V105, P313. GARFIELD E, 1998, UNFALLCHIRURG, V101, P413. HIRSCH JE, 2005, ARXIVPHYSICS0508025, V5. JIN B, 2007, ISSI NEWSLETTER, V3, P6. KALTENBORN KF, 2003, MED KLIN, V98, P153. KRAUZE TK, 1971, J AM SOC INFORM SCI, V22, P333. MOED HF, 1995, J AM SOC INFORM SCI, V46, P461. MOLLER AP, 1990, NATURE, P348. PABLO D, 2006, SCIENTOMETRICS, V68, P179. SAPER CB, 1999, J COMP NEUROL, V411, P1. SCULLY C, 2005, BRIT DENT J, V198, P391. SEGLEN PO, 1997, BRIT MED J, V314, P498. SIDIROPOULOS A, 2006, ARXIVCSDL0607066, V6. SIMKIN MV, 2003, COMPLEX SYSTEMS, V14, P269. TALAMANCA AF, 2002, B GROUP INT RECH SCI, V44, P2. 
Publisher: F D I WORLD DENTAL PRESS LTD; 5 BATTERY GREEN RD, LOWESTOFT NR32 1DE, SUFFOLK, ENGLAND Subject Category: Dentistry, Oral Surgery & Medicine ISSN: 0265-539X IDS Number: 213RL From garfield at CODEX.CIS.UPENN.EDU Wed Nov 28 16:26:51 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Wed, 28 Nov 2007 16:26:51 -0500 Subject: McDonald RJ, Cloft HJ, Kallmes DF. "Fate of submitted manuscripts rejected from the American journal of neuroradiology: Outcomes and commentary" AMERICAN JOURNAL OF NEURORADIOLOGY 28 (8): 1430-1434 SEP 2007 Message-ID: E-mail Addresses: kallmes.david at mayo.edu Title: Fate of submitted manuscripts rejected from the American journal of neuroradiology: Outcomes and commentary Author(s): McDonald RJ (McDonald, R. J.), Cloft HJ (Cloft, H. J.), Kallmes DF (Kallmes, D. F.) Source: AMERICAN JOURNAL OF NEURORADIOLOGY 28 (8): 1430-1434 SEP 2007 Document Type: Article Language: English Cited References: 5 Times Cited: 0 Abstract: BACKGROUND AND PURPOSE: The purpose of this study was to determine the publication fate of submissions previously rejected from the American Journal of Neuroradiology (AJNR) to provide guidance to authors who receive rejection notices. MATERIALS AND METHODS: A retrospective search by using MEDLINE of all submissions rejected from AJNR in 2004 was performed to identify subsequently published manuscripts. The fate of subsequently published manuscripts was analyzed as a function of submission type (major study, technical note, or case report), publication delay, publishing journal type (neuroradiology, general radiology, or clinical neuroscience journal), impact factor, publication volume, and circulation volume. RESULTS: Of the 554 rejected submissions to AJNR, 315 (56%) were subsequently published in 115 different journals, with the journal Neuroradiology publishing the greatest number of articles (37 [12%] of 315). The mean publication delay was 15.8 +/- 7.5 months. 
Major studies were more likely than case reports to be subsequently published (P = .034), but all 3 subtypes were published at rates greater than 50%. Radiologic journals collectively published approximately 60% of subsequent publications, whereas neurosurgery and neurology journals published 27% of rejected manuscripts. The mean impact factor of journals subsequently publishing rejected manuscripts was 1.8 +/- 1.3 (AJNR = 2.5), and 24 (7.5%) manuscripts were subsequently published in journals with higher impact factors than AJNR. CONCLUSIONS: These findings should give hope to authors receiving a rejection from AJNR, because greater than 50% of articles rejected from AJNR are subsequently published within 2-3 years, irrespective of publication type, in high-quality journals. KeyWords Plus: PUBLICATION; IMPACT Addresses: Kallmes DF (reprint author), Mayo Clin, Dept Radiol, 200 1st St SW, Rochester, MN 55905 USA Mayo Clin, Dept Radiol, Rochester, MN 55905 USA Mayo Clin, Coll Med, Med Scientist Training Program, Rochester, MN USA E-mail Addresses: kallmes.david at mayo.edu Publisher: AMER SOC NEURORADIOLOGY, 2210 MIDWEST RD, OAK BROOK, IL 60521 USA IDS Number: 213DC ISSN: 0195-6108 CITED REFERENCES : 2005 SCI J CITATIONS : 2006 CHEW FS FATE OF MANUSCRIPTS REJECTED FOR PUBLICATION IN THE AJR AMERICAN JOURNAL OF ROENTGENOLOGY 156 : 627 1991 GARFIELD E The history and meaning of the journal impact factor JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION 295 : 90 2006 GARFIELD E CITATION ANALYSIS AS A TOOL IN JOURNAL EVALUATION - JOURNALS CAN BE RANKED BY FREQUENCY AND IMPACT OF CITATIONS FOR SCIENCE POLICY STUDIES SCIENCE 178 : 471 1972 MARX WF The fate of neuroradiologic abstracts presented at national meetings in 1993: Rate of subsequent publication in peer-reviewed, indexed journals AMERICAN JOURNAL OF NEURORADIOLOGY 20 : 1173 1999 From isidro at CINDOC.CSIC.ES Thu Nov 29 09:04:31 2007 From: isidro at CINDOC.CSIC.ES (Isidro F. 
Aguillo) Date: Thu, 29 Nov 2007 15:04:31 +0100 Subject: The b-index: New paper in Cybermetrics Message-ID: The b index as a measure of scientific excellence. A promising supplement to the h index Lutz Bornmann, Rüdiger Mutz, Hans-Dieter Daniel Cybermetrics (Nov 2007), 11(1): paper 6 We propose the b index as a measure of scientific excellence at the micro and meso levels, as a promising supplement to the h index and its variants (such as g index and R index) http://www.cindoc.csic.es/cybermetrics/articles/v11i1p6.html -- ========================== Isidro F. Aguillo Laboratorio de Cibermetría isidro @ cindoc.csic.es CINDOC - CSIC Joaquín Costa, 22 28002 Madrid. Spain 34-91-5635482 ext 313 ========================== From garfield at CODEX.CIS.UPENN.EDU Thu Nov 29 10:07:51 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Thu, 29 Nov 2007 10:07:51 -0500 Subject: Sandstrom, Pamela Effrein, and Howard D. White. 2007. The impact of cultural materialism: A bibliometric analysis of the writings of Marvin Harris. Message-ID: Email address: sandstrp at ipfw.edu Sandstrom, Pamela Effrein, and Howard D. White. 2007. The impact of cultural materialism: A bibliometric analysis of the writings of Marvin Harris. In Lawrence A. Kuznar and Stephen K. Sanderson (eds). Studying societies and cultures: Marvin Harris's cultural materialism and its legacy. Boulder, CO: Paradigm Publishers. 20-55. Assessing the impact of a scholar on a field of study is daunting, especially when that scholar is as prolific and controversial as Marvin Harris. This chapter is the first in a volume aimed at appraising the significance of the cultural materialist research strategy to which Harris dedicated his life. We will document Harris's publishing career and demonstrate his influence in anthropology and cognate fields using bibliometric techniques developed by information scientists. 
These techniques are nicknamed CAMEOs, short for "Characterizations Automatically Made and Edited Online." They make visible the authors that Harris cited and reveal the researchers who cited him in turn. They also suggest his topical range. By modeling the professional interests of Harris and those interested in his work, we glimpse into the social and intellectual structure of contemporary social and behavioral science and trace Harris's impact within and across disciplinary boundaries. This CAMEO portrait of Harris provides a systematic and empirically based look at the content and breadth of cultural materialism. It is designed to reflect the same scientific methodological requirements that were near and dear to Harris's research program. From garfield at CODEX.CIS.UPENN.EDU Thu Nov 29 14:01:25 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Thu, 29 Nov 2007 14:01:25 -0500 Subject: Rethlefsen ML, Wallis LCPublic health citation patterns: an analysis of the American Journal of Public Health, 2003-2005 JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 95 (4): 408-415 OCT 2007 Message-ID: E-mail Addresses: mlrethlefsen at gmail.com, l-wallis at neiu.edu Title: Public health citation patterns: an analysis of the American Journal of Public Health, 2003-2005 Author(s): Rethlefsen ML (Rethlefsen, Melissa L.), Wallis LC (Wallis, Lisa C.) Source: JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 95 (4): 408-415 OCT 2007 Document Type: Article Language: English Cited References: 43 Times Cited: 0 Abstract: Objectives: The research sought to determine the publication types cited most often in public health as well as the most heavily cited journal titles. Methods: From a pool of 33,449 citations in 934 articles published in the 2003-2005 issues of American Journal of Public Health, 2 random samples were drawn: one (n = 1,034) from the total set of citations and one (n = 1,016) from the citations to journal articles. 
For each sampled citation, investigators noted publication type, publication date, uniform resource locator (URL) citation (yes/no), and, for the journal article sample, journal titles. The cited journal titles were analyzed using Bradford zones. Results: The majority of cited items from the overall sample of 1,034 items were journal articles (64.4%, n = 666), followed by government documents (n = 130), books (n = 122), and miscellaneous sources (n = 116). Publication dates ranged from 1826 to 2005 (mean = 1995, mode = 2002). Most cited items were between 0 and 5 years old (50.3%, n = 512). In the sample of 1,016 journal article citations, a total of 387 journal titles were cited. Discussion: Analysis of cited material types revealed results similar to citation analyses in specific public health disciplines, including use of materials from a wide range of disciplines, reliance on miscellaneous and government documents, and need for older publications. KeyWords Plus: BIBLIOGRAPHIC IMPACT-FACTOR; BIBLIOMETRIC ANALYSIS; RESEARCH PRODUCTIVITY; INFECTIOUS-DISEASES; MEDICINE JOURNALS; INFORMATION USE; TRENDS; FIELD; LIST Addresses: Rethlefsen ML (reprint author), Mayo Clin Lib, 200 1st St SW, Rochester, MN 55905 USA Mayo Clin Lib, Rochester, MN 55905 USA NE Illinois Univ, Ronald Williams Lib, Chicago, IL 60625 USA E-mail Addresses: mlrethlefsen at gmail.com, l-wallis at neiu.edu Publisher: MEDICAL LIBRARY ASSOC, 65 EAST WACKER PLACE, STE 1900, CHICAGO, IL 60601-7298 USA Subject Category: Information Science & Library Science ISSN: 1536-5050 Cited References: *AM PUBL HLTH ASS, 2006, J. *CREAT RES SYST, 2003, SAMP SIZ CALC. *PUBL HLTH HLTH AD, 2006, COR PUBL HLTH J VERS. *TASK FORC MAPP NU, 2006, NURS ALL HLTH RES SE. ALLEN MP, 2006, J MED LIBR ASSOC, V94, P206. ALLEN MP, 2006, J MED LIBR ASSOC, V94, E43. ALLISON MM, 2006, J MED LIBR ASSOC, V94, E74. ALPI KM, 2007, J MED LIBR ASSOC, V95, E6. BIRADAR BS, 2000, SRELS J INFORM MANAG, V37, P199. 
BLIZIOTIS IA, 2005, BMC INFECT DIS, V5. BURRIGHT MA, 2005, COLL RES LIBR, V66, P198. CAMERON BD, 2005, PORTAL-LIBR ACAD, V5, P105. CRAWLEYLOW J, 2006, J MED LIBR ASSOC, V94, P430. FALAGAS ME, 2005, J MED VIROL, V76, P229. FALAGAS ME, 2006, BMC INFECT DIS, V6. FRANKS AL, 2006, AM J PREV MED, V30, P211. GEBBIE K, 2000, PUBLIC HLTH WORK FOR. GEHANNO JF, 2000, OCCUP ENVIRON MED, V57, P706. GEORGAS H, 2005, COLL RES LIBR, V66, P496. HASBROUCK LM, 2003, AM J EPIDEMIOL, V157, P399. HUA Y, 2005, MEM I OSWALDO CRUZ, V100, P805. KELSEY P, 2003, COLL RES LIBR, V64, P357. KNIEVEL JE, 2005, LIBR QUART, V75, P142. KUSHKOWSKI JD, 2003, PORTAL-LIBR ACAD, V3, P459. LOPEZABENTE G, 2005, BMC PUBLIC HEALTH, V5. MOORBATH P, 1993, ASLIB P, V45, P39. ORTEGA L, 2006, COLL RES LIBR, V67, P446. PORTA M, 1996, J EPIDEMIOL COMMUN H, V50, P606. PORTA M, 2003, CAD SAUDE PUBLICA, V19, P1847. PORTA M, 2004, SOZ PRAVENTIV MED, V49, P15. RAMOS JM, 2004, EUR J CLIN MICROBIOL, V23, P180. RETHLEFSEN ML, 2006, 106 ANN M MED LIB AS. REVERE D, 2007, IN PRESS J BIOMED IN. SCHLOMAN BF, 1997, B MED LIBR ASSOC, V85, P278. SCHOONBAERT D, 2004, TROP MED INT HEALTH, V9, P1142. SMITH ET, 2003, COLL RES LIBR, V64, P344. SOTERIADES ES, 2006, SOZ PREVENTIVMED, V6, P301. SPASSER MA, 2006, J MED LIBR ASSOC, V94, E137. TAYLOR MK, 2007, J MED LIBR ASSOC, V95, E58. THOMPSON ISI, 2006, 2004 JCR SCI. THOMPSON ISI, 2006, 2005 JCR SCI EDITION. THOMPSON ISI, 2006, JCR SOCIAL SCI EDITI. VERGIDIS PI, 2005, EUR J CLIN MICROBIOL, V24, P342. 
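The Bradford-zone step in the Rethlefsen and Wallis abstract above (ranking the cited journal titles by citation count, then splitting the ranking into zones that each receive roughly an equal share of the citations) can be sketched in a few lines. The sketch below is illustrative only: the `bradford_zones` helper and the toy citation data are invented for this example, not the authors' code or data.

```python
from collections import Counter

def bradford_zones(cited_titles, n_zones=3):
    """Split journals, ranked by citation count, into zones of roughly
    equal citation share (a rough Bradford-style zoning)."""
    counts = Counter(cited_titles).most_common()  # journals ranked by citations
    total = sum(c for _, c in counts)
    per_zone = total / n_zones
    zones, current, cum = [], [], 0
    for title, c in counts:
        current.append(title)
        cum += c
        # Close a zone once the cumulative count passes the next boundary
        if cum >= per_zone * (len(zones) + 1) and len(zones) < n_zones - 1:
            zones.append(current)
            current = []
    zones.append(current)
    return zones

# Toy data: a highly skewed citation distribution, as Bradford's law predicts.
citations = (["Am J Public Health"] * 60 + ["JAMA"] * 30 + ["Lancet"] * 10 +
             ["BMJ"] * 8 + ["Soc Sci Med"] * 6 + ["Epidemiology"] * 4 +
             ["J Health Econ"] * 2 + ["Other A"] * 1 + ["Other B"] * 1)
zones = bradford_zones(citations)
# The first zone holds a few "core" journals; later zones hold many minor ones.
```

With skewed counts like these, the zone membership grows from zone to zone (here 1, 1, and 7 titles), which is exactly the pattern Bradford zoning is meant to expose.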
From garfield at CODEX.CIS.UPENN.EDU Thu Nov 29 14:04:18 2007 From: garfield at CODEX.CIS.UPENN.EDU (=?windows-1252?Q?Eugene_Garfield?=) Date: Thu, 29 Nov 2007 14:04:18 -0500 Subject: Dee CR, The development of the medical literature analysis and retrieval system (MEDLARS), JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 95 (4): 416-425 OCT 2007 Message-ID: E-mail Addresses: cdee at cas.usf.edu Title: The development of the medical literature analysis and retrieval system (MEDLARS) Author(s): Dee CR (Dee, Cheryl Rae) Source: JOURNAL OF THE MEDICAL LIBRARY ASSOCIATION 95 (4): 416-425 OCT 2007 Document Type: Article Language: English Cited References: 66 Times Cited: 0 Abstract: Objective: The research provides a chronology of the US National Library of Medicine's (NLM's) contribution to access to the world's biomedical literature through its computerization of biomedical indexes, particularly the Medical Literature Analysis and Retrieval System (MEDLARS). DOI: 10.3163/1536-5050.95.4.416 Method: Using material gathered from NLM's archives and from personal interviews with people associated with developing MEDLARS and its associated systems, the author discusses key events in the history of MEDLARS. Discussion: From the development of the early mechanized bibliographic retrieval systems of the 1940s to the beginnings of online, interactive computerized bibliographic search systems of the early 1970s chronicled here, NLM's contributions to automation and bibliographic retrieval have been extensive. Conclusion: As NLM's technological experience and expertise grew, innovative bibliographic storage and retrieval systems emerged. 
NLM's accomplishments regarding MEDLARS were cutting edge, placing the library at the forefront of incorporating mechanization and technologies into medical information systems. KeyWords Plus: MEDLARS Addresses: Dee CR (reprint author), Univ S Florida, Sch Lib & Informat Sci, Lakeland, FL 33803 USA Univ S Florida, Sch Lib & Informat Sci, Lakeland, FL 33803 USA E-mail Addresses: cdee at cas.usf.edu Publisher: MEDICAL LIBRARY ASSOC, 65 EAST WACKER PLACE, STE 1900, CHICAGO, IL 60601-7298 USA Subject Category: Information Science & Library Science ISSN: 1536-5050 Cited References: 1964, WALL STREET J 0803, P12. *AM MED ASS, 1927, Q CUM IND MED. *ARM MED LIB, 1936, IND CAT LIB SURG GEN. *LIB SURG GEN OFF, 1880, IND CAT LIB SURG GEN. *NAT LIB MED IND M, 1961, B MED LIB ASS, V49, P1. *NAT LIB MED, 2001, MEDLARS EV ADV COMM, P68. *NAT LIB MED, 2007, FACT SHEET MEDLINE. *NAT LIB MED, 2007, FACT SHEET PUBMED ME. *US ARM MED LIB, 1941, CURR LIST MED LIT. *US NAT LIB MED AR, 1964, UNPUB EV STAR 0803. *US NAT LIB MED AR, 1964, UNPUB PHOT ZIP MOD 9. *US NAT LIB MED, 1963, MEDLARS STOR NAT LIB, P1. *US NAT LIB MED, 1964, FACT SHEET MED LIT A, P1. *US NAT LIB MED, 1964, GRAPH ARTS COMP EQUI. *US NAT LIB MED, 1966, GUID MEDLARS SERV, P1. *US NAT LIB MED, 1970, PRINC MEDLARD, P1. *US NAT LIB MED, 2005, ANN REP FISC YEAR 19. *US NAT LIB MED, 2005, NLM FUNCT STAT. ADAMS S, 1972, B MED LIB ASS, V60, P523. ADAMS S, 1981, MED BIBLIOGRAPHY AGE. AUSTIN CJ, 1963, S COMP ITS POT MED C. AUSTIN CJ, 1968, MEDLARS 1963 1967, P1. BILLINGS JS, 1879, INDEX MEDICUS. BOURNE CP, 2003, HIST ONLINE INFORM S. BRODMAN E, 1954, DEV MED BIBLIOGRAPHY. CUMMINGS MM, 1964, B MED LIB ASS, V52, P159. CUMMINGS MM, 1964, C EL INF HAND PITTS. CUMMINGS MM, 1964, SUBC COMM APPR HOUS, P592. CUMMINGS MM, 1965, 7 MED S SPONS INT BU. CUMMINGS MM, 1965, UNPUB AM MED WRIT AS. CUMMINGS MM, 1965, UNPUB IN SCI PROGR R. CUMMINGS MM, 1965, UNPUB PERSONAL CORRE. CUMMINGS MM, 1965, UNPUB PLAC COMP ORG. 
CUMMINGS MM, 1965, US HOUSE DEP LAB HLT, P754. CUMMINGS MM, 1966, HOUSE HEARINGS 0306, P784. CUMMINGS MM, 1967, UNPUB 95 AM LIB ASS. CUMMINGS MM, 1967, UNPUB STAFF C U ILL. CUMMINGS MM, 1967, US HOUSE DEP LAB HLT, P632. CUMMINGS MM, 1968, EVALUATION MEDLARS D. CUMMINGS MM, 1968, HOUSE HEARING, P1071. CUMMINGS MM, 1969, US HOUSE DEP LAB HLT, P1126. CUMMINGS MM, 1970, HOUSE HEARING, P1324. CUMMINGS MM, 1972, HOUSE HEARING, P1493. CUMMINGS MM, 1978, UNPUB NATL LIB MED Q. CUMMINGS MM, 1986, NAT LIB MED PAST PRE, P27. DEE CR, 2004, COMMUNICATION. DEE CR, 2005, COMMUNICATION 0727. DEE CR, 2005, COMMUNICATION 0728. DEE CR, 2005, COMMUNICATION. FUMMINGS MM, 1965, UNPUB 50 ANN CEL HOU. FUMMINGS MM, 1965, UNPUB AM ASS ADV SCI. FUMMINGS MM, 1969, UNPUB C LIB INF SCI. GARFIELD E, 1979, ESSAYS INFORM SCI, V4, P341. GARRARD RF, 1964, PERIOD, V17, P1. LANCASTER FW, 1968, EVALUTAION MEDLARS D, P1. LARKEY SV, 1953, B MED LIB ASS, V41, P32. LUIGART FW, 1964, COURIER J MAGAZ 1122, P25. MACARN DB, 1973, SCIENCE, V181, P318. METCALF KD, 1944, NAT MED LIB REP ARMY. MILED WD, 1982, HIST NAT LIB MED NAT. OPPENHEIMER GJ, 1987, COMMUNICATION 0623. ROGERS FB, 1964, B MED LIB ASS, V52, P150. ROGERS FB, 1982, HENRY E SIGERIST S B, P77. TAINE SI, 1959, B MED LIB ASS, V47, P117. TAINE SI, 1963, UNPUB 2 INT C MED LI, P1. WILLIAMS RV, 1987, COMMUNICATION 0609. From loet at LEYDESDORFF.NET Fri Nov 30 02:01:26 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Fri, 30 Nov 2007 08:01:26 +0100 Subject: SJR Portal In-Reply-To: <13706.81.33.31.60.1195927462.squirrel@goliat7.ugr.es> Message-ID: Dear Felix, I could not resist the temptation to correlate rank orders of the 229 countries for the different indicators. The results are as follows (Spearman's rho; N = 229 in every cell; ** = correlation significant at the 0.01 level, 2-tailed):

                    Citable Docs   Cites    Self-Cites   Non-self Cites   Cites per Doc   H index
Citable Documents      1.000       .945**     .956**        .860**           .113         .958**
Cites                  .945**     1.000       .960**        .957**           .236**       .971**
Self-Cites             .956**      .960**    1.000          .873**           .188**       .970**
Non-self Cites         .860**      .957**     .873**       1.000             .245**       .891**
Cites per Doc          .113        .236**     .188**        .245**          1.000         .277**
H index                .958**      .971**     .970**        .891**           .277**      1.000

The exact 2-tailed p-values are .000 for all starred coefficients except Self-Cites vs. Cites per Doc (p = .004); the only non-significant coefficient, .113 (Citable Documents vs. Cites per Doc), has p = .088. The significances are perhaps generated by the large N. As can be expected, citations and publications are highly correlated as indicators at this level of aggregation. These indicators are also highly correlated with the H-index, but much less so with c/p. Some small countries (notably islands) are high on this indicator, but so are Switzerland, the USA, the Scandinavian countries and the Netherlands: 1. British Indian Ocean Territory 2. United States Minor Outlying Islands 3. Bermuda 4. Faroe Islands 5. Guinea-Bissau 6. San Marino 7. Panama 8. Haiti 9. Gambia 10. Virgin Islands (British) 11. Switzerland 12. Saint Lucia 13. Iceland 14. United States 15. Denmark 16. Netherlands 17. Seychelles 18. Sweden 19. Finland 20. Montserrat 21. Canada 22. United Kingdom 23. Greenland 24. Belgium 25. Israel It provides a bit of a different perspective on the "wealth of nations", doesn't it? With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. 
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Félix de Moya Anegón > Sent: Saturday, November 24, 2007 7:04 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: [SIGMETRICS] SJR Portal > > Administrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > Dear colleague, > > We are very glad to announce the launch of the SJR (SCImago > Journal & Country > Rank) portal. > The SJR portal is based on Scopus® data and includes the > SCImago Journal > Rank indicator. The portal provides rankings by subject area > or subject > category, showing the visibility of journals and countries through > scientific indicators such as SJR, H-index, Total docs., Total > refs., Total > cites, Citable docs., Cites per doc., Self-citation, etc., > since 1996. > These indicators have been calculated from the information > exported from > the Scopus® database in March 2007 and will be updated > periodically. For > this reason some of the figures shown in the SJR portal and > Scopus® may > not match. The coverage period of the country > and journal > indicators is at this moment 1996 to 2006. > > The platform is freely available at: http://www.scimagojr.com > > Any comments or suggestions will be welcome. > > Best wishes > > > ******************************* > Félix de Moya Anegón > http://www.ugr.es/~felix/ > Grupo SCIMAGO > http://www.scimago.es > http://www.atlasofscience.net > Universidad de Granada > ******************************* From peter.ohly at GESIS.ORG Fri Nov 30 11:38:08 2007 From: peter.ohly at GESIS.ORG (Ohly, H. 
Peter) Date: Fri, 30 Nov 2007 17:38:08 +0100 Subject: Call for Papers: 'Information and Evaluation', Naples September 1-5, 2008 In-Reply-To: Message-ID: Session 'Information and Evaluation' at the 7th RC33 International Conference on Social Science Methodology, Naples September 1-5, 2008. Session Organizers: H. Peter Ohly (peter.ohly at gesis.org); Max Stempfhuber (max.stempfhuber at gesis.org) Information and its dissemination are seen as important factors of productivity in global exchange and competition. Here, information is primarily understood as scientific knowledge and research results - which might enhance science itself as well as application domains. On the one hand, the question arises of how such information can best be acquired, processed and distributed. Approaches such as user participation, data accumulation, value adding and qualitative filtering are of concern. On the other hand, information on scientific outcomes is used to judge the structures of disciplines, developments in research, and the excellence of institutions and individual scientists. This raises questions about the reliability of information and its sources, the completeness, comparability and validity of data, and the role of indicators in positional judgements. This session targets the improvement of information transfer as well as of diagnostic procedures on information databases, and their mutual relationship. How the scientific community adapts in this context is also of interest. Please send your abstract as soon as possible to the session organizers. The abstract should be no longer than 250 words and should indicate your name, your email address, your institutional affiliation and up to three keywords. Deadline for abstracts: February 17, 2008. The session organizers or the organisation committee will inform you of acceptance by the end of March at the latest. (for details: http://www.rc332008.unina.it/ e.g. 
CfP) Mit freundlichen Gruessen, With kind regards, Sincères salutations, H. Peter OHLY ------------------------------------- GESIS / IZ Sozialwissenschaften / Lennestr. 30 / 53113 BONN / Germany / Tel.: +49-228-2281-542 / Fax.: +49-228-2281-4542 / mailto:peter.ohly at gesis.org / http://www.gesis.org/SocioGuide / http://www.bonn.iz-soz.de/wiss-org Visitors Address: GESIS / IZ Sozialwissenschaften / Produkte+Marketing / Dreizehnmorgenweg 42 / 53175 BONN (Metro-Stop: Platz der Vereinten Nationen) (Important: please take notice of the new telephone and FAX numbers!)