From loet at LEYDESDORFF.NET Fri Jun 1 03:52:44 2007
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Fri, 1 Jun 2007 09:52:44 +0200
Subject: ISTIC Journal Citation Reports 2005
Message-ID:

One can click on any of the journal names in the corresponding box and obtain the Pajek file corresponding to the citation environment of the journal ("citing" or "cited"). See for further explanation:

Zhou, Ping & Loet Leydesdorff, Visualization of the Citation Impact Environments in the CSTPC Journal Set (with a manual), Chinese Journal of Scientific and Technical Periodicals, 16(6) (2005) 773-780;

Zhou, Ping & Loet Leydesdorff, A Comparison between the China Scientific and Technical Papers and Citations Database and the Science Citation Index in terms of journal hierarchies and inter-journal citation relations, Journal of the American Society for Information Science and Technology, 58(2) (2007) 223-236.

Please provide a reference if you use this information.

Files available for each year:
2003: cited journal files; citing journal files
2004: cited journal files; citing journal files
2005: cited journal files; citing journal files

_____
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/
Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated.
385 pp.; US$ 18.95
The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics

From loet at LEYDESDORFF.NET Fri Jun 1 03:55:53 2007
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Fri, 1 Jun 2007 09:55:53 +0200
Subject: Chinese Journal Citations Report (ISTIC) 2005
Message-ID:

One can click on any of the journal names in the corresponding box and obtain the Pajek file corresponding to the citation environment of the journal ("citing" or "cited"). See for further explanation: Zhou & Leydesdorff, Chinese Journal of Scientific and Technical Periodicals, 16(6) (2005) 773-780; Zhou & Leydesdorff, Journal of the American Society for Information Science and Technology, 58(2) (2007) 223-236. Please provide a reference if you use this information.

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/
Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated. 385 pp.; US$ 18.95
The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics

From eugene.garfield at THOMSON.COM Sat Jun 2 15:09:24 2007
From: eugene.garfield at THOMSON.COM (Eugene Garfield)
Date: Sat, 2 Jun 2007 15:09:24 -0400
Subject: Journals accessed by primary care physicians
Message-ID:

Which journals do primary care physicians and specialists access from an online service?
K. Ann McKibbon, MLS, PhD; R. Brian Haynes, MD, PhD; R. James McKinlay, MSc; Cynthia Lokker, PhD
J Med Libr Assoc 95(3), Jul 2007

When responding, please attach my original message.

__________________________________________________
Eugene Garfield, PhD. email: garfield at codex.cis.upenn.edu
home page: www.eugenegarfield.org
Tel: 215-243-2205 Fax: 215-387-1266
President, The Scientist LLC. www.the-scientist.com
400 Market St., Suite 1250, Phila. PA 19106-
Chairman Emeritus, ISI www.isinet.com
3501 Market Street, Philadelphia, PA 19104-3302
Past President, American Society for Information Science and Technology (ASIS&T) www.asis.org

No virus found in this outgoing message. Checked by AVG Free Edition. Version: 7.5.472 / Virus Database: 269.8.6/828 - Release Date: 6/1/2007 11:22 AM

-------------- next part --------------
A non-text attachment was scrubbed...
Name: july07_mckibbon_preprint.pdf
Type: application/octet-stream
Size: 104857 bytes
Desc: july07_mckibbon_preprint.pdf
URL:

From eugene.garfield at THOMSON.COM Sat Jun 2 16:05:03 2007
From: eugene.garfield at THOMSON.COM (Eugene Garfield)
Date: Sat, 2 Jun 2007 16:05:03 -0400
Subject: Bradford's Law Challenged
Message-ID:

TITLE: Practical potentials of Bradford's law: a critical examination of the received view (Review, English)
AUTHOR: Nicolaisen, J; Hjorland, B
SOURCE: JOURNAL OF DOCUMENTATION 63 (3). 2007. p.359-377 EMERALD GROUP PUBLISHING LIMITED, BRADFORD

ABSTRACT: Purpose - The purpose of this research is to examine the practical potentials of Bradford's law in relation to core-journal identification. Design/methodology/approach - Literature studies and empirical tests (Bradford analyses). Findings - Literature studies reveal that the concept of "subject" has never been explicitly addressed in relation to Bradford's law. The results of two empirical tests (Bradford analyses) demonstrate that different operationalizations of the concept of "subject" produce quite different lists of core journals. Further, an empirical test reveals that Bradford analyses function discriminatorily against minority views. Practical implications - Bradford analysis can no longer be regarded as an objective and neutral method. The received view on Bradford's law needs to be revised. Originality/value - The paper questions one of the old dogmas of the field.

AUTHOR ADDRESS: J Nicolaisen, Royal Sch Lib & Informat Sci, Copenhagen, Denmark
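The Bradford analysis examined in this abstract can be sketched mechanically: rank the journals of a subject by the number of articles each contributes, then split the ranked list into zones that each hold roughly one third of the articles. Bradford's law predicts that the number of journals per zone grows roughly geometrically (1 : n : n^2). The journal counts below are invented for illustration, not taken from the paper.

```python
def bradford_zones(counts, n_zones=3):
    """Split journals (by article count, descending) into zones of ~equal article totals."""
    ranked = sorted(counts, reverse=True)
    target = sum(ranked) / n_zones          # articles each zone should hold
    zones, current, acc = [], [], 0
    for c in ranked:
        current.append(c)
        acc += c
        # close the current zone once the cumulative count crosses its threshold
        if len(zones) < n_zones - 1 and acc >= target * (len(zones) + 1):
            zones.append(current)
            current = []
    zones.append(current)                   # remainder forms the last zone
    return zones

# invented article counts for 26 journals in one "subject"
counts = [40, 22, 12, 11, 10, 9, 9, 8, 5, 5, 4, 4, 4, 3, 3, 3, 3, 3,
          2, 2, 2, 2, 1, 1, 1, 1]
zones = bradford_zones(counts)
for i, z in enumerate(zones, 1):
    print(f"zone {i}: {len(z)} journals, {sum(z)} articles")
```

With these invented counts the zones come out as 2, 6, and 18 journals, each holding roughly a third of the 170 articles -- the approximately geometric zone growth (here with ratio about 3) that Bradford's law describes. The paper's point is that the zone membership, and hence the "core" list, shifts with how the subject itself is operationalized.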
From eugene.garfield at THOMSON.COM Sat Jun 2 16:14:17 2007
From: eugene.garfield at THOMSON.COM (Eugene Garfield)
Date: Sat, 2 Jun 2007 16:14:17 -0400
Subject: nanoscience and technology instrumentation literature
Message-ID:

TITLE: Structure of the nanoscience and nanotechnology instrumentation literature (Article, English)
AUTHOR: Kostoff, RN; Koytcheff, RG; Lau, CGY
SOURCE: CURRENT NANOSCIENCE 3 (2). MAY 2007. p.135-154 BENTHAM SCIENCE PUBL LTD, SHARJAH

ABSTRACT: The instrumentation literature associated with nanoscience and nanotechnology research was examined. About 65,000 nanotechnology records for 2005 were retrieved from the Science Citation Index/Social Science Citation Index (SCI/SSCI) [1], and about 27,000 of those were identified as instrumentation-related. All the diverse instruments were identified, and the relationships among the instruments, and among the instruments and the quantities they measure, were obtained. Metrics associated with research literatures for specific instruments/instrument groups were generated.

AUTHOR ADDRESS: RN Kostoff, Off Naval Res, 875 N Randolph St, Arlington, VA 22217 USA

From harnad at ECS.SOTON.AC.UK Sat Jun 2 20:49:19 2007
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Sun, 3 Jun 2007 01:49:19 +0100
Subject: "Academics strike back at spurious rankings" (Nature, 31 May)
Message-ID:

Academics strike back at spurious rankings
D. Butler, Nature 447, 514-515 (31 May 2007)
doi:10.1038/447514b
http://www.nature.com/nature/journal/v447/n7144/full/447514b.html

This news item in Nature lists some of the (very valid) objections to the many unvalidated university rankings -- both subjective and objective -- that are in wide use today.
These problems are all the more reason for extending Open Access (OA) and developing OA scientometrics, which will provide open, validatable and calibratable metrics for research, researchers, and institutions in each field -- a far richer, more sensitive, and more equitable spectrum of metrics than the few, weak and unvalidated measures available today. Some research groups that are doing relevant work on this are, in the UK: (1) our own OA scientometrics group at Southampton (and UQaM, Canada), and our collaborators Charles Oppenheim (Loughborough) and Arthur Sale (Tasmania); (2) Mike Thelwall (Wolverhampton); in the US: (3) Johan Bollen & Herbert van de Sompel at LANL; and in the Netherlands: (4) Henk Moed & Anton van Raan (Leiden; cited in the Nature news item).

Below are excerpts from the Nature article, followed by some references.

Universities seek reform of ratings.
http://www.nature.com/nature/journal/v447/n7144/full/447514b.html

[A] group of US colleges [called for a] boycott [of] the most influential university ranking in the United States... Experts argue that these are based on dubious methodology and spurious data, yet they have huge influence... "All current university rankings are flawed to some extent; most, fundamentally."

The rankings in the U.S. News & World Report and those published by the British Times Higher Education Supplement (THES) depend heavily on surveys of thousands of experts -- a system that some contest. A third popular ranking, by Jiao Tong University in Shanghai, China, is based on more quantitative measures, such as citations, numbers of Nobel prizewinners and publications in Nature and Science. But even these measures are not straightforward. Thomson Scientific's ISI citation data are notoriously poor for use in rankings; names of institutions are spelled differently from one article to the next, and university affiliations are sometimes omitted altogether. After cleaning up ISI data on all UK papers for such effects...
the true number of papers from the University of Oxford, for example, [was] 40% higher than listed by ISI... Researchers at Leiden University in the Netherlands have similarly recompiled the ISI database for 400 universities: half a million papers per year. Their system produces various rankings based on different indicators. One, for example, weights citations on the basis of their scientific field, so that a university that does well in a heavily cited field doesn't get an artificial extra boost. The German Center for Higher Education Development (CHE) also offers rankings... for almost 300 German, Austrian and Swiss universities... the CHE is expanding the system to cover all of Europe. The US Commission on the Future of Higher Education is considering creating a similar public database, which would offer competition to the U.S. News & World Report.

---------------------------------------------------------------------------

Bollen, Johan and Herbert Van de Sompel (2006) Mapping the structure of science through usage. Scientometrics, 69(2). http://dx.doi.org/10.1007/s11192-006-0151-8

Hardy, R., Oppenheim, C., Brody, T. and Hitchcock, S. (2005) Open Access Citation Information. http://eprints.ecs.soton.ac.uk/11536/

Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35. http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In Jacobs, N. (Ed.), Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos. http://eprints.ecs.soton.ac.uk/12453/

Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. Invited Keynote, 11th Annual Meeting of the International Society for Scientometrics and Informetrics.
Madrid, Spain, 25 June 2007. http://arxiv.org/abs/cs.IR/0703131

Kousha, Kayvan and Thelwall, Mike (2006) Google Scholar Citations and Google Web/URL Citations: A Multi-Discipline Exploratory Analysis. In Proceedings of the International Workshop on Webometrics, Informetrics and Scientometrics & Seventh COLLNET Meeting, Nancy (France). http://eprints.rclis.org/archive/00006416/

Moed, H.F. (2005) Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer.

van Raan, A. (2007) Bibliometric statistical properties of the 100 largest European universities: prevalent scaling rules in the science system. Journal of the American Society for Information Science and Technology. http://www.cwts.nl/Cwts/Stat4AX-JASIST.pdf

Stevan Harnad
AMERICAN SCIENTIST OPEN ACCESS FORUM:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
To join or leave the Forum or change your subscription address:
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html

UNIVERSITIES: If you have adopted or plan to adopt an institutional policy of providing Open Access to your own research article output, please describe your policy at: http://www.eprints.org/signup/sign.php

UNIFIED DUAL OPEN-ACCESS-PROVISION POLICY:
BOAI-1 ("green"): Publish your article in a suitable toll-access journal http://romeo.eprints.org/
OR
BOAI-2 ("gold"): Publish your article in an open-access journal if/when a suitable one exists. http://www.doaj.org/
AND in BOTH cases self-archive a supplementary version of your article in your institutional repository.
http://www.eprints.org/self-faq/
http://archives.eprints.org/
http://openaccess.eprints.org/

From loet at LEYDESDORFF.NET Sun Jun 3 03:33:20 2007
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Sun, 3 Jun 2007 09:33:20 +0200
Subject: "Academics strike back at spurious rankings" (Nature, 31 May)
In-Reply-To:
Message-ID:

> "All current university rankings are flawed to some extent; most,
> fundamentally,"

The problem is that institutions are not the right unit of analysis for the bibliometric comparison, because citation and publication practices vary among disciplines and specialties. Universities are mixed bags. Our Leiden colleagues try to correct for this by normalizing on the journal set which the group itself uses, but one can also ask whether the group is using the best possible set given its research profile. Should one not first determine a journal set and then compare groups within it?

Furthermore, Brewer et al. (2001) made the point that one should also distinguish between prestige and reputation. Reputation is field-specific; prestige is more historical. (Brewer, D. J., Gates, S. M., & Goldman, C. A. (2001). In Pursuit of Prestige: Strategy and Competition in U.S. Higher Education. Piscataway, NJ: Transaction Publishers, Rutgers University.)

Many of the evaluating teams are institutionally dependent on the contracts for the evaluations. Quis custodiet ipsos custodes?

With best wishes,
Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ From harnad at ECS.SOTON.AC.UK Sun Jun 3 07:56:55 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Sun, 3 Jun 2007 12:56:55 +0100 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: <002201c7a5b1$77caead0$1302a8c0@loet> Message-ID: On Sun, 3 Jun 2007, Loet Leydesdorff wrote: > > "All current university rankings are flawed to some extent; most, > > fundamentally," > > The problem is that institutions are not the right unit of analysis for the > bibliometric comparison because citation and publication practices vary > among disciplines and specialties. Universities are mixed bags. Yes and no. It is correct that the right unit of analysis is the field or even subfield of the research being compared. But it is also true that in comparing universities one is also comparing their field and subfield coverage. The general way to approach this problem is with a rich and diverse set of predictor metrics, in a joint multiple regression equation that can adjust the weightings of each depending on the field, and on the use to which the spectrum of metrics is being put: There can, for example, be "discipline coverage" metrics (from narrow to wide) as well as "field size" and "institutional size" metrics, whose regression weights can be adjusted depending on what it is that the equation is being used to predict, and hence to rank. The differential weightings can be validated against other means of ranking (including expert judgments). Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. Invited Keynote, 11th Annual Meeting of the International Society for Scientometrics and Informetrics. 
Madrid, Spain, 25 June 2007. http://arxiv.org/abs/cs.IR/0703131

> Our Leiden colleagues try to correct for this by normalizing on the journal
> set which the group uses itself, but one can also ask whether the group is
> using the best possible set given its research profile. Should one not first
> determine a journal set and then compare groups within it?

The three things that are needed are (1) a far richer and more diverse set of potential metrics, (2) assurance that like is being compared with like, and (3) validation of the rankings against face-valid external criteria, so that the metrics can eventually function as benchmarks and norms. None of this can be done a priori; the methodology is similar to the methodology for validating batteries of psychometric or biometric tests: correlate the joint set of metrics with external, face-valid criteria, and adjust their respective weights accordingly.

It is unlikely, however, that the relevant and predictive frame of reference and basis of comparison will be journal sets. Breadth/narrowness of journal coverage is just one among many, many potential parameters. The interest is in comparing researchers and research groups or institutions, within or across fields. The journal does carry some predictive and normative power in this, and it is one indirect way of equating for field, but it is only one among many ways that one might wish to weight -- or equate -- metrics, particularly in an Open Access database in which all journals, all individual articles and researchers, and their respective download, citation, co-citation, hub/authority, consanguinity, chronometric, and many other metrics are all available for weighting, equating, and validating.

What we have to remember is that the imminent Open Access (OA) world is incomparably wider and richer -- and more open -- than the narrow, impoverished classical-ISI world to which we were constrained in the Closed Access paper-based era.
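The weighting-and-validation methodology described here can be illustrated mechanically. The sketch below (all numbers invented) standardizes a small battery of metrics for a handful of research groups, fits the metric weights against an external face-valid criterion (e.g. expert scores) by least squares, and reports how well the weighted combination tracks the criterion. It shows the mechanics only, not a validated instrument; real validation would require many more units and held-out data.

```python
import numpy as np

# rows = research groups; columns = metrics (e.g. citations, downloads, h-index)
# -- all values invented for illustration
X = np.array([
    [120.0, 3000.0, 18.0],
    [ 80.0, 5000.0, 12.0],
    [ 40.0, 1000.0,  9.0],
    [200.0, 7000.0, 25.0],
    [ 10.0,  500.0,  3.0],
])
# external, face-valid criterion (e.g. expert panel scores), also invented
expert_score = np.array([6.5, 5.8, 3.9, 9.1, 1.5])

# standardize each metric so the fitted weights are comparable across scales
Z = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([Z, np.ones(len(Z))])   # add an intercept column

# fit the metric weights against the external criterion
weights, *_ = np.linalg.lstsq(A, expert_score, rcond=None)
predicted = A @ weights

print("fitted weights:", np.round(weights, 3))
print("correlation with criterion:",
      round(float(np.corrcoef(predicted, expert_score)[0, 1]), 3))
```

In the validation phase one would also cross-validate: refit the weights with each unit held out in turn, and check that the held-out unit's predicted score still lands in the right region of the ranking. Weights could likewise be refitted per field, which is the proposed way of letting the "discipline coverage" and "field size" metrics adjust the comparison.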
> Furthermore, Brewer et al. (2001) made the point that one should also
> distinguish between prestige and reputation. Reputation is field specific;
> prestige is more historical. (Brewer, D. J., Gates, S. M., & Goldman, C. A.
> (2001). In Pursuit of Prestige: Strategy and Competition in U.S. Higher
> Education. Piscataway, NJ: Transaction Publishers, Rutgers University.)

This is still narrow journal- and journal-average-centred thinking. Yes, journals will still be the entities in which papers are published, and journals will vary both in their field of coverage and their quality, and this can and will be taken into account. But those variables constitute only a small fraction of OA scientometric and semiometric space.

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In Jacobs, N. (Ed.), Open Access: Key Strategic, Technical and Economic Aspects. Chandos. http://eprints.ecs.soton.ac.uk/12453/

> Many of the evaluating teams are institutionally dependent on the contracts
> for the evaluations. Quis custodiet ipsos custodes?

OA itself is transparency's, diversity's and equitability's best defender.

Stevan Harnad
AMERICAN SCIENTIST OPEN ACCESS FORUM:
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/

From loet at LEYDESDORFF.NET Sun Jun 3 08:24:37 2007
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Sun, 3 Jun 2007 14:24:37 +0200
Subject: "Academics strike back at spurious rankings" (Nature, 31 May)
In-Reply-To:
Message-ID:

OK. Let's assume that we need a structural equation model in which journals are one of the predictive variables. Since one wishes (in the Nature article) to compare Oxford and Cambridge with Lausanne and Leiden, nation should be another independent variable. You also wish to take expert judgement (peer review) as a predictor? But what would be the dependent (predicted) variable?

With best wishes,
Loet

________________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20-525 6598; fax: +31-20-525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/

> -----Original Message-----
> From: ASIS&T Special Interest Group on Metrics
> [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stevan Harnad
> Sent: Sunday, June 03, 2007 1:57 PM
> To: SIGMETRICS at listserv.utk.edu
> Subject: Re: [SIGMETRICS] "Academics strike back at spurious rankings" (Nature, 31 May)
>
> [...]
From harnad at ECS.SOTON.AC.UK Sun Jun 3 09:20:34 2007
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Sun, 3 Jun 2007 14:20:34 +0100
Subject: "Academics strike back at spurious rankings" (Nature, 31 May)
In-Reply-To: <00b201c7a5da$279c8090$1302a8c0@loet>
Message-ID:

On Sun, 3 Jun 2007, Loet Leydesdorff wrote:

> OK. Let's assume that we need a structural equation model in which journals
> are one of the predictive variables. Since one wishes (in the Nature
> article) to compare Oxford and Cambridge with Lausanne and Leiden, nation
> should be another independent variable. You also wish to take expert
> judgement (peer review) as a predictor?
>
> But what would be the dependent (predicted) variable?

In the validation phase of developing the metric equation, one of the external criteria to use is human rankings. That is what we will be doing in our analyses of the UK 2008 metric RAE rankings and their relation to the parallel panel-review rankings. But that is not really "peer review." Peer review is done by journals, and its outcome is acceptance or non-acceptance at that journal's level in the journal-quality (hence peer-review) hierarchy.

Other ways to validate metrics of course include cross-validating them against other (validated) metrics and criteria. But the objective is to develop weighted sets of metrics that have been validated and can then provide norms and benchmarks, as well as serving as autonomous predictors in their own right.
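The validation step described here -- comparing a metric-based ranking against a parallel panel ranking -- amounts to computing a rank correlation between the two. A minimal sketch, with invented scores (`metric_scores` and `panel_scores` below are not real RAE data):

```python
def rankdata(values):
    """Ranks (1 = smallest); tied values receive the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                          # extend over a run of ties
        avg = (i + j) / 2 + 1               # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# invented scores for six departments
metric_scores = [14.2, 9.1, 22.5, 5.0, 17.3, 11.8]   # e.g. weighted metric battery
panel_scores  = [3.0,  2.0,  3.5,  1.0,  4.0,  2.5]  # e.g. panel review grades
print(round(spearman(metric_scores, panel_scores), 3))
```

With these invented scores the coefficient comes out at about 0.943: the two rankings agree except for one swapped pair. A high rank correlation across departments is the kind of evidence that would let a weighted metric battery stand in for, and eventually supplement, the panel rankings.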
Stevan Harnad

> > -----Original Message-----
> > From: ASIS&T Special Interest Group on Metrics
> > [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stevan Harnad
> > Sent: Sunday, June 03, 2007 1:57 PM
> > Subject: Re: [SIGMETRICS] "Academics strike back at spurious rankings" (Nature, 31 May)
> >
> > [...]
http://eprints.ecs.soton.ac.uk/12453/ > > > > > Many of the evaluating teams are institutionally dependent > > on the contracts > > > for the evaluations. Quis custodies custodes? > > > > OA itself is transparency's, diversity's and equitability's > > best defender. > > > > Stevan Harnad > > AMERICAN SCIENTIST OPEN ACCESS FORUM: > > http://amsci-forum.amsci.org/archives/American-Scientist-Open- > > Access-Forum.html > > http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/ > > > > UNIVERSITIES and RESEARCH FUNDERS: > > If you have adopted or plan to adopt an policy of providing > > Open Access > > to your own research article output, please describe your policy at: > > http://www.eprints.org/signup/sign.php > > http://openaccess.eprints.org/index.php?/archives/71-guid.html > > http://openaccess.eprints.org/index.php?/archives/136-guid.html > > > > OPEN-ACCESS-PROVISION POLICY: > > BOAI-1 ("Green"): Publish your article in a suitable > > toll-access journal > > http://romeo.eprints.org/ > > OR > > BOAI-2 ("Gold"): Publish your article in an open-access > > journal if/when > > a suitable one exists. > > http://www.doaj.org/ > > AND > > in BOTH cases self-archive a supplementary version of your article > > in your own institutional repository. > > http://www.eprints.org/self-faq/ > > http://archives.eprints.org/ > > http://openaccess.eprints.org/ > > > From loet at LEYDESDORFF.NET Sun Jun 3 09:35:32 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sun, 3 Jun 2007 15:35:32 +0200 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: Message-ID: OK: The validation is the measure. Thus, we would take a number of predictive variables x1, x2, etc. (e.g., journal impact factors, total citations, number of publications, nation (!), etc.) and then fit the outcome to the expert opinions (y) so that: y = a * x1 + b * x2 + c * x3 + ..... Is that the idea of the sophisticated (OA) scientometrics? 
With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/

From harnad at ECS.SOTON.AC.UK Sun Jun 3 11:07:13 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Sun, 3 Jun 2007 16:07:13 +0100 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: <00c801c7a5e4$0fc78910$1302a8c0@loet> Message-ID: On Sun, 3 Jun 2007, Loet Leydesdorff wrote: > OK: The validation is the measure. No, the validation is something that one does in order to establish the reliability and predictive power of one's measures (metrics).
One takes the measure (or measures) one seeks to validate and first tests their internal reliability (by autocorrelation) and then their external validity, by testing their correlation with an external criterion or face-valid measure of what one is trying to measure or predict. If one wished to validate barometric pressure as a predictor of rain, one would first check its reliability (does it give the same value if measured repeatedly?) and then, if it is reliable, check its validity (how closely does it correlate with subsequent rainfall?). Once its validity is established, one can use pressure to predict subsequent rain. In the case of OA metrics, the idea is not to validate merely one predictor metric, but a weighted battery of diverse metrics, for greater joint predictive power. These not only have to be validated against face-valid human criteria or other validated metrics, but they have to be validated field by field, and application by application (depending on what criteria one is trying to predict and evaluate). And multiple regression determines what percentage of the variance in the criterion each predictor metric accounts for. > Thus, we would take a number of predictive variables x1, x2, etc. (e.g., > journal impact factors, total citations, number of publications, nation (!), > etc.) and then fit the outcome to the expert opinions (y) so that: > > y = a * x1 + b * x2 + c * x3 + ..... > > Is that the idea of the sophisticated (OA) scientometrics? Not quite. "Nation" is not a predictor variable, any more than individual or institution is: they are just potential contexts for comparison, if we were interested in comparing nations (or institutions, or individuals). And OA scientometrics are not sophisticated in the sense of depending on new, sophisticated statistical techniques -- multiple regression, after all, is quite classical -- but in being based on a far larger and richer set of metrics than classical citation analysis, thanks to the OA database.
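[Editorial aside: the regression-and-validation scheme discussed above -- fitting Leydesdorff's equation y = a*x1 + b*x2 + c*x3 + ... to an external criterion and asking how much criterion variance the weighted metrics account for -- can be sketched numerically. Everything here is simulated and hypothetical: the metric names, the number of research groups, and the "panel ranking" criterion are illustrative, not actual RAE or OA data.]

```python
import random

random.seed(0)

# Hypothetical metric battery for 40 research groups (illustrative only):
# the four columns might be citations, downloads, h-index, co-citations.
n, k = 40, 4
X = [[random.random() for _ in range(k)] for _ in range(n)]

# Face-valid external criterion (e.g., panel rankings), simulated here
# as a noisy weighted combination of the metrics.
true_w = [0.5, 0.3, 0.15, 0.05]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.02)
     for row in X]

# Ordinary least squares for y = a*x1 + b*x2 + ... via the normal
# equations (X'X) w = X'y.
xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
       for a in range(k)]
xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting on [A | b].
    A = [row[:] + [v] for row, v in zip(A, b)]
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    return [A[i][m] / A[i][i] for i in range(m)]

w = solve(xtx, xty)

# Validation: what proportion of the criterion's variance does the
# weighted battery of metrics jointly account for (R squared)?
mean_y = sum(y) / n
pred = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
print([round(v, 2) for v in w], round(r_squared, 3))
```

In a real validation one would, as the thread stresses, repeat this field by field and cross-validate the fitted weights against other criteria before using them as norms or benchmarks.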
Stevan Harnad From loet at LEYDESDORFF.NET Sun Jun 3 11:24:59 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sun, 3 Jun 2007 17:24:59 +0200 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: Message-ID: > And OA scientometrics are not sophisticated in the sense of depending on > new, sophisticated statistical techniques -- multiple regression, after > all, is quite classical -- but in being based on a far larger and richer > set of metrics than classical citation analysis, thanks to the OA database. > > Stevan Harnad > Yes, I agree that multiple regression is a classical technique. But one needs a dependent variable in that case which can be operationalized. Unlike the case of barometric pressure, we don't have an objective measure, but the standard has to be constructed. All the validated measures seem predictors (independent variables) to me when one thinks within the model of multiple regression. What do you propose as the predicted variable? With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated. 385 pp.; US$ 18.95 From harnad at ECS.SOTON.AC.UK Sun Jun 3 13:42:55 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Sun, 3 Jun 2007 18:42:55 +0100 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: <001201c7a5f3$59ec68d0$1302a8c0@loet> Message-ID: On Sun, 3 Jun 2007, Loet Leydesdorff wrote: > Yes, I agree that multiple regression is a classical technique. But one > needs a dependent variable in that case which can be operationalized. Unlike > the case of barometric pressure, we don't have an objective measure, but the > standard has to be constructed. Loet, we are beginning to repeat ourselves. 
I said that in the case of weather forecasting, the barometric pressure is the independent (predictor) variable and rain is the dependent (predicted) variable. We first validate pressure as a predictor of rain, against rain itself, and then once pressure is shown to correlate highly enough with rain, we plan our picnics based on pressure, without having to wait for them to be rained on. The same is true with scientometrics. We take our battery of independent variables -- the many candidate metrics -- and we do a multiple regression on a criterion, the dependent variable, first to validate them. In the example I gave, the dependent variable is the RAE panel rankings. Once we validate our predictor metrics (by field), we can then give top-sliced research funding (in the UK dual-funding system) without having to waste the time and energies of the panelists. > All the validated measures seem predictors (independent variables) to me > when one thinks within the model of multiple regression. What do you propose > as the predicted variable? See above. Stevan Harnad From loet at LEYDESDORFF.NET Sun Jun 3 15:50:04 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sun, 3 Jun 2007 21:50:04 +0200 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: Message-ID: OK: that is a clear answer. The multiple regression serves to explain the RAE ratings. Eventually, you may wish to build an expert system which makes it possible to generate the RAE ratings without the panelists. Thank you for the clarification. (I had not clearly understood this from the previous mailings.) 
Best wishes, Loet
From loet at LEYDESDORFF.NET Mon Jun 4 02:09:26 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Mon, 4 Jun 2007 08:09:26 +0200 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: Message-ID: PS. 1. Explaining the RAE ratings does not inform us about their validity! It only informs us about how they are constructed and can perhaps be automated. 2. If it works with multiple regression analysis after proper normalization and validation, the validation of the parameters would not be expected to hold for the next round, because there is a feedback loop ("learning") involved (in addition to the problem of auto-correlations between the rankings). Best, Loet

From isidro at CINDOC.CSIC.ES Tue Jun 5 06:34:09 2007 From: isidro at CINDOC.CSIC.ES (Isidro F. Aguillo) Date: Tue, 5 Jun 2007 12:34:09 +0200 Subject: "Academics strike back at spurious rankings" (Nature, 31 May) In-Reply-To: <00b201c7a5da$279c8090$1302a8c0@loet> Message-ID: Dear all: As our ranking (Webometrics) is listed in the Nature paper, we wish to add some points to the debate: - Rankings have been very successful in certain areas. The original purpose of the Chinese ranking was to help students choose foreign universities, and from our own data this is still its major use, so there is clearly a gap to be filled. In another area, the Shanghai data are probably behind the major reorganization of the French university system in 2006.
As the Web data includes larger number of institutions in developing countries we have noticed similar debates in the Middle East and South East Asia. - Our main aim for preparing the Web ranking was not to classify institutions but to encourage Web publication, even farther than the current OA initiatives as they are focused on the formal scholar publication and we call for open archives of raw data, teaching material, multimedia resources, software and other academic and para-academic material. It was a great surprise to discover that there is already a big academic digital divide in web contents that affects not only to developing regions but to many European countries - Taking into account the naming system of the Web domains, the institution is a "natural" unit in the webometric analysis. An additional advantage is that webpages reflect a lot of more activities that only scientific publications but unfortunately in a way that is difficult to discriminate specific contributions. As the evaluation of universities should consider other aspects than research output, Web indicators could be combined with other indicators as we intend to do in the near future. A new edition of our ranking (http://www.webometrics.info) covering over 4000 universities worldwide is scheduled for the July. Any comments and suggestions are welcomed. Best regards, Loet Leydesdorff escribi?: > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > OK. Let's assume that we need a structural equation model in which journals > are one of the predictive variables. Since one wishes (in the Nature > article) to compare Oxford and Cambridge with Lausanne and Leiden, nation > should be another independent variable. You also wish to take expert > judgement (peer review) as a predictor? > > But what would be the dependent (predicted) variable? 
> > With best wishes, > > > Loet > > ________________________________ > > Loet Leydesdorff > Amsterdam School of Communications Research (ASCoR), > Kloveniersburgwal 48, 1012 CX Amsterdam. > Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 > loet at leydesdorff.net ; http://www.leydesdorff.net/ > > > > >> -----Original Message----- >> From: ASIS&T Special Interest Group on Metrics >> [mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stevan Harnad >> Sent: Sunday, June 03, 2007 1:57 PM >> To: SIGMETRICS at listserv.utk.edu >> Subject: Re: [SIGMETRICS] "Academics strike back at spurious >> rankings" (Nature, 31 May) >> >> Adminstrative info for SIGMETRICS (for example unsubscribe): >> http://web.utk.edu/~gwhitney/sigmetrics.html >> >> On Sun, 3 Jun 2007, Loet Leydesdorff wrote: >> >> >>>> "All current university rankings are flawed to some >>>> >> extent; most, >> >>>> fundamentally," >>>> >>> The problem is that institutions are not the right unit of >>> >> analysis for the >> >>> bibliometric comparison because citation and publication >>> >> practices vary >> >>> among disciplines and specialties. Universities are mixed bags. >>> >> Yes and no. It is correct that the right unit of analysis is >> the field or even >> subfield of the research being compared. But it is also true >> that in comparing >> universities one is also comparing their field and subfield coverage. >> >> The general way to approach this problem is with a rich and >> diverse set of >> predictor metrics, in a joint multiple regression equation >> that can adjust the >> weightings of each depending on the field, and on the use to which the >> spectrum of metrics is being put: There can, for example, be >> "discipline >> coverage" metrics (from narrow to wide) as well as "field size" and >> "institutional size" metrics, whose regression weights can be adjusted >> depending on what it is that the equation is being used to predict, >> and hence to rank. 
The differential weightings can be >> validated against >> other means of ranking (including expert judgments). >> >> Harnad, S. (2007) Open Access Scientometrics and the UK Research >> Assessment Exercise. Invited Keynote, 11th Annual Meeting of the >> International Society for Scientometrics and Informetrics. Madrid, >> Spain, 25 June 2007 http://arxiv.org/abs/cs.IR/0703131 >> >> >>> Our Leiden colleagues try to correct for this by >>> >> normalizing on the journal >> >>> set which the group uses itself, but one can also ask >>> >> whether the group is >> >>> using the best possible set given its research profile. >>> >> Should one not first >> >>> determine a journal set and then compare groups within it? >>> >> The three things that are needed are (1) a far richer and >> more diverse set of >> potential metrics, (2) insurance that like is being compared >> with like, and (3) >> validation of the ranking against face-valid external >> criteria, so that the >> metrics can eventually function as benchmarks and norms. >> >> None of this can be done a priori; the methodology is similar to the >> methodology of validating batteries of psychometric or >> biometric tests: >> Correlate the joint set of metrics with external, face-valid >> criteria, and >> adjust their respective weights accordingly. >> >> It is unlikely, however, that the relevant and predictive frame of >> reference and basis of comparison will be journal sets. >> Breadth/narrowness >> of journal coverage is just one among many, many potential >> parameters. The >> interest is in comparing researchers and research groups or >> institutions, >> within or across fields. 
The journal does carry some predictive and >> normative power in this, and it is one indirect way of >> equating for field, >> but it is one among many ways that one might wish to weight >> -- or equate >> -- metrics, particularly in an Open Access database in which >> all journals >> (and all individual articles and all individual researchers, and their >> respective download, citation, co-citation, hub/authority, >> consanguinity, >> chronometric, and many other metrics are all available for weighting, >> equating, and validating). >> >> What we have to remember is that the imminent Open Access (OA) world >> is incomparably wider and richer -- and more open -- than the narrow, >> impoverished classical-ISI world to which we were constrained in the >> Closed Access paper-based era. >> >> >>> Furthermore, Brewer et al. (2001) made the point that one >>> >> should also >> >>> distinguish between prestige and reputation. Reputation is >>> >> field specific; >> >>> prestige is more historical. (Brewer, D. J., Gates, S. M., >>> >> & Goldman, C. A. >> >>> (2001). In Pursuit of Prestige: Strategy and Competition in >>> >> U.S. Higher >> >>> Education. Piscataway, NJ: Transaction Publishers, Rutgers >>> >> University.) >> >> This is still narrow journal- and journal-average-centred >> thinking. Yes, >> journals will still be the entities in which papers are >> published, and journals >> will vary both in their field of coverage and their quality, >> and this can and >> will be taken into account. But those variables constitute >> only a small fraction >> of OA scientometric and semiometric space. >> >> Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open >> Research Web: A Preview of the Optimal and the >> Inevitable, in Jacobs, >> N., Eds. Open Access: Key Strategic, Technical and >> Economic Aspects, >> Chandos. 
>> http://eprints.ecs.soton.ac.uk/12453/
>>
>>> Many of the evaluating teams are institutionally dependent on the
>>> contracts for the evaluations. Quis custodiet custodes?
>>
>> OA itself is transparency's, diversity's and equitability's best
>> defender.
>>
>> Stevan Harnad
>> AMERICAN SCIENTIST OPEN ACCESS FORUM:
>> http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
>> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
>>
>> UNIVERSITIES and RESEARCH FUNDERS:
>> If you have adopted or plan to adopt a policy of providing Open Access
>> to your own research article output, please describe your policy at:
>> http://www.eprints.org/signup/sign.php
>> http://openaccess.eprints.org/index.php?/archives/71-guid.html
>> http://openaccess.eprints.org/index.php?/archives/136-guid.html
>>
>> OPEN-ACCESS-PROVISION POLICY:
>> BOAI-1 ("Green"): Publish your article in a suitable toll-access journal
>> http://romeo.eprints.org/
>> OR
>> BOAI-2 ("Gold"): Publish your article in an open-access journal if/when
>> a suitable one exists.
>> http://www.doaj.org/
>> AND
>> in BOTH cases self-archive a supplementary version of your article
>> in your own institutional repository.
>> http://www.eprints.org/self-faq/
>> http://archives.eprints.org/
>> http://openaccess.eprints.org/

__________ NOD32 information, revision 2308 (20070604) __________
This message has been checked by the NOD32 antivirus system
http://www.nod32.com

--
***************************************
Isidro F. Aguillo
isidro @ cindoc.csic.es
Ph:(+34) 91-5635482 ext. 313

Cybermetrics Lab
CINDOC-CSIC
Joaquin Costa, 22
28002 Madrid.
SPAIN

www.webometrics.info
www.cindoc.csic.es/cybermetrics
internetlab.cindoc.csic.es
****************************************

From harnad at ECS.SOTON.AC.UK Tue Jun 5 08:29:27 2007
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Tue, 5 Jun 2007 13:29:27 +0100
Subject: "Academics strike back at spurious rankings" (Nature, 31 May)
In-Reply-To: <46653C21.8080408@cindoc.csic.es>
Message-ID:

The message below is from Isidro Aguillo, the Scientific Director of the
Laboratory of Quantitative Studies of the Internet
http://internetlab.cindoc.csic.es/miembros.asp?id=1
of the Centre for Scientific Information and Documentation
http://www.cindoc.csic.es/eng/info/infobjetivos.html
Spanish National Research Council
http://www.csic.es/index.do
and editor of Cybermetrics, the International Journal of Scientometrics,
Informetrics and Bibliometrics
http://www.cindoc.csic.es/cybermetrics/

Dr. Aguillo makes the very valid point (in response to Declan Butler's
Nature news article about the use of unvalidated university rankings)
http://www.nature.com/nature/journal/v447/n7144/full/447514b.html
that web metrics provide new and potentially useful information not
available elsewhere. This is certainly true, and web metrics should
certainly be among the metrics that are included in the multiple
regression equation that should be tested and validated in order to
weight each of the candidate component metrics and to develop norms and
benchmarks for reliable widespread use in ranking and evaluation.

Among other potentially useful sources of candidate metrics are:

University Metrics: http://www.universitymetrics.com/tiki-index.php
Harzing's Google-Scholar-based metrics: http://www.harzing.com/pop.htm
Citebase: http://citebase.eprints.org/
Citeseer: http://citeseer.ist.psu.edu/
and of course Google Scholar itself:
http://scholar.google.com/advanced_scholar_search?hl=en&lr=

Stevan Harnad

On Tue, 5 Jun 2007, Isidro F.
Aguillo wrote:

> Dear all:
>
> As our ranking (Webometrics) is listed in the Nature paper, we wish to
> add some points to the debate:
>
> - Rankings have been very successful in certain areas. The original
> purpose of the Chinese ranking was to help students choose foreign
> universities, and from our own data this is still the major use, so
> there is clearly a gap to be filled. In another area, the Shanghai
> data are probably behind the major reorganization of the French
> university system in 2006. As the Web data include a larger number of
> institutions in developing countries, we have noticed similar debates
> in the Middle East and South East Asia.
>
> - Our main aim in preparing the Web ranking was not to classify
> institutions but to encourage Web publication, going even further than
> the current OA initiatives, which are focused on formal scholarly
> publication: we call for open archives of raw data, teaching material,
> multimedia resources, software and other academic and para-academic
> material. It was a great surprise to discover that there is already a
> big academic digital divide in web contents, one that affects not only
> developing regions but also many European countries.
>
> - Taking into account the naming system of Web domains, the
> institution is a "natural" unit in webometric analysis. An additional
> advantage is that webpages reflect many more activities than
> scientific publications alone, although unfortunately in a way that
> makes it difficult to discriminate specific contributions. As the
> evaluation of universities should consider aspects other than research
> output, Web indicators could be combined with other indicators, as we
> intend to do in the near future.
>
> A new edition of our ranking (http://www.webometrics.info) covering
> over 4000 universities worldwide is scheduled for July. Any comments
> and suggestions are welcome.
>
> Best regards,
> ***************************************
> Isidro F.
Aguillo > isidro -- cindoc.csic.es > Ph:(+34) 91-5635482 ext. 313 > > Cybermetrics Lab > CINDOC-CSIC > Joaquin Costa, 22 > 28002 Madrid. SPAIN > > www.webometrics.info > www.cindoc.csic.es/cybermetrics > internetlab.cindoc.csic.es > **************************************** > > From: Stevan Harnad > To: AMERICAN-SCIENTIST-OPEN-ACCESS-FORUM at LISTSERVER.SIGMAXI.ORG > Subject: "Academics strike back at spurious rankings" (Nature, 31 May) http://openaccess.eprints.org/index.php?/archives/251-guid.html > Academics strike back at spurious rankings > D Butler, Nature 447, 514-515 (31 May 2007) doi:10.1038/447514b > http://www.nature.com/nature/journal/v447/n7144/full/447514b.html > > This news item in Nature lists some of the (very valid) objections to the > many unvalidated university rankings -- both subjective and objective -- > that are in wide use today. > > These problems are all the more reason for extending Open Access (OA) > and developing OA scientometrics, which will provide open, validatable > and calibratable metrics for research, researchers, and institutions in > each field -- a far richer, more sensitive, and more equitable spectrum > of metrics than the few, weak and unvalidated measures available today. > > Some research groups that are doing relevant work on this are, in the UK: > (1) our own OA scientometrics group at Southampton (and UQaM, Canada), > and our collaborators Charles Oppenheim (Loughborough) and Arthur Sale > (Tasmania); (2) Mike Thelwall (Wolverhampton); in the US: (3) Johan > Bollen & Herbert van de Sompel at LANL; and in the Netherlands: (5) > Henk Moed & Anton van Raan (Leiden; cited in the Nature news item). > > Below are excerpts from the Nature article, followed by some references. > > Universities seek reform of ratings. > http://www.nature.com/nature/journal/v447/n7144/full/447514b.html > > [A] group of US colleges [called for a] boycott [of] the most > influential university ranking in the United States... 
Experts argue
> that these are based on dubious methodology and spurious data, yet
> they have huge influence...
>
> "All current university rankings are flawed to some extent; most,
> fundamentally,"
>
> The rankings in the U.S. News & World Report and those published by
> the British Times Higher Education Supplement (THES) depend heavily
> on surveys of thousands of experts - a system that some contest. A
> third popular ranking, by Jiao Tong University in Shanghai, China,
> is based on more quantitative measures, such as citations, numbers
> of Nobel prizewinners and publications in Nature and Science. But
> even these measures are not straightforward.
>
> Thomson Scientific's ISI citation data are notoriously poor for
> use in rankings; names of institutions are spelled differently from
> one article to the next, and university affiliations are sometimes
> omitted altogether. After cleaning up ISI data on all UK papers for
> such effects... the true number of papers from the University of
> Oxford, for example, [was] 40% higher than listed by ISI...
>
> Researchers at Leiden University in the Netherlands have similarly
> recompiled the ISI database for 400 universities: half a million
> papers per year. Their system produces various rankings based on
> different indicators. One, for example, weights citations on the
> basis of their scientific field, so that a university that does well
> in a heavily cited field doesn't get an artificial extra boost.
>
> The German Center for Higher Education Development (CHE) also offers
> rankings... for almost 300 German, Austrian and Swiss universities...
> the CHE is expanding the system to cover all Europe.
>
> The US Commission on the Future of Higher Education is considering
> creating a similar public database, which would offer competition
> to the U.S. News & World Report.
>
> ---------------------------------------------------------------------------
>
> Bollen, Johan and Herbert Van de Sompel.
Mapping the structure of > science through usage. Scientometrics, 69(2), 2006 > http://dx.doi.org/10.1007/s11192-006-0151-8 > > Hardy, R., Oppenheim, C., Brody, T. and Hitchcock, S. (2005) Open > Access Citation Information. > http://eprints.ecs.soton.ac.uk/11536/ > > Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) > Mandated online RAE CVs Linked to University Eprint > Archives: Improving the UK Research Assessment Exercise > whilst making it cheaper and easier. Ariadne 35. > http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm > > Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open > Research Web: A Preview of the Optimal and the Inevitable, in Jacobs, > N., Eds. Open Access: Key Strategic, Technical and Economic Aspects, > chapter 21. Chandos. http://eprints.ecs.soton.ac.uk/12453/ > > Harnad, S. (2007) Open Access Scientometrics and the UK Research > Assessment Exercise. Invited Keynote, 11th Annual Meeting of the > International Society for Scientometrics and Informetrics. Madrid, > Spain, 25 June 2007 http://arxiv.org/abs/cs.IR/0703131 > > Kousha, Kayvan and Thelwall, Mike (2006) Google Scholar Citations and > Google Web/URL Citations: A Multi-Discipline Exploratory Analysis. > In Proceedings International Workshop on Webometrics, Informetrics > and Scientometrics & Seventh COLLNET Meeting, Nancy (France). > http://eprints.rclis.org/archive/00006416/ > > Moed, H.F. (2005). Citation Analysis in Research Evaluation. > Dordrecht (Netherlands): Springer. > > van Raan, A. (2007) Bibliometric statistical properties of the 100 > largest European universities: prevalent scaling rules in the science > system. 
Journal of the American Society for Information Science and
> Technology. http://www.cwts.nl/Cwts/Stat4AX-JASIST.pdf
>
> Stevan Harnad
> AMERICAN SCIENTIST OPEN ACCESS FORUM:
> http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/
> To join or leave the Forum or change your subscription address:
> http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
>
> UNIVERSITIES: If you have adopted or plan to adopt an institutional
> policy of providing Open Access to your own research article output,
> please describe your policy at:
> http://www.eprints.org/signup/sign.php
>
> UNIFIED DUAL OPEN-ACCESS-PROVISION POLICY:
> BOAI-1 ("green"): Publish your article in a suitable toll-access journal
> http://romeo.eprints.org/
> OR
> BOAI-2 ("gold"): Publish your article in an open-access journal if/when
> a suitable one exists.
> http://www.doaj.org/
> AND
> in BOTH cases self-archive a supplementary version of your article
> in your institutional repository.
> http://www.eprints.org/self-faq/
> http://archives.eprints.org/
> http://openaccess.eprints.org/

From harnad at ECS.SOTON.AC.UK Tue Jun 5 10:32:03 2007
From: harnad at ECS.SOTON.AC.UK (Stevan Harnad)
Date: Tue, 5 Jun 2007 15:32:03 +0100
Subject: British Classification Soc post-RAE talk/discussion - 6 July (fwd)
Message-ID:

---------- Forwarded message ----------
Date: Tue, 5 Jun 2007 14:29:59 +0100 (BST)
From: Fionn Murtagh
Subject: Re: British Classification Soc post-RAE talk/discussion - 6 July

British Classification Society Meeting
"Analysis Methodologies for Post-RAE Scientometrics", and AGM
Friday 6 July 2007, International Building room IN244
Royal Holloway, University of London, Egham

The selection of appropriate and/or best data analysis methodologies is
a result of a number of issues: the overriding goals, of course, but
also the availability of well-formatted data, and ease of access to it.
The meeting will focus on the early stages of the analysis pipeline.
An aim of this meeting is to discuss data analysis methodologies in the
context of what can be considered open, objective and universal in a
metrics context of scholarly and applied research.

Les Carr and Tim Brody (Intelligence, Agents, Media group, Electronics
and Computer Science, University of Southampton):
"Open Access Scientometrics and the UK Research Assessment Exercise"

There will also be a number of short presentations, preceding the
discussion, including:

Pedro Contreras, RHUL:
"Indexing, storage, querying in a distributed document system"
Mireille Summa, Paris-Dauphine:
"Editing and clustering matrices of time series"
Fionn Murtagh, RHUL:
"The data selection, measurement and analysis chain: the role of
correspondence analysis in measurement and scaling"
Boris Mirkin, Birkbeck:
"How the ACM classification can be used for profiling research
organisations" (joint work with S. Nascimento and L. Moniz Pereira)

The AGM of the British Classification Society will follow directly.
The day's meeting will start at 10am, and finish at 4.30pm.

Registrations are necessary (there is no registration fee) to Janet
Hales at j.hales at cs.rhul.ac.uk

Further information from Fionn Murtagh at fionn at cs.rhul.ac.uk

Web address: http://thames.cs.rhul.ac.uk/bcs

From van at EMSE.FR Tue Jun 5 11:56:52 2007
From: van at EMSE.FR (T Van)
Date: Tue, 5 Jun 2007 17:56:52 +0200
Subject: UT and RecID in ISI Web of Science
In-Reply-To: <000001c7a66e$e856bb40$1302a8c0@loet>
Message-ID:

Hi everyone,

I have a question about ISI Web of Science. In ISI WoS, each article is
represented by an identifier called UT; however, there is also another
identifier called RecID. What are the differences between them? (I only
know that RecID can be used to identify cited references of a given
article.) Are they unique (i.e. for an article there is only one UT
and/or one RecID)?

Thank you in advance,
T.
Van

From loet at LEYDESDORFF.NET Tue Jun 5 14:46:05 2007
From: loet at LEYDESDORFF.NET (Loet Leydesdorff)
Date: Tue, 5 Jun 2007 20:46:05 +0200
Subject: British Classification Soc post-RAE talk/discussion - 6 July (fwd)
In-Reply-To:
Message-ID:

"Publications, journal impact factors, citations, co-citations,
citation chronometrics (age, growth, latency to peak, decay rate),
hub/authority scores, h-index, prior funding, student counts,
co-authorship scores, endogamy/exogamy, textual proximity,
download/co-downloads and their chronometrics, etc. can all be tested
and validated jointly, discipline by discipline, against their RAE
panel rankings in the forthcoming parallel panel-based and metric RAE
in 2008. The weights of each predictor can be calibrated to maximize
the joint correlation with the rankings."

Dear Stevan,

I took this from: Harnad, S. (2007) Open Access Scientometrics and the
UK Research Assessment Exercise. In Proceedings of the 11th Annual
Meeting of the International Society for Scientometrics and
Informetrics (in press), Madrid, Spain; at
http://eprints.ecs.soton.ac.uk/13804/

It is very clear now: your aim is to explain the RAE ranking (as the
dependent variable). I remain puzzled why one would wish to do so. One
can expect Type I and Type II errors in these rankings; I would expect
both to be of the order of 30% (given the literature). If you were able
to reproduce ("calibrate") these rankings using multivariate
regression, you would also reproduce the error terms.

With best wishes,
Loet

_______________________________
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam.
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681
loet at leydesdorff.net ; http://www.leydesdorff.net/

From garfield at CODEX.CIS.UPENN.EDU Tue Jun 5 16:43:09 2007
From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield)
Date: Tue, 5 Jun 2007 16:43:09 -0400
Subject: Citrome L "Impact Factor? Shmimpact Factor! The Journal Impact
Factor, Modern Day Literature Searching, and the Publication Process"
Psychiatry 2007;4(5):54-57
Message-ID:

TITLE: Impact Factor? Shmimpact Factor! The Journal Impact Factor,
Modern Day Literature Searching, and the Publication Process
AUTHOR: Leslie Citrome, MD, MPH
AUTHOR AFFILIATION: Dr. Citrome is Professor of Psychiatry, New York
University School of Medicine, and Director, Clinical Research and
Evaluation Facility, Nathan S.
Kline Institute for Psychiatric Research, Orangeburg, New York
SOURCE: Psychiatry 2007;4(5):54-57

Abstract
The journal impact factor is a measure of the citability of articles
published in that journal: the more citations generated, the more
important that article is considered to be, and as a consequence the
prestige of the journal is enhanced. The impact factor is not without
controversy, and it can be manipulated. It no longer dominates the
choices of journals to search for information. Online search engines,
such as PubMed, can locate articles of interest in seconds across
journals regardless of high or low impact factors. Editors desiring to
increase their influence will need to focus on a fast and friendly
submission and review process, early online and speedy print
publication, and encourage the rapid turnaround of high-quality peer
reviews. Authors desiring to have their results known to the world have
never had it so good: the internet permits anyone with computer access
to find the author's work.

Key Words: journal impact factor, peer review, publication, PubMed,
searching

Psychiatry 2007;4(5):54-57

From linda.butler at ANU.EDU.AU Tue Jun 5 21:03:26 2007
From: linda.butler at ANU.EDU.AU (Linda Butler)
Date: Wed, 6 Jun 2007 11:03:26 +1000
Subject: UT and RecID in ISI Web of Science
In-Reply-To: <466587C4.2050406@emse.fr>
Message-ID:

Hi Ton

The UT is a unique identifier for an article. It is present in the raw
data files we obtain from Thomson. I don't know the RecID, so can't
answer that part of your query.

Linda

At 01:56 AM 6/06/2007, you wrote:
>Administrative info for SIGMETRICS (for example unsubscribe):
>http://web.utk.edu/~gwhitney/sigmetrics.html
>
>Hi everyone,
>
>I have a question about ISI Web of Science. In ISI WoS, each article
>is represented by an identifier called UT, however there's also
>another identifier called RecID. What are differences between
>them?
(I only know that RecID can be used to identify cited
>references of a given article). Are they unique (i.e. for an
>article there is only one UT and/or one RecID)?
>
>Thank you in advance,
>T. Van

Linda Butler
Research Evaluation and Policy Project
Research School of Social Sciences
Building 9, H C Coombs Bld
The Australian National University
ACT 0200 Australia
Tel: 61 2 61252154 Fax: 61 2 61259767
http://repp.anu.edu.au

From enrique.wulff at ICMAN.CSIC.ES Wed Jun 6 07:37:03 2007
From: enrique.wulff at ICMAN.CSIC.ES ("Enrique Wulff (Cádiz. CSIC)")
Date: Wed, 6 Jun 2007 13:37:03 +0200
Subject: "Academics strike back at spurious rankings" (Nature, 31 May)
Message-ID:

Good morning all,

At the moment, the institution as the preferred unit of evaluation
seems an option bound up with this RAE-associated pedagogical
innovation. In the past, Jan Vlachy made a sincere effort to fit
criteria to the individual researcher. And preference has been assigned
to the department as an unavoidable priority, e.g. by the Andalusian
administration (http://www.ucua.es). Did this start a polemical debate
on the technical, or rather political, reasons behind the
institution-oriented decision? I do not know. Do you have some
bibliography? I look forward to your opinions.

Enrique.

http://www.ucm.es/BUCM/revistas/inf/02104210/articulos/DCIN9595110245A.PDF

At 12:34 05/06/2007, you wrote:
>Administrative info for SIGMETRICS (for example unsubscribe):
>http://web.utk.edu/~gwhitney/sigmetrics.html
>
>Dear all:
>
>As our ranking (Webometrics) is listed in the Nature paper, we wish to
>add some points to the debate:
>
>- Rankings has been very successful in certain areas. Original purpose
>of Chinese Ranking was to help students for choosing foreign
>universities and from our own data this is still the major use, so
>there is clearly a gap to be filled. In other area, Shanghai data is
>probably behind major reorganization of French university system in 2006.
>As the Web data includes larger number of institutions in developing
>countries we have noticed similar debates in the Middle East and South
>East Asia.
>
>- Our main aim for preparing the Web ranking was not to classify
>institutions but to encourage Web publication, even farther than the
>current OA initiatives as they are focused on the formal scholar
>publication and we call for open archives of raw data, teaching
>material, multimedia resources, software and other academic and
>para-academic material. It was a great surprise to discover that there
>is already a big academic digital divide in web contents that affects
>not only to developing regions but to many European countries
>
>- Taking into account the naming system of the Web domains, the
>institution is a "natural" unit in the webometric analysis. An
>additional advantage is that webpages reflect a lot of more activities
>that only scientific publications but unfortunately in a way that is
>difficult to discriminate specific contributions. As the evaluation of
>universities should consider other aspects than research output, Web
>indicators could be combined with other indicators as we intend to do
>in the near future.
>
>A new edition of our ranking (http://www.webometrics.info) covering
>over 4000 universities worldwide is scheduled for the July. Any
>comments and suggestions are welcomed.
>
>Best regards,
>
>Loet Leydesdorff escribió:
>>Administrative info for SIGMETRICS (for example unsubscribe):
>>http://web.utk.edu/~gwhitney/sigmetrics.html
>>
>>OK. Let's assume that we need a structural equation model in which
>>journals are one of the predictive variables. Since one wishes (in
>>the Nature article) to compare Oxford and Cambridge with Lausanne and
>>Leiden, nation should be another independent variable. You also wish
>>to take expert judgement (peer review) as a predictor?
>>But what would be the dependent (predicted) variable?
>>With best wishes,
>>
>>Loet
>>
>>________________________________
>>
>>Loet Leydesdorff
>>Amsterdam School of Communications Research (ASCoR),
>>Kloveniersburgwal 48, 1012 CX Amsterdam.
>>Tel.: +31-20- 525 6598; fax: +31-20- 525 3681
>>loet at leydesdorff.net ; http://www.leydesdorff.net/
>>
>>>-----Original Message-----
>>>From: ASIS&T Special Interest Group on Metrics
>>>[mailto:SIGMETRICS at listserv.utk.edu] On Behalf Of Stevan Harnad
>>>Sent: Sunday, June 03, 2007 1:57 PM
>>>To: SIGMETRICS at listserv.utk.edu
>>>Subject: Re: [SIGMETRICS] "Academics strike back at spurious
>>>rankings" (Nature, 31 May)
>>>
>>>Administrative info for SIGMETRICS (for example unsubscribe):
>>>http://web.utk.edu/~gwhitney/sigmetrics.html
>>>
>>>On Sun, 3 Jun 2007, Loet Leydesdorff wrote:
>>>
>>>>> "All current university rankings are flawed to some extent; most,
>>>>> fundamentally,"
>>>>
>>>>The problem is that institutions are not the right unit of analysis
>>>>for the bibliometric comparison because citation and publication
>>>>practices vary among disciplines and specialties. Universities are
>>>>mixed bags.
>>>
>>>Yes and no. It is correct that the right unit of analysis is the
>>>field or even subfield of the research being compared. But it is also
>>>true that in comparing universities one is also comparing their field
>>>and subfield coverage. The general way to approach this problem is
>>>with a rich and diverse set of predictor metrics, in a joint multiple
>>>regression equation that can adjust the weightings of each depending
>>>on the field, and on the use to which the spectrum of metrics is
>>>being put: There can, for example, be "discipline coverage" metrics
>>>(from narrow to wide) as well as "field size" and "institutional
>>>size" metrics, whose regression weights can be adjusted depending on
>>>what it is that the equation is being used to predict, and hence to
>>>rank.
>>>The differential weightings can be validated against other means of
>>>ranking (including expert judgments). [...]
>>>
>>>Stevan Harnad
>
>--
>***************************************
>Isidro F. Aguillo
>isidro @ cindoc.csic.es
>Ph:(+34) 91-5635482 ext. 313
>
>Cybermetrics Lab
>CINDOC-CSIC
>Joaquin Costa, 22
>28002 Madrid.
SPAIN > >www.webometrics.info >www.cindoc.csic.es/cybermetrics >internetlab.cindoc.csic.es >**************************************** From harnad at ECS.SOTON.AC.UK Wed Jun 6 13:07:23 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Wed, 6 Jun 2007 18:07:23 +0100 Subject: British Classification Soc post-RAE talk/discussion - 6 July (fwd) In-Reply-To: <005f01c7a7a1$c6d52860$1302a8c0@loet> Message-ID: On Tue, 5 Jun 2007, Loet Leydesdorff wrote: >> SH: >> "Publications, journal impact factors, citations, co-citations, citation >> chronometrics (age, growth, latency to peak, decay rate), hub/authority >> scores, h-index, prior funding, student counts, co-authorship scores, >> endogamy/exogamy, textual proximity, download/co-downloads and their >> chronometrics, etc. can all be tested and validated jointly, discipline by >> discipline, against their RAE panel rankings in the forthcoming parallel >> panel-based and metric RAE in 2008. The weights of each predictor can be >> calibrated to maximize the joint correlation with the rankings." > > Dear Steven, > > I took this from: > Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment > Exercise. In Proceedings of 11th Annual Meeting of the International Society > for Scientometrics and Informetrics (in press), Madrid, Spain; at > http://eprints.ecs.soton.ac.uk/13804/ > > It is very clear now: Your aim is to explain the RAE ranking (as the > dependent variable). I remain puzzled why one could wish to do so. One can > expect Type I and Type II errors in these rankings; I would expect both of > the order of 30% (given the literature). If you would be able to reproduce > ("calibrate") these rankings using multi-variate regression, you would also > reproduce the error terms. Dear Loet, You are quite right that the RAE panel rankings are themselves merely predictive measures, not face-valid criteria, and will hence have errors, noise and bias to varying degrees. 
But the RAE panel rankings are the only thing the RAE outcome has been based on for nearly two decades now! The objective is first to replace the expensive and time-consuming panel reviews with metrics that give roughly the same rankings. Then we can work on making the metrics even more valid and predictive. First things first: If the panel rankings have been good enough for the RAE, then metrics that give the same outcome should be at least good enough too. Being far less costly and labor-intensive and far more transparent, they are vastly to be preferred (with a much reduced panel role in validity checking and calibration). Then we can work on optimizing them. Stevan PS Of course there are additional ways of validating metrics, apart from the RAE; moreover, only the UK has the RAE. But that also makes the UK an ideal test-bed for prima facie validation of the metrics, systematically, across fields and institutions. From loet at LEYDESDORFF.NET Wed Jun 6 14:23:30 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Wed, 6 Jun 2007 20:23:30 +0200 Subject: British Classification Soc post-RAE talk/discussion - 6 July (fwd) In-Reply-To: Message-ID: I look forward to your multi-variate regression model for explaining the RAE rankings. Best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. 
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ > -----Original Message----- > From: ASIS&T Special Interest Group on Metrics > [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Stevan Harnad > Sent: Wednesday, June 06, 2007 7:07 PM > To: SIGMETRICS at LISTSERV.UTK.EDU > Subject: Re: [SIGMETRICS] British Classification Soc post-RAE > talk/discussion - 6 July (fwd) > > Adminstrative info for SIGMETRICS (for example unsubscribe): > http://web.utk.edu/~gwhitney/sigmetrics.html > > On Tue, 5 Jun 2007, Loet Leydesdorff wrote: > > >> SH: > >> "Publications, journal impact factors, citations, > co-citations, citation > >> chronometrics (age, growth, latency to peak, decay rate), > hub/authority > >> scores, h-index, prior funding, student counts, > co-authorship scores, > >> endogamy/exogamy, textual proximity, download/co-downloads > and their > >> chronometrics, etc. can all be tested and validated > jointly, discipline by > >> discipline, against their RAE panel rankings in the > forthcoming parallel > >> panel-based and metric RAE in 2008. The weights of each > predictor can be > >> calibrated to maximize the joint correlation with the rankings." > > > > Dear Steven, > > > > I took this from: > > Harnad, S. (2007) Open Access Scientometrics and the UK > Research Assessment > > Exercise. In Proceedings of 11th Annual Meeting of the > International Society > > for Scientometrics and Informetrics (in press), Madrid, Spain; at > > http://eprints.ecs.soton.ac.uk/13804/ > > > > It is very clear now: Your aim is to explain the RAE ranking (as the > > dependent variable). I remain puzzled why one could wish to > do so. One can > > expect Type I and Type II errors in these rankings; I would > expect both of > > the order of 30% (given the literature). If you would be > able to reproduce > > ("calibrate") these rankings using multi-variate > regression, you would also > > reproduce the error terms. 
> > Dear Loet, > > You are quite right that the RAE panel rankings are themselves merely > predictive measures, not face-valid criteria, and will hence have > errors, noise and bias to varying degrees. > > But the RAE panel rankings are the only thing the RAE outcome has been > based on for nearly two decades now! The objective is first to replace > the expensive and time-consuming panel reviews with metrics that give > roughly the same rankings. Then we can work on making the metrics even > more valid and predictive. > > First things first: If the panel rankings have been good enough for > the RAE, then metrics that give the same outcome should be at least > good enough too. Being far less costly and labor-intensive > and far more > transparent, they are vastly to be preferred (with a much > reduced panel > role in validity checking and calibration). > > Then we can work on optimizing them. > > Stevan > > PS Of course there are additional ways of validating metrics, > apart from > the RAE; moreover, only the UK has the RAE. But that also makes the UK > an ideal test-bed for prima facie validation of the metrics, > systematically, across fields and institutions. > From harnad at ECS.SOTON.AC.UK Wed Jun 6 17:30:40 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Wed, 6 Jun 2007 22:30:40 +0100 Subject: British Classification Soc post-RAE talk/discussion - 6 July (fwd) In-Reply-To: <006101c7a867$cba478c0$1302a8c0@loet> Message-ID: On Wed, 6 Jun 2007, Loet Leydesdorff wrote: > I look forward to your multi-variate regression model for explaining the RAE > rankings. (1) It's not explaining, it's predicting -- like weather-forecasting. (2) It's not a model. (Multiple regression is just the bog-standard general linear model for statistics.) (3) The idea is first to find metrics that are closely enough correlated with the RAE panel rankings to be confidently substituted for them. 
(4) Then the idea is to make them better, more powerful, more predictive, field by field, adjusting the regression weights on each metric as needed. (5) Not slavishly predictive of the RAE panel rankings any more (that would be circular), but predictive of future research performance, other (validated) metrics, other human rankings. Stevan Harnad From loet at LEYDESDORFF.NET Thu Jun 7 00:50:17 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Thu, 7 Jun 2007 06:50:17 +0200 Subject: British Classification Soc post-RAE talk/discussion - 6 July (fwd) In-Reply-To: Message-ID: > (1) It's not explaining, it's predicting -- like weather-forecasting. Multi-variate regression is based on a static model. (Otherwise, there is auto-correlation in the data and also in the error terms.) You may wish to develop a more dynamic conceptualization. In the case of a system, the Markov assumption is often a good one: the best prediction of a system at t+1 is its state at t. The research system is more stable than the weather. (I don't know how stable the RAE has been, but it seems to me that the British system is rather stable.) > (2) It's not a model. (Multiple regression is just the bog-standard > general linear model for statistics.) > > (3) The idea is first to find metrics that are closely enough > correlated > with the RAE panel rankings to be confidently substituted for them. > > (4) Then the idea is to make them better, more powerful, more > predictive, field by field, adjusting the regression weights on each > metric as needed. Yes: this is why I suggested in a previous email to develop a structural equation model (LISREL). LISREL is also static. If you wish to extend the multi-variate regression with a dynamic analysis, entropy statistics would be my prime candidate ("The Challenge of Scientometrics", Leiden: DSWO/Leiden University Press, 1995). 
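The calibration step under discussion — fitting weights for a battery of metrics so that their combination best reproduces the panel rankings — is, as Harnad says, bog-standard multiple regression. A minimal sketch with synthetic data (the number of departments, the metric values, the "true" weights, and the noise level are all illustrative assumptions, not RAE data):

```python
import numpy as np

# Illustrative data: rows are departments, columns are candidate metrics
# (e.g. citations, h index, downloads); all values are synthetic.
rng = np.random.default_rng(0)
metrics = rng.random((50, 3))                    # 50 departments, 3 metrics
true_w = np.array([0.6, 0.3, 0.1])               # hypothetical "true" weights
panel = metrics @ true_w + rng.normal(0, 0.05, 50)  # noisy panel scores

# Ordinary least squares: calibrate the metric weights against the panel scores.
X = np.column_stack([metrics, np.ones(len(metrics))])  # add an intercept column
w, *_ = np.linalg.lstsq(X, panel, rcond=None)

predicted = X @ w
r = np.corrcoef(predicted, panel)[0, 1]          # the "joint correlation"
print("fitted weights:", w[:3], "correlation:", round(r, 2))
```

As Leydesdorff's objection implies, a high joint correlation here also reproduces whatever error the panel scores contain; the quality of the fit says nothing by itself about the validity of the rankings being fitted.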
> (5) Not slavishly predictive of the RAE panel rankings any more (that > would be circular), but predictive of future research > performance, other > (validated) metrics, other human rankings. It would be more interesting to take "research performance" as the explanandum. But I had understood that you wished to use it as a predictor for the ranking in the RAE ("first things first!"). For example, one would expect research performance to be confounded with other factors (e.g., sexism and nepotism; Wennerås, C., & Wold, A. (1997). Nepotism and sexism in peer-review. Nature, 387, 341-343) in a peer-based ranking exercise like the RAE. > Stevan Harnad > With best wishes, Loet From garfield at CODEX.CIS.UPENN.EDU Fri Jun 8 19:52:53 2007 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 8 Jun 2007 19:52:53 -0400 Subject: Scanes CG "Poultry science: Celebrating its impact factor, impact, and quality " POULTRY SCIENCE 86 (1): 1-1 JAN 2007 Message-ID: Title: Poultry science: Celebrating its impact factor, impact, and quality Author(s): Scanes CG (Scanes, Colin G.)
Source: POULTRY SCIENCE 86 (1): 1-1 JAN 2007 Document Type: Editorial Material Language: English Cited References: 3 Times Cited: 0 Publisher: POULTRY SCIENCE ASSOC INC, 1111 NORTH DUNLAP AVE, SAVOY, IL 61874-9604 USA Subject Category: Agriculture, Dairy & Animal Science IDS Number: 124WN ISSN: 0032-5791 CITED REFERENCES : HOEFFEL C Journal impact factors ALLERGY 53 : 1225 1998 OPTHOF T Sense and nonsense about the impact factor CARDIOVASCULAR RESEARCH 33 : 1 1997 VINKLER P Characterization of the impact of sets of scientific papers: The Garfield (Impact) Factor JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY 55 : 431 2004 From garfield at CODEX.CIS.UPENN.EDU Fri Jun 8 20:04:06 2007 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 8 Jun 2007 20:04:06 -0400 Subject: Van Fleet DD "Increasing the value of teaching in the academic marketplace: The creation of a peer-review infrastructure for teaching " ACADEMY OF MANAGEMENT LEARNING & EDUCATION 4 (4): 506-514 DEC 2005 Message-ID: DD Van Fleet : ddvf at asu.edu Title: Increasing the value of teaching in the academic marketplace: The creation of a peer-review infrastructure for teaching Author(s): Van Fleet DD (Van Fleet, David D.), Peterson TO (Peterson, Tim O.) Source: ACADEMY OF MANAGEMENT LEARNING & EDUCATION 4 (4): 506-514 DEC 2005 Document Type: Article Language: English Cited References: 39 Times Cited: 0 Abstract: Despite moves in the academic marketplace to broaden the definition of scholarship to include teaching and learning along with research, the value of teaching continues to be perceived as relatively low and not readily transferable from institution to institution. Our purpose here is to try to draw attention to that differential bargaining power so that perhaps some tentative first steps could be taken to remedy it.
One idea for establishing appropriate value of teaching in the academic marketplace is to devise and use an infrastructure similar to that which exists for research. We provide a draft outlining one possible parallel infrastructure. EXCERPT FROM PAPER : "In the research marketplace, high relative performance is indicated in various ways - acceptance and rejection rates, the use of external reviewers, impact factors, or published rankings. As most editors would explain, however, journal rankings should not be the sole basis of quality judgments because "lower level" journals may sometimes contain articles of tremendous quality and impact, and "top" journals may sometimes contain flawed or less significant articles (Starbuck, 2003). Those involved in faculty personnel decisions should never be allowed to avoid reading the works of those who are being reviewed (Van Fleet, McWilliams, & Siegel, 2000)." KeyWords Plus: RESEARCH PRODUCTIVITY; MANAGEMENT; TENURE; PUBLICATION; JOURNALS Addresses: Van Fleet DD (reprint author), Arizona State Univ, Tempe, AZ 85287 USA Arizona State Univ, Tempe, AZ 85287 USA Texas A&M Univ, College Stn, TX 77843 USA Publisher: ACAD MANAGEMENT, PACE UNIV, PO BOX 3020, 235 ELM RD, BRIARCLIFF MANOR, NY 10510-8020 USA Subject Category: Education & Educational Research; Management IDS Number: 095FT ISSN: 1537-260X CITED REFERENCES : ALBERS C Using the syllabus to document the scholarship of teaching TEACHING SOCIOLOGY 31 : 60 2003 ASTIN AW AM COLL TEACHER NATL : 1997 BARTUNEK JM ACAD MANAGEMENT LEAR 1 : 7 2002 BEDEIAN AG ACAD MANAGEMENT LEAR 3 : 198 2004 BORDONS M Advantages and limitations in the use of impact factor measures for the assessment of research performance in a peripheral country SCIENTOMETRICS 53 : 195 2002 BOYER EL SCHOLARSHIP RECONSID : 1990 CADWALLADER ML REFLECTIONS ON ACADEMIC-FREEDOM AND TENURE LIBERAL EDUCATION 69 : 1 1983 DANOS P ENEWSLINE 3 : 2004 FOGG P CHRONICLE HIGHE 0618 : A10 2004 GARFIELD E The multiple meanings of impact
factors JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 49 : 768 1998 GOODRICK E J MANAGE 28 : 649 2000 HOFSTADTER R DEV ACAD FREEDOM US : 1955 JENKINS A RESHAPING TEACHING H : 2003 KASTEN KL TENURE AND MERIT PAY AS REWARDS FOR RESEARCH, TEACHING, AND SERVICE AT A RESEARCH UNIVERSITY JOURNAL OF HIGHER EDUCATION 55 : 500 1984 KERR S MANUSCRIPT CHARACTERISTICS WHICH INFLUENCE ACCEPTANCE FOR MANAGEMENT AND SOCIAL-SCIENCE JOURNALS ACADEMY OF MANAGEMENT JOURNAL 20 : 132 1977 KOSTOFF RN The use and misuse of citation analysis in research evaluation - Comments on theories of citation? SCIENTOMETRICS 43 : 27 1998 LINDGREN J CHI KENT L REV 73 : 823 1998 LONG RG Research productivity of graduates in management: Effects of academic origin and academic affiliation ACADEMY OF MANAGEMENT JOURNAL 41 : 704 1998 MAGDOLA MBB CREATING CONTEXTS LE : 1999 MARTINKO MJ Bias in the social science publication process: Are there exceptions? JOURNAL OF SOCIAL BEHAVIOR AND PERSONALITY 15 : 1 2000 MCCALLUM LW A META-ANALYSIS OF COURSE-EVALUATION DATA AND ITS USE IN THE TENURE DECISION RESEARCH IN HIGHER EDUCATION 21 : 150 1984 MERTON RK SOCIOL SCI : 460 1973 METZGER WP FACULTY TENURE : 1973 MILLER SN IMPROVING COLLEGE U 32 : 87 1984 MOONEY CJ CHRONICLE HIGHE 0325 : A1 1992 MOONEY CJ CHRONICLE HIGHE 0325 : A14 1992 MOONEY CJ CHRONICLE HIGHE 0325 : A16 1992 PFEFFER J SOC FORCES 55 : 93 1977 POCKLINGTON T NO PLACE LEARN WHY U : 2002 SCHULTZ JJ ISSUES ACCOUNTING ED 4 : 109 1989 SCHWARTZ BN J ACCOUNTING ED 15 : 531 1997 STACK S Research productivity and student evaluation of teaching in social science classes: A research note RESEARCH IN HIGHER EDUCATION 44 : 539 2003 STAHL MJ PUBLICATION IN LEADING MANAGEMENT JOURNALS AS A MEASURE OF INSTITUTIONAL RESEARCH PRODUCTIVITY ACADEMY OF MANAGEMENT JOURNAL 31 : 707 1988 STARBUCK WH UNPUB MUCH BETTER AR : 2003 STRENSTROM RC J GEOLOGICAL ED 39 : 4 1991 VANFLEET DD A theoretical and empirical analysis of journal rankings: The case of formal lists 
JOURNAL OF MANAGEMENT 26 : 839 2000 VANFLEET DD J MANAGEMENT ED 18 : 77 1994 VESILIND PA SO YOU WANT PROFESSO : 2000 WEINBACK RW IMPROVING COLL U TEA 32 : 81 1984 From garfield at CODEX.CIS.UPENN.EDU Fri Jun 8 20:13:42 2007 From: garfield at CODEX.CIS.UPENN.EDU (Eugene Garfield) Date: Fri, 8 Jun 2007 20:13:42 -0400 Subject: Zavos C , Kountouras J , Katsinelos P "Impact factors: looking beyond the absolute figures and journal rankings " GASTROINTESTINAL ENDOSCOPY 64 (6): 1034-1034 DEC 2006 Message-ID: FULL TEXT AVAILABLE AT : http://www.giejournal.org/article/PIIS001651070602671X/fulltext Cristos Zavos : czavos at auth.gr Title: Impact factors: looking beyond the absolute figures and journal rankings Author(s): Zavos C (Zavos, Christos), Kountouras J (Kountouras, Jannis), Katsinelos P (Katsinelos, Panagiotis) Source: GASTROINTESTINAL ENDOSCOPY 64 (6): 1034-1034 DEC 2006 Document Type: Letter Language: English Cited References: 0 Times Cited: 0 Addresses: Zavos C (reprint author), Aristotle Univ Thessaloniki, Ippokrat Hosp, Med Clin, Dept Gastroenterol, Thessaloniki, Greece Aristotle Univ Thessaloniki, Ippokrat Hosp, Med Clin, Dept Gastroenterol, Thessaloniki, Greece Publisher: MOSBY-ELSEVIER, 360 PARK AVENUE SOUTH, NEW YORK, NY 10010-1710 USA Subject Category: Gastroenterology & Hepatology IDS Number: 116AM ISSN: 0016-5107 From abasulists at YAHOO.CO.IN Mon Jun 11 06:14:13 2007 From: abasulists at YAHOO.CO.IN (aparna basu) Date: Mon, 11 Jun 2007 11:14:13 +0100 Subject: Apartment for share at ISSI, Madrid Message-ID: Dear ISSI participants, My husband and I have rented an apartment in Serrano for a week from 23-29 June, as no hotel rooms were available on the ISSI website earlier this week. The apartment holds 4 people. Anyone who would like to share this accommodation is welcome to contact me ( aparnabasu.dr at gmail.com) The cost will be about 200 Eur. per head for the days mentioned.
With regards, Aparna Basu aparnabasu.dr at gmail.com Tel: +91 9818405134 (Apologies for cross-posting) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.bornmann at GESS.ETHZ.CH Thu Jun 14 03:44:18 2007 From: lutz.bornmann at GESS.ETHZ.CH (Bornmann Lutz) Date: Thu, 14 Jun 2007 09:44:18 +0200 Subject: h index Message-ID: Dear colleagues, you might be interested in the first review of studies on the h index: Bornmann, L. & Daniel, H.-D. (2007). What do we know about the h index? Journal of the American Society for Information Science and Technology, 58(9), 1381-1385 Abstract: Jorge Hirsch (2005a, 2005b) recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a lot of attention in the scientific community. The claim that the h index in a single number provides a good representation of the scientific lifetime achievement of a scientist as well as the (supposed) simple calculation of the h index using common literature databases lead to the danger of improper use of the index. We describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index. Anyone wanting a copy of our publication, please let me know. Lutz Bornmann ----------------------------------------------------------------------------- Dr. Lutz Bornmann ETH Zurich, D-GESS Professorship for Social Psychology and Research on Higher Education Zaehringerstr.
24 / ZAE CH-8092 Zurich Phone: 0041 44 632 48 25 Fax: 0041 44 632 12 83 http://www.psh.ethz.ch/index_EN bornmann at gess.ethz.ch Download of publications: www.lutz-bornmann.de/Publications.htm From loet at LEYDESDORFF.NET Sat Jun 16 05:54:24 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sat, 16 Jun 2007 11:54:24 +0200 Subject: embellish Pajek pictures Message-ID: Dear colleagues, I added a sixth lesson to the set of science & technology indicators at http://www.leydesdorff.net/indicators entitled "Embellish Pajek pictures". It provides instructions on how to export the files as scalable vector graphics (SVG) files, which can both be used within HTML and be processed further using Adobe Illustrator. With best wishes, Loet _____ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR) Kloveniersburgwal 48, 1012 CX Amsterdam Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated. 385 pp.; US$ 18.95 The Self-Organization of the Knowledge-Based Society; The Challenge of Scientometrics -------------- next part -------------- An HTML attachment was scrubbed... URL: From abasulists at YAHOO.CO.IN Sat Jun 16 15:35:29 2007 From: abasulists at YAHOO.CO.IN (aparna basu) Date: Sat, 16 Jun 2007 20:35:29 +0100 Subject: Apartment available at ISSI Madrid Message-ID: An apartment is available in Serrano in Madrid during the ISSI 2007 conference, 23-29th June, 2007. There is one bedroom, a living room, a kitchen, and a bath; it can house 4 people. The cost is 570 Euro + 200 Euro (deposit). Please contact Aparna Basu (aparnabasu.dr at gmail.com) for details. Aparna +91 9818405134 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From blar at DB.DK Mon Jun 18 11:14:41 2007 From: blar at DB.DK (Birger Larsen) Date: Mon, 18 Jun 2007 17:14:41 +0200 Subject: CFP: 12th Nordic Workshop on Bibliometrics and Research Policy Message-ID: *** Apologies for cross postings *** Call for Presentations 12th Nordic Workshop on Bibliometrics and Research Policy 13-14 September 2007 Royal School of Library and Information Science Birketinget 6, DK-2300 Copenhagen S, Denmark Bibliometric researchers in the Nordic countries have arranged annual Nordic workshops on bibliometrics since 1996: 1996 in Helsinki 1997 in Stockholm 1998 in Oslo 1999 in Copenhagen 2000 in Oulu 2001 in Stockholm 2002 in Oslo 2003 in Aalborg 2004 in Turku 2005 in Stockholm 2006 in Oslo The general scope of the workshop is to present recent bibliometric research in the region and to create better linkages between the bibliometric research groups and their PhD students. Note, however, that the workshop language is English and the workshop is open to participants from any nation. CALL FOR PRESENTATIONS The 12th Nordic Workshop on Bibliometrics and Research Policy will be held in Copenhagen, 13-14 September 2007. The workshop format is interactive and informal: All participants are requested to make a presentation of a research paper or a research idea, but no paper needs to be submitted. Please register by email to Birger Larsen (see http://www.db.dk/blar for the email) and also submit a max 200 word abstract on what you will present as soon as possible and no later than August 1st 2007 if you wish to participate. This year's Keynote Speaker is Dr. Gunnar Sivertsen from NIFU/STEP in Oslo (http://www.nifustep.no/content/view/full/447). He will talk on 'Publication patterns in complete bibliographic data (all scientific journals and books) at all Norwegian universities.' Note that there are no fees for participating in the Nordic workshops on bibliometrics.
However, travel and accommodation have to be arranged by the participants themselves. IMPORTANT DATES Deadline for registration and abstract submission: August 1st, 2007. Workshop: September 13-14, 2007. Please visit the workshop website at http://www.db.dk/nbw2007 or contact the organisers for additional information. WORKSHOP ORGANISERS Birger Larsen, Lennart Bjørneborn and Peter Ingwersen Royal School of Library and Information Science, Denmark _____________________________________________________ Birger Larsen, PhD Associate Professor Department of Information Studies Royal School of Library and Information Science Birketinget 6, DK-2300 Copenhagen S, Denmark Tel. +45 3258 6066 / +45 32341520, Fax. +45 32840201 Email: blar at db.dk, Homepage: http://www.db.dk/blar - Co-organiser of the INEX interactive track (http://inex.is.informatik.uni-duisburg.de/2006/) From Jessica.Shepherd at GUARDIAN.CO.UK Mon Jun 18 14:04:56 2007 From: Jessica.Shepherd at GUARDIAN.CO.UK (Jessica Shepherd) Date: Mon, 18 Jun 2007 19:04:56 +0100 Subject: Jessica Shepherd/Guardian/GNL is out of the office. Message-ID: I will be out of the office starting 18/06/2007 and will not return until 25/06/2007. I will be in Australia from the evening of Wednesday May 16th until the afternoon of May 26th. I will be checking my emails, but may not be able to reply swiftly. In an emergency, please contact Sharon Bainbridge on 020 7239 9943. ------------------------------------------------------------------ The Guardian Public Services Awards 2007, in partnership with Hays Public Services, recognise and reward outstanding performance from public, private and voluntary sector teams.
From harnad at ECS.SOTON.AC.UK Tue Jun 19 19:03:31 2007 From: harnad at ECS.SOTON.AC.UK (Stevan Harnad) Date: Wed, 20 Jun 2007 00:03:31 +0100 Subject: Australian Opposition metrics In-Reply-To: <5ptc53$3sarqp@ozemail-mail.icp-qv1-irony14.iinet.net.au> Message-ID: Pertinent Prior AmSci Topic Threads: "Future UK RAEs to be Metrics-Based" http://users.ecs.soton.ac.uk/harnad/Hypermail/Amsci/5250.html "Australia's RQF" http://users.ecs.soton.ac.uk/harnad/Hypermail/Amsci/5805.html "Academics strike back at spurious rankings" (Nature, 31 May) http://users.ecs.soton.ac.uk/harnad/Hypermail/Amsci/6452.html On Tue, 19 Jun 2007, Arthur Sale wrote: > The Australian Opposition has announced that should they win office in a > federal election to be held later this year, they will scrap the > Government's planned RQF process (based on peer panels and the UK's RAE), > and replace it by a metric-based quality assessment. > > "The RQF process is cumbersome, costly and threatens to become > incredibly time-consuming. It is neither an efficient nor a > transparent way to allocate valuable research dollars to universities. > "Labor will work hand in hand with researchers, and their > institutions, to develop a research quality assurance framework > that is world's best practice. It will be metrics based. It will > be transparent. It will take due account of differences between > disciplines and discipline groups so that measures are fair, and > funding can flow equitably." Expensive, time-consuming panel-reviews of research performance should certainly be phased out in favour of metrics, customised field by field. So an important question is: Against what will Australia's metrics be validated?
The UK RAE is conducting one last *parallel* panel/metric exercise, in which various combinations of metrics can be systematically compared to and validated against panel rankings, field by field. Will the UK's RAE 2008 parallel exercise be Australia's testbed for metrics too? That might not be a bad idea. Though with plans for Australia's RQF already quite advanced, it might be even better to do a parallel panel/metric validation exercise in Australia too, to replicate and cross-validate the UK outcomes (perhaps in collaboration or coordination with the UK). (I certainly don't mean that panel rankings are the face-valid arbiters of research performance quality or impact! But they do have a history, at least in the UK, and so they provide a starting reference point. Human judgment will also be needed to tweak the metric weights to make sure they generate sensible rankings.) Unbiassed Open Access Metrics for the Research Assessment Exercise http://openaccess.eprints.org/index.php?/archives/175-guid.html Harnad, S. (2007) Open Access Scientometrics and the UK Research Assessment Exercise. To appear in: Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics, 25-27 June 2007, Madrid, Spain. http://eprints.ecs.soton.ac.uk/13804/ Stevan Harnad From Chaomei.Chen at CIS.DREXEL.EDU Tue Jun 19 20:33:45 2007 From: Chaomei.Chen at CIS.DREXEL.EDU (Chaomei Chen) Date: Tue, 19 Jun 2007 20:33:45 -0400 Subject: Chaomei Chen/Drexel_IST is out of the office. Message-ID: I will be out of the office starting Tue 06/19/2007 and will not return until Fri 06/29/2007. I will be out of the office from 6/19/2007 till 6/29/2007. I will respond to your message as soon as I can. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugene.garfield at THOMSON.COM Thu Jun 21 02:11:18 2007 From: eugene.garfield at THOMSON.COM (Eugene Garfield) Date: Thu, 21 Jun 2007 02:11:18 -0400 Subject: Impact factor, H index, peer comparisons, and Retrovirology Message-ID: Editorial Impact factor, H index, peer comparisons, and Retrovirology: is it time to individualize citation metrics? Kuan-Teh Jeang, National Institutes of Health, Bethesda, MD, USA Retrovirology 2007, 4:42 doi:10.1186/1742-4690-4-42 The electronic version of this article is the complete one and can be found online at: http://www.retrovirology.com/content/4/1/42 © 2007 Jeang; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. _____ Abstract There is a natural tendency to judge a gift by the attractiveness of its wrapping. In some respect, this reflects current mores of measuring the gravitas of a scientific paper based on the journal cover in which the work appears. Most journals have an impact factor (IF) which some proudly display on their face page. Although historically journal IF has been a convenient quantitative shorthand, has its (mis)use contributed to inaccurate perceptions of the quality of scientific articles? Is now the time that equally convenient but more individually accurate metrics be adopted?
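The h index raised in this editorial (and reviewed by Bornmann and Daniel earlier in this digest) is simple to compute once an author's citation counts are in hand: an author has index h when h of his or her papers have at least h citations each. A minimal sketch, with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still "supports" h
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for three authors' papers:
print(h_index([10, 8, 5, 4, 3]))   # -> 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))   # -> 3
print(h_index([0, 0]))             # -> 0
```

Note how the definition makes h insensitive both to a few very highly cited papers (the 25 in the second example does not raise it) and to a long tail of uncited ones, which is part of what the "corrections and complements" literature addresses.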
Outline: Abstract; Acknowledgements; References. Figures: Figure 1 A comparison of Retrovirology's calculated 2006 impact factor with selected journals that publish retrovirus research papers. Tables: Table 1 Citation frequency and H index for selected Retrovirology Editorial Board members (data collated on June 11, 2007 from Scopus). I surmise that a common question posed to an editor of a new journal is "What is your impact factor?" Based on my experience, in the majority of instances as the conversation evolves, it becomes evident that the questioner misunderstands what impact factor means. IF is a useful number. However, its limitations must be clearly recognized. Given the pervasive (if not obsessive) interest in IF, Retrovirology, as a new journal entering its fourth year of publication, has necessarily mined the citation databases and calculated IF numbers for 2005 (2.98) and 2006 (4.32) [1]. After having captured those numbers, it is perhaps instructive to consider some factual denotations and frequently misinterpreted connotations of IF. Indeed, as science and medicine march to a more personalized approach, one might further ask if it is time to embrace highly accessible technology in order to complement/supplant generic IF with individually precise citation metrics? When responding, please attach my original message __________________________________________________ Eugene Garfield, PhD.
email: garfield at codex.cis.upenn.edu home page: www.eugenegarfield.org Tel: 215-243-2205 Fax 215-387-1266 President, The Scientist LLC. www.the-scientist.com 400 Market St., Suite 1250 Phila. PA 19106- Chairman Emeritus, ISI www.isinet.com 3501 Market Street, Philadelphia, PA 19104-3302 Past President, American Society for Information Science and Technology (ASIS&T) www.asis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.vandenbesselaar at RATHENAU.NL Fri Jun 22 01:51:34 2007 From: p.vandenbesselaar at RATHENAU.NL (Peter van den Besselaar) Date: Fri, 22 Jun 2007 07:51:34 +0200 Subject: open positions Message-ID: At the Rathenau Instituut, department of Science System Assessment, several research positions are open at the junior and postdoc level. For information, visit the website: http://www.rathenau.nl/showpage.asp?steID=1&ID=2962 best regards, Peter van den Besselaar _____________________________ Prof. dr Peter van den Besselaar Head of department Science System Assessment Rathenau Instituut P.O.
Box 95366 2509 CJ Den Haag +31(0)70 342 1542 & Amsterdam School of Communications Research ASCoR Universiteit van Amsterdam http://home.medewerker.uva.nl/p.a.a.vandenbesselaar/ From krobin at JHMI.EDU Thu Jun 28 14:49:38 2007 From: krobin at JHMI.EDU (Karen Robinson) Date: Thu, 28 Jun 2007 14:49:38 -0400 Subject: seeking guidance Message-ID: I am writing in the hope that I am being silly in not seeing a solution, or that one of you will already have identified one. In brief: I have a set of citations downloaded from ISI based on a specific search. I have been trying different tasks with the programmes created by Dr. Leydesdorff (thank you for making these available!). What I'd like to do is determine the citation patterns of the cited references of the original search set. Specifically, given search set A, do the cited refs of A cite each other? I am wondering how, if possible, I could automate the search for and downloading of the cited refs of the cited refs. Another complicating factor is that the cited refs do not have unique ID numbers and may be duplicated within my search set. Hints or suggestions for other people/sources to check with are greatly appreciated. Thanks, Karen -- Karen A. Robinson Internal Medicine and Health Sciences Informatics, Medicine Johns Hopkins University 1830 East Monument Street, Room 8069 Baltimore, MD 21287 410-502-9216 (voice) 410-955-0825 (fax) krobin at jhmi.edu From loet at LEYDESDORFF.NET Fri Jun 29 18:47:34 2007 From: loet at LEYDESDORFF.NET (Loet Leydesdorff) Date: Sat, 30 Jun 2007 00:47:34 +0200 Subject: seeking guidance In-Reply-To: <468402C2.1080904@jhmi.edu> Message-ID: Dear Karen, I am afraid that you have to organize the download manually. The ISI interface does not allow programming, and downloading is limited to 500 records at a time. Some people manage to work with macros which capture the movements of the mouse on the screen, but I found this more work than doing it manually.
With best wishes, Loet ________________________________ Loet Leydesdorff Amsterdam School of Communications Research (ASCoR), Kloveniersburgwal 48, 1012 CX Amsterdam. Tel.: +31-20- 525 6598; fax: +31-20- 525 3681 loet at leydesdorff.net ; http://www.leydesdorff.net/ From johannes.stegmann at ONLINEHOME.DE Sat Jun 30 14:58:46 2007 From: johannes.stegmann at ONLINEHOME.DE (Johannes Stegmann) Date: Sat, 30 Jun 2007 20:58:46 +0200 Subject: seeking guidance Message-ID: Dear Karen, You are definitely not silly, and I would not dare to apply such an attribute to Thomson-ISI, which is a commercial enterprise, but I am certainly not the only one who thinks they could catch up, at least a little, with PubMed, which allows downloading of the entire retrieval set even if it contains more than a hundred thousand records (>100,000). However, PubMed records lack the cited references. With the Web of Science (WoS) you have no chance to automate the download of the full records corresponding to the cited references. You could cooperate with an institution which owns the CD-ROM versions; as far as I could see, your Welch library holds at least some cumulations of the CD versions. So, as always, first ask your librarian. Anyway, a small part of your research question might be solved by Gene Garfield's HistCite program, which (inter alia) provides you with the LCS, the LOCAL CITATION SCORE, which "is the number of times a paper is cited by other papers in the local collection" (http://www.histcite.com/). I am sure Gene will supply you with a free version (the commercial version will be released in July, as I learned a few moments ago), but he certainly still enjoys the days in good Old Europe following the ISSI conference in Madrid. Best wishes and regards, Johannes Dr. Johannes Stegmann Berlin, Germany email: johannes.stegmann at onlinehome.de homepage: http://www.johannesstegmann.de/
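Both HistCite's LCS and Karen's question, whether the cited refs of set A cite each other, come down to counting citation links that stay inside a local collection. A rough sketch of that counting step, assuming the records have already been parsed into a mapping from record ID to the set of IDs each record cites (the genuinely hard part, matching ISI cited-reference strings to records without unique IDs, is glossed over here):

```python
def local_citation_scores(records):
    """records: dict mapping record ID -> set of IDs that record cites.
    Returns each record's LCS: how many times it is cited by other
    records in the same local collection. Citations to records outside
    the collection, and self-citations, are ignored."""
    ids = set(records)
    lcs = {rid: 0 for rid in ids}
    for rid, cited in records.items():
        for target in cited:
            if target in ids and target != rid:
                lcs[target] += 1
    return lcs


# Toy collection: a cites b; b cites a and c; c cites nothing.
print(local_citation_scores({"a": {"b"}, "b": {"a", "c"}, "c": set()}))
```

Applied to Karen's case, the "collection" would be the cited references of set A rather than set A itself: any record whose LCS is nonzero is a cited ref that is itself cited by other cited refs.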