The Leiden manifesto in the making: proposal of a set of principles on the use of assessment metrics in the S&T indicators conference 2014

Moed, Henk (ELS-AMS) H.Moed at ELSEVIER.COM
Thu Sep 18 03:54:42 EDT 2014


Publishing a manifesto on the use of metrics in research assessment is an excellent idea: it summarizes, for a large audience of users and producers of metrics and for all those who are subjected to assessment, basic principles of appropriate use, grounded in experiences and insights collected and published over the past decades. But the responsibility of the metrics research community goes beyond publishing manifestos.

A principal task of our community (which is rather open, as many inventors of new indicators do not attend our specialized conferences) is developing valid and useful metrics and research assessment methodologies. The strong critique by the DORA initiative of a particular use of journal impact factors, one of the motivations for the proposed metrics manifesto, makes us once more aware of the need to develop more appropriate metric-based tools for assessing research performance, especially though not exclusively at the level of individuals and groups, and to contribute to their embedding in a broader concept of multi-dimensional assessment.

I wish to defend the position that the metrics research community should take the lead in developing (designing, testing, and evaluating) such tools, including transparent features of data verification, flexible benchmarking and self-assessment. This development should take place independently of the big data providers, funders and politicians, applying rigorous quality standards, in close interaction with the scientific-scholarly community. Acquiring sufficient funding to carry out this task is a major challenge.

Also, we should play an initiating role in making such tools easily accessible (not necessarily free of charge) to the research community as a whole. Easy accessibility is also a necessary condition for establishing valid, fair and broadly accepted standards in metrics-based assessment. Our relationship with the big data industries may need to be re-invented, and possibly grow towards a covenant between international organizations of universities, funding councils and big data providers. These issues deserve as much attention as a manifesto on metrics use in research assessment.

Henk F. Moed
Visiting professor, Sapienza University of Rome, Italy.
Former professor of research assessment methodologies at Leiden University, and former senior scientific advisor at Elsevier.



From: ASIS&T Special Interest Group on Metrics [mailto:SIGMETRICS at LISTSERV.UTK.EDU] On Behalf Of Rijcke, S. de
Sent: Monday, 15 September 2014 16:11
To: SIGMETRICS at LISTSERV.UTK.EDU
Subject: [SIGMETRICS] The Leiden manifesto in the making: proposal of a set of principles on the use of assessment metrics in the S&T indicators conference 2014

Summary
A set of guiding principles (a manifesto) on the use of quantitative metrics in research assessment was proposed by Diana Hicks (Georgia Tech) during a panel session on quality standards for S&T indicators at the STI conference in Leiden last week. Various participants in the debate agreed on the responsibility of the scientometric community to better support the use of scientometrics. Finding the choice of specific indicators too constraining, many voices supported the idea of a joint publication of a set of principles that should guide responsible use of quantitative metrics. The session also included calls for scientometricians to take a more proactive role as engaged and responsible stakeholders in the development and monitoring of metrics for research assessment, as well as in wider debates on data governance issues such as infrastructure and ownership.
At the close of the conference, the association of scientometric institutes ENID (European Network of Indicators Designers), with Ton van Raan as president, offered to play a coordinating role in writing up and publishing a consensus version of the manifesto.
Full report of the plenary session at the 2014 STI conference in Leiden on "Quality standards for evaluation: Any chance of a dream come true?"
Ismael Rafols (Ingenio (CSIC-UPV) & SPRU (Sussex); Session chair)
Sarah de Rijcke (CWTS, Leiden University)
Paul Wouters (CWTS, Leiden University)
The need to debate these issues has come to the forefront in light of reports that the use of certain easy-to-use and potentially misleading metrics for evaluative purposes has become a routine part of academic life, despite misgivings within the profession itself about their validity. A central aim of the special session was to discuss the need for a concerted response from the scientometric community to produce more explicit guidelines and expert advice on good scientometric practices. The session continued from the 2013 ISSI and STI conferences in Vienna and Berlin, where full plenary sessions were convened on the need for standards in evaluative bibliometrics, and the ethical and policy implications of individual-level bibliometrics.
This year's plenary session started with a summary by Ludo Waltman (CWTS) of the pre-conference workshop on technical aspects of advanced bibliometric indicators. The workshop, co-organised by Ludo, was attended by some 25 participants, and the topics addressed included 1. Advanced bibliometric indicators (strengths and weaknesses of different types of indicators; field normalization; country-level and institutional-level comparisons); 2. Statistical inference in bibliometric analysis; and 3. Journal impact metrics (strengths and weaknesses of different journal impact metrics; use of the metrics in the assessment of individual researchers). The workshop discussions were very fruitful and some common ground was found, but significant differences of opinion also remained. Topics that need further discussion include the technical and mathematical properties of indicators (e.g., ranking consistency); strong correlations between indicators; the need to distinguish between technical issues and usage issues; purely descriptive approaches vs. statistical approaches; and the importance of user perspectives for technical aspects of indicator production. There was a clear interest in continuing these discussions at a next conference. The slides of the workshop are available on request.
Ludo's summary was followed by a short talk by Sarah de Rijcke (CWTS), to set the scene for the ensuing panel discussion. Sarah provided a historical explanation for why previous responses by the scientometric community about misuses of performance metrics and the need for standards have fallen on deaf ears. Evoking Paul Wouters' and Peter Dahler-Larsen's introductory and keynote lectures, she argued that the preferred normative position of scientometrics ('We measure, you decide') and the tendency to provide upstream solutions no longer serve the double role of the field very well. As an academic as well as a regulatory discipline, scientometrics not only creates reliable knowledge on metrics, but also produces social technologies for research governance. As such, evaluative metrics attain meaning in a certain context, and they also help shape that context. Though parts of the community now acknowledge that there is indeed a 'social' problem, ethical issues are often either conveniently bracketed off or ascribed to 'users lacking knowledge'. This reveals unease with taking any other-than-technical responsibility. Sarah promoted the idea of a short joint statement on proper uses of evaluative metrics, proposed at the international workshop at OST in Paris (12 May 2014; http://bit.ly/YsST6Y). She concluded with a plea for a more long-term reconsideration of the field's normative position. If the world of research governance is indeed a collective responsibility, then scientometrics should step up and accept its part. This would put the community in a much better position to engage productively with stakeholders in the process of developing good practices.
In the ensuing panel discussion, Stephen Curry (professor of Structural Biology at Imperial College, London, and member of the HEFCE steering group) expressed deep concern about the seductive power of metrics in research assessment and saw a shared, collective responsibility for the creation and use of metrics on the part of bibliometricians, researchers and publishers alike. According to him, technical and usage aspects of indicators should therefore not be artificially separated.
Lisa Colledge (representing Elsevier as Snowballmetrics project director) talked about the Snowballmetrics initiative, presenting it as a bottom-up, practical approach aimed at meeting the needs of funding organizations and senior university management. According to Lisa, while it primarily addresses research officers, feedback from the academic bibliometrics community is highly appreciated, as it contributes to empowering indicator users.
Stephanie Haustein (University of Montreal) was not convinced that social media metrics (a.k.a. altmetrics) lend themselves to standardization, given the heterogeneity of the data sources (tweets, views, downloads) and their constantly changing nature. She stated that the meaning of altmetrics data is highly ambiguous (attention vs. significance) and that quality control comparable to the peer review system for scientific publications does not yet exist.
Jonathan Adams (Chief scientist at Digital Science) supported the idea of drawing up a statement but emphasized that it would have to be short, precise and clear in order to catch the attention of government bodies, funding agencies and senior university management, who are uninterested in technical details. Standards will also have to keep pace with rapid change (data availability, technological innovations). He was critical of any fixed set of indicators, since this would not accommodate the strategic interests of every organisation.
Diana Hicks (Georgia Institute of Technology) presented a first draft of a set of statements (the "Leiden Manifesto"), which she proposed should be published in a top-tier journal like Nature or Science. The statements are general principles on how scientometric indicators should be used, for example 'Metrics properly used support assessments; they do not substitute for judgment' and 'Metrics should align with strategic goals'.
In the ensuing debate, many participants in the audience proposed initiatives and raised problems that need to be solved. These were partially summarised by Paul Wouters, who identified four issues around which the debate revolved. First, he proposed that a central issue is the connection between assessment procedures and the primary process of knowledge creation. If this connection is severed, assessments lose part of their usefulness for researchers and scholars.
The second question is what kind of standards are desirable. Who sets them? How open are they to new developments and different stakeholders? How comprehensive and transparent are standards, and how comprehensive and transparent should they be? What interests and assumptions are embedded in them? In the debate it became clear that scientometricians do not want to determine the standards themselves. Yet standards are being developed by database providers and universities, which are now busy building new research information systems. Wouters proposed that the scientometric community set as its goal the monitoring and analysis of evolving standards. This could help to better understand problems and pitfalls, and also provide technical documentation.
The third issue highlighted by Wouters is the question of who is responsible. While the scientometric community cannot assume full responsibility for all evaluations in which scientometric data and indicators play a role, it can certainly broaden its agenda. Perhaps an even more fundamental question is how public stakeholders can remain in control of the responsibility for publicly funded science when more and more meta-data is being privatized. Wouters pleaded for strengthening the public nature of the infrastructure of meta-data, including current research information systems, publication databases and citation indexes. This view does not deny the important role of for-profit companies, which are often more innovative. Fourth, Wouters suggested that these issues, taken together, provide an inspiring collective research agenda for the scientometrics community.
Diana Hicks' suggestion of a manifesto or set of principles was followed up on the second day of the STI conference at the annual meeting of ENID (European Network of Indicators Designers). The ENID assembly, with Ton van Raan as president, offered to play a coordinating role in writing up the statement. Diana Hicks' draft will serve as a basis, and it will also be informed by opinions from the community, important stakeholders and intermediary organisations, as well as those affected by evaluations. The debate on standardization and use will continue at upcoming science policy conferences, with a session confirmed for the AAAS meeting (San José, February) and sessions expected at the STI and ISSI conferences in 2015.
(Thanks to Sabrina Petersohn for sharing her notes of the debate.)


