STI conference Leiden: metrics vs peer review
An extensive report on the conference, my presentation on the REF, and gender initiatives
I have been researching and publishing in the field of bibliometrics and research evaluation since Publish or Perish was introduced in 2006, and I even started a whole new research program on the Quality and Impact of Academic Research. However, as this had remained a "research hobby" (my five main research programs are in the field of International Management), I had never been able to attend a conference in this field. So when in September this year the Science and Technology Indicators conference was held in the Netherlands, only a short flight away, I thought I should take the opportunity to finally meet up with all the people whose work I had been reading over the last decade.
The conference location was Leiden, a city I had only ever visited once, more than 30 years ago, but which was said to be very beautiful. Well, that was certainly true! My new PhD student Divina Alexiou, who has lived in Leiden for a long time, gave me a tour of the city on the afternoon and evening before the conference: every single street we walked into was aesthetically pleasing as well as interesting and steeped in history. My own pictures of that walk are not very nice, but fortunately one of the conference participants shared his pictures on Twitter and gave me permission to post some of them. They give a whole new meaning to the word picture-perfect!
My conference presentation
At the conference, I presented an update of one of my most controversial papers, which to date had only been published as a white paper on my website. Entitled "Running the REF on a rainy Sunday afternoon: Can we exchange peer review for metrics?", it makes the case that we should let metrics do the "heavy lifting" in the UK REF [Research Excellence Framework]. I show that a university-level ranking based on metrics (Microsoft Academic citations for all papers published with the university's affiliation between 2008 and 2013) correlates at 0.97 with the REF power rating taken from Research Fortnight's calculation. Using metrics to distribute research-related funding would free up a staggering amount of time and money and would allow us to come up with more creative and meaningful ways to build a research quality component into the REF.
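For anyone curious to try this kind of comparison on their own data, below is a minimal sketch in Python. The numbers are entirely made up for illustration; the paper itself used Microsoft Academic citation counts per university and the Research Fortnight REF power ratings.

```python
# Minimal sketch: correlating a metrics-based university score with a REF
# power rating. All values below are hypothetical placeholders, not data
# from the paper.
from scipy.stats import pearsonr, spearmanr

citations = [182_000, 150_500, 97_300, 64_200, 41_800, 23_900]  # hypothetical citation totals
ref_power = [100.0, 86.5, 55.2, 38.9, 24.1, 14.7]               # hypothetical power ratings

r, _ = pearsonr(citations, ref_power)      # linear association of the raw scores
rho, _ = spearmanr(citations, ref_power)   # agreement of the implied rankings

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```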
From insidious competition to supportive research cultures
Moreover, by using metrics we would all win back the time we waste on evaluating individuals and papers for REF submission, a soul-destroying activity in the first place. Instead, we could spend our "evaluation time" where it really matters, i.e. reading the papers and funding applications of our (junior) colleagues before submission, and reading their cases for tenure and promotion. More generally, rather than spending such a significant part of our academic lives giving "scores" to people and papers and finding faults in their work, why don't we spend more of our time truly mentoring and developing junior academics? As someone who performs exactly this role at Middlesex University for some 50 junior academics, I can assure you it is also a far more interesting and rewarding job! So, from an environment where we all compete against each other to get that elusive 4* hit, let's move to an environment where we support each other to do meaningful research (see also Adler & Harzing, 2009 and Return to Meaning: A Social Science with Something to Say).
Picture by Pieter Kroonenberg
The full slides of my presentation can be downloaded here. The other papers in my session were unusually well matched, as they also dealt with macro-level research evaluations, so I am including links to them below. I have also listed two papers, by Sivertsen and by Pride & Knoth, that I refer to in my presentation. Both were published around the same time as my paper and come to very similar conclusions:
- Anne-Wil Harzing: Running the REF on a rainy Sunday afternoon: Can we exchange peer review for metrics?
- Ludo Waltman: Responsible metrics: One size doesn’t fit all
- Michael Ochsner, Emanuel Kulczycki & Aldis Gedutis: The Diversity of European Research Evaluation Systems
- Gunnar Sivertsen: Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective
- David Pride & Petr Knoth: Peer review and citation data in predicting university rankings, a large-scale analysis
The first conference with 100% female chairs?
As is common for most conferences, the program was made up of 20 special track sessions, which had been predefined in the call for papers, and 28 cluster sessions. Uniquely, although the tracks were led by a mix of male and female academics, the conference organisers had ensured that all 28 cluster sessions were chaired by women. And guess what: some women did a great job, some were just good, some were so-so, and others were not so good. Pretty much like male chairs really; yes, we are all different :-). Thanks so much to the organisers for doing their bit in helping academics to see women as individuals rather than as "representatives of their gender".
If at the next conference you could also ensure a more equal representation on your panels, all of us at CYGNA (= swan = the Supporting Women in Academia network) will love you!! [see here for our last meeting at Middlesex University on internal vs external promotion]
Chairing a session
I was one of the 28 women chairing a cluster session. As my session was squeezed in between an (overrunning) keynote speech and a back-to-back cluster session in the same room, I had told all the presenters that they really needed to stick to their allotted time slot of only 15 minutes. I might well have scared the hell out of some of them by contacting them before the conference.
Picture by Pieter Kroonenberg
But I needn't have worried: they were all excellent presenters and stuck to their slot without me even having to remind them of the timing. Great job Lin Zhang, Enrique Orduña-Malea, Gita Ghiasi, and Senay Yasar Saglam! Their papers are all linked here:
- Measuring diversity of research output: do the authors’ field classification and the reference list approaches converge?
- Classic papers: using Google Scholar to detect the highly-cited documents
- Gender homophily in citations
- Cut Your Bootstraps: Use a Jackknife
Related blogposts
- An Australian "productivity boom"? ... or maybe just a database expansion?
- To rank or not to rank
- Transcending the (non)sense of academic rankings
- Citation analysis for the Social Sciences: metrics and data-sources
- Health warning: Might contain multiple personalities
- Why metrics can (and should?) be used in the Social Sciences
- The mystery of the phantom reference: a detective story
- Microsoft Academic is one year old: the Phoenix is ready to leave the nest
Anne-Wil Harzing is Emerita Professor of International Management at Middlesex University, London and Visiting Professor of International Management at Tilburg University. She is a Fellow of the Academy of International Business, a select group of distinguished AIB members who are recognized for their outstanding contributions to the scholarly development of the field of international business. In addition to her academic duties, she also maintains the Journal Quality List and is the driving force behind the popular Publish or Perish software program.