STI conference Leiden: metrics vs peer review

An extensive report on the conference, my presentation on the REF, and the conference's gender initiatives

I have been researching and publishing in the field of bibliometrics and research evaluation since Publish or Perish was introduced in 2006, and even started a whole new research program on the Quality and Impact of Academic Research. However, as this had remained a "research hobby", with my five main research programs being in the field of International Management, I had never been able to attend a conference in this field. So when the Science and Technology Indicators conference was held in the Netherlands this September, only a short flight away, I thought I should take the opportunity to finally meet the people whose work I had been reading over the last decade.

The conference location was Leiden, a city I had only visited once, more than 30 years ago, but which was said to be very beautiful. Well, that was certainly true! My new PhD student Divina Alexiou - who has lived in Leiden for a long time - gave me a tour of the city on the afternoon and evening before the conference: every single street we walked into was aesthetically pleasing as well as interesting and steeped in history. My own pictures of that walk are not very good, but fortunately one of the conference participants shared his pictures on Twitter and gave me permission to post some of them. They give a whole new meaning to the word picture-perfect!

My conference presentation

At the conference, I presented an update of one of my most controversial papers, which to date had only been published as a white paper on my website. Entitled "Running the REF on a rainy Sunday afternoon: Can we exchange peer review for metrics?", it makes the case that we should let metrics do the "heavy lifting" in the UK REF [Research Excellence Framework]. I show that a university-level ranking based on metrics (Microsoft Academic citations for all papers published with the university's affiliation between 2008 and 2013) correlates at 0.97 with the REF power rating taken from Research Fortnight's calculation. Using metrics to distribute research-related funding would free up a staggering amount of time and money and would allow us to come up with more creative and meaningful ways to build a research quality component into the REF.
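For readers who would like to run this kind of sanity check on their own data, here is a minimal sketch in Python. It is only an illustration, not the analysis from the paper itself: the file name university_scores.csv and the column names ma_citations and ref_power are hypothetical placeholders for a table with one row per university.

import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical input: one row per university, with its total Microsoft
# Academic citations (2008-2013) and its REF power rating.
df = pd.read_csv("university_scores.csv")

# Pearson correlation on the raw scores...
r, p = pearsonr(df["ma_citations"], df["ref_power"])

# ...and Spearman on the implied rankings, which is less sensitive
# to a handful of very large universities dominating the scores.
rho, p_rank = spearmanr(df["ma_citations"], df["ref_power"])

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")

Reporting both coefficients is a useful habit here: if the two are close, the agreement between metrics and peer review holds for the ranking as well as for the raw scores.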

From insidious competition to supportive research cultures

Moreover, by using metrics we would all win back the time we currently waste on evaluating individuals and papers for REF submission, a soul-destroying activity to begin with. Instead, we could spend our “evaluation time” where it really matters, i.e. in reading papers and funding applications of our (junior) colleagues before submission, and in reading their cases for tenure and promotion. More generally, rather than spending such a significant part of our academic lives giving “scores” to people and papers and finding fault with their work, why don’t we spend more of our time truly mentoring and developing junior academics? As someone who performs exactly this role at Middlesex University for some 50 junior academics, I can assure you it is also a far more interesting and rewarding job! So rather than an environment where we all compete against each other for that elusive 4* hit, let’s move to an environment where we support each other to do meaningful research (see also Adler & Harzing, 2009 and Return to Meaning: A Social Science with Something to Say).

Picture by Pieter Kroonenberg

The full slides of my presentation can be downloaded here. The other papers in my session were unusually well matched, as they also dealt with macro-level research evaluations, so I am including links to them below. I have also listed the two papers, by Sivertsen and by Pride and Knoth, that I refer to in my presentation. Both were published around the same time as my paper and come to very similar conclusions:

The first conference with 100% female chairs?

As is common for most conferences, the program was made up of 20 special track sessions, which had been predefined in the call for papers, and 28 cluster sessions. Uniquely, although the tracks were led by a mix of male and female academics, the conference organisers had ensured that all 28 cluster sessions were chaired by women. And guess what: some women did a great job, some were just good, some were so-so, and others were not so good. Pretty much like male chairs really; yes, we are all different :-). Thanks so much to the organisers for doing their bit in helping academics to see women as individuals rather than as "representatives of their gender".

If at the next conference you could also ensure a more equal representation on your panels, all of us at CYGNA = SWAN = Supporting Women in Academia Network will love you!! [see here for our last meeting at Middlesex University on internal vs external promotion]


Chairing a session

I was one of the 28 women chairing a cluster session. As my session was squeezed between an (overrunning) keynote speech and a back-to-back cluster session in the same room, I had told all the presenters that they really needed to stick to their allotted time slot of only 15 minutes. I might well have scared the hell out of some of them by contacting them before the conference.

Picture by Pieter Kroonenberg

But I needn't have worried: they were all excellent presenters and stuck to their slots without me even having to remind them of the timing. Great job Lin Zhang, Enrique Orduña-Malea, Gita Ghiasi, and Senay Yasar Saglam! Their papers are all linked here:
