Running the REF on a rainy Sunday afternoon: Do metrics match peer review?

A short summary of a white paper that proposes replacing the REF with a metrics-based exercise

In the early part of the 21st century I spent many, many hours in RAE (Research Assessment Exercise) committees, preparing the University of Bradford's submission to RAE 2001. As a junior academic I initially felt honoured to be asked to participate in this kind of "strategic" work. However, it didn't take long for me to realise the futility of the exercise, and soon I started to resent the way all of this committee work fragmented my research time.

So when I moved to Australia in 2001, I was mightily relieved to be rid of these government research assessment exercises. Or so I thought... In March 2010, I accepted my Dean's request to become Associate Dean Research. Only later did I discover that my immediate predecessor had lasted only a year; the work associated with the first Excellence in Research for Australia (ERA) exercise had been the final straw for him. As the new ADR, I experienced the joy of being responsible for the 2012 ERA exercise. To say I wasn't sorry to step down before preparation for the next one (ERA 2015) began is a bit of an understatement.

Since my return to the UK in 2014, my bewilderment at the enormous amount of time and money spent on the REF (Research Excellence Framework, the latest reincarnation of the RAE) has only grown. And given that I have been publishing more and more in the area of citation analysis since 2013, I thought it was time to use that expertise to propose an alternative to the REF. You can read the result here.

Abstract

This white paper focuses on a single key issue: the feasibility of replacing the cumbersome, convoluted, and contested REF process with a much simpler exercise based on metrics. Using the free Publish or Perish software to source citation data from Microsoft Academic (MA), I was able to create a citation-based ranking of British universities that correlates with the REF power ranking at 0.97. The whole process took me less than two hours of data collection and less than two hours of analysis.

So I suggest we replace the REF with metrics and use the time and money thus saved more productively. Let's stop wasting our time on evaluating individuals for REF submission or serving on REF panels that evaluate the "end products" of our colleagues' research activity. Instead, let's spend our "evaluation time" where it really matters, i.e. in reading our (junior) colleagues' papers and funding applications before submission, and in carefully evaluating their cases for tenure and promotion. Wouldn't that create a far more positive and productive research culture?
