Inclusive academia (2): Inclusive research evaluation

Second of eight posts on my Irish Academy of Management Distinguished Scholar Interview

In October 2022 I received the happy news that I had been elected as the 2022 Irish Academy of Management Distinguished International Scholar. I received a beautiful glass sculpture and was interviewed by the amazing Alma McCarthy on the topic “Towards a more inclusive and proactive academia”. This post gives an overview of my work in inclusive research evaluation.

In research evaluation I have been working in three different areas: journal quality, the Publish or Perish software, and research on data sources and citation metrics. All these initiatives were primarily meant to create a more level playing field for academics in different disciplines, and in particular the Social Sciences. However, in many cases they have also led to more inclusivity at the level of individual academics who do not belong to the dominant group, as well as for universities with different missions.

So how did the Journal Quality List come about?

In my first permanent full-time academic job at the University of Bradford, I was an ECR member of the research committee. The Business School evaluated academics using a very old Dutch journal ranking, with journals ranked A (high) to D (low). It was heavily dominated by Economics; all Business journals ranked very low. The top journal in my field, Journal of International Business Studies, was ranked C, and the second journal back then – Management International Review – was ranked D. Others weren’t even included. So it was soon clear to me that I was going to have a hard time getting promoted there.

Therefore, I volunteered to collect journal rankings from other universities to substantiate my argument that we really needed to adjust our ranking, which we did in the end. I spent an awful lot of time on this, creating a full spreadsheet with all the rankings. Then I thought, well… if this helps me, why not see whether it can help others too? So I uploaded the list, which I called the Journal Quality List, to my own website, which I had just established in 1999. Now, 23 years later, the list is in its 69th edition, with 10 different rankings included. Over the years it has attracted an average of 40,000-45,000 page visits a year.

What is most important, though, is that the whole idea behind the Journal Quality List was diversity: not to create “one list to rule them all”, but rather to provide a variety of rankings from different countries. This allowed users to choose the ranking that suited their circumstances best.

How about the Publish or Perish software?

Publish or Perish is a free software program, created in 2006 to conduct citation analysis. It is now used for many purposes, including literature reviews, finding the right journal to submit your paper to, doing bibliometric analyses, preparing for job interviews, writing laudatios or obituaries, or doing some homework before meeting your academic hero. As far as we can estimate, it currently has more than a million users.

The reason for creating it was – like the JQL – very pragmatic. In 2006 my promotion application for full professor at the University of Melbourne was unsuccessful, as I hadn’t published enough in A journals. However, I knew that my work had had significant impact, despite not being published in A-listed journals. But as many IB journals were not listed in the Web of Science at the time, I could not substantiate this. Google Scholar had far more inclusive coverage, including not only all academic journals, but also books, book chapters, and other types of publications. However, its web interface didn’t make it easy to search for individual academics.

That’s why in 2006 my husband created a little software program for me to draw data from Google Scholar and calculate a few citation metrics, including the then just-introduced h-index. Again, I thought: well, if it helps me, it might help others too. So I uploaded it to my website as well. Now, 16 years, 8 major versions, and hundreds of minor versions later, it covers nine data sources, has more than a million users, and is still free. Ironically, the software has also become my most cited “publication”, with nearly 1,500 formal citations in Google Scholar; another 5,000 academics have used it in their academic articles without referencing it in their list of references.
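For readers who haven’t come across it: the h-index is the largest number h such that an author has h publications with at least h citations each. A minimal Python sketch of that calculation (illustrative only, not the actual Publish or Perish implementation):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still "supports" an h of this size
        else:
            break     # counts only decrease from here, so we can stop
    return h

# Example: five papers with these citation counts give an h-index of 3
print(h_index([25, 8, 5, 3, 1]))  # -> 3
```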

Although the software now provides access to multiple data sources, it initially focused on Google Scholar only. As Google Scholar had much better coverage of the Social Sciences and Humanities than other data sources, the software promoted inclusivity in terms of disciplines. Google Scholar’s more comprehensive coverage also levelled the playing field for universities focusing on more applied research and for academics publishing in languages other than English.

The ready availability of citation data also helped to expose academic systems that were not meritocratic; the software has been used in several countries – including Italy, Greece, Poland, and Romania – to uncover nepotism. Individual users – and particularly female academics – also tell me they find it helpful to expose what they call “academic buffoons”: academics whose image-management skills exceed their research skills.

Although I do recognise the problems with non-experts doing bibliometric analyses, there is no doubt that Publish or Perish has democratised citation analysis and has increased transparency, which is a first step towards more inclusivity.

Research in bibliometrics

The Publish or Perish software, combined with my role as Research Dean at the University of Melbourne, also led me to start doing research into bibliometrics in the Social Sciences. I noticed that nearly all bibliometric research focused on the Sciences and Life Sciences, because Web of Science coverage was not sufficient in other disciplines. So I started to do research on disciplinary coverage in Google Scholar, and later in many other data sources too.

My research found that a fair and inclusive cross-disciplinary comparison of research performance is possible, provided we use Google Scholar or Scopus as a data source and the individual annualised h-index – an h-index corrected for career length and co-authorship patterns that I introduced – as the metric of choice. My research also showed that the individual annualised h-index led to more inclusive rankings of individual academics in terms of gender, age, and disciplinary background.
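For the technically minded, here is a minimal Python sketch of the idea behind the metric: divide each paper’s citation count by its number of co-authors, take the h-index of those normalised counts, and divide the result by academic age (years since first publication). This is a simplified illustration; the actual Publish or Perish implementation handles more details.

```python
def h_index(counts: list[float]) -> int:
    # Same helper as in the earlier sketch: largest h with h items counting >= h.
    ranked = sorted(counts, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank), default=0)

def individual_annualised_h_index(citations: list[int],
                                  n_authors: list[int],
                                  academic_age_years: int) -> float:
    """h-index of co-author-normalised citation counts, per year of academic age."""
    per_author = [c / a for c, a in zip(citations, n_authors)]
    return h_index(per_author) / academic_age_years

# Hypothetical record: four papers, their co-author counts, and a 10-year career
print(individual_annualised_h_index([40, 12, 9, 3], [2, 4, 3, 1], 10))  # -> 0.3
```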

Most recently I have become interested in how “peer review” in the form of reputation surveys dominates university rankings, privileging established universities over new ones – such as post-92s in the UK – that have similar research metrics. For instance, in the 2022-2023 US News ranking of Global Universities, Middlesex University ranked 10th in the UK in Economics & Business in terms of normalised citation impact, but only 32nd in terms of reputation.

In the QS Business & Management ranking, my former university – the University of Melbourne – and Middlesex University are ranked identically on citations. But as Melbourne’s reputation scores – which count for 80% in QS – are so much higher than those of Middlesex, Melbourne ranks 26th, whereas Middlesex languishes in the 351-400 band. I think these reputation surveys simply entrench existing hierarchies and privilege in the academic system, making it harder for new universities to establish themselves. So I am planning to do further research on this soon.
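To see the arithmetic at work, here is a deliberately simplified illustration with made-up scores on a 0-100 scale (QS’s actual indicators, weights, and normalisation are more involved):

```python
def composite(reputation: float, citations: float, rep_weight: float = 0.8) -> float:
    """Toy weighted score in which reputation counts for 80%."""
    return rep_weight * reputation + (1 - rep_weight) * citations

# Identical citation scores, very different reputation scores (hypothetical numbers)
print(composite(reputation=95.0, citations=70.0))  # established university -> 90.0
print(composite(reputation=40.0, citations=70.0))  # newer university -> 46.0
```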

Find the resources on my website useful?

I cover all the expenses of operating my website privately. If you enjoyed this post and want to support me in maintaining my website, consider buying a copy of one of my books or supporting the Publish or Perish software.
