Research Impact 101

Provides an overview of six key issues in the assessment of research impact.

© Copyright 2022 Anne-Wil Harzing. All rights reserved. First version, 20 June 2022

Academics are experts in their own areas of research. Yet, 20 years of experience in research evaluation as an academic administrator and provider of the Journal Quality List and the Publish or Perish software have taught me that academics are just as likely to fall prey to myths and fallacies as the (wo)man in the street.

This is understandable. Academics simply don’t have the time or energy to engage with the latest developments in bibliometrics or, more broadly, the “Science of Science”. Therefore, the rationale for my 15 years of “lay research” in the field of bibliometrics has been to bridge the “Science of Science” discipline and the “academic in the street”. In doing so, I hope to empower individual academics to make the best possible case for their research impact.

This white paper provides you with a Research Impact 101 course, divided into six sections. It helps you to find your way through the maze of research impact, drawing on terminology that we are all familiar with in our own research. The first section disambiguates different areas of research impact by focusing on three specific academic roles. The five remaining sections focus primarily on academic research impact.

Define and operationalize constructs

It should be easy, shouldn’t it? Impact is impact is impact. Well, no… When talking about research impact, academics often have very different understandings of the concept. Worse still, they might not even be aware that different interpretations are possible. As a result, any discussion about research impact soon descends into a Babylonian confusion of tongues. It also means that many academics struggle to evidence research impact, for instance when making a case for promotion (Academic promotion tips (1) – What makes a successful application). So, what is impact?

What is impact?

The Oxford dictionary defines impact as “a marked effect or influence”. Research impact thus means that our research has affected or influenced something or someone. Unfortunately, this immediately throws up even more questions:

  • Whom has it impacted, i.e., who is the target audience?
  • To what end has it made an impact, i.e., what was its ultimate goal?
  • Through what means has this impact occurred, i.e., what are the primary outlets?
  • How do we know this impact has occurred, i.e., how can we measure it?

The answer to these four questions very much depends on the specific academic role we are looking at (See Table 1).

Table 1: How research impact differs by academic role

HDI = United Nations Human Development Index
SDG = United Nations Sustainable Development Goals

Research role

In the research role of our academic jobs, our target audience is other academics, and our ultimate goal is to progress scientific knowledge through the incorporation of our work in the scholarly body of knowledge.

We connect with other academics through academic journals, research monographs, and conferences. The preferred types of outlets are highly dependent on disciplinary norms and preferences. In the Life Sciences and Natural Sciences, academic journals “rule”. In the Humanities and some of the Social Sciences, books are still a popular medium (Own your place in the world by writing a book). In some Engineering disciplines, conference papers are a popular way of diffusing knowledge quickly.

Whether or not we have influenced other academics is typically measured by citations in academic journals, books, and conference papers. Although we know that academics are sometimes careless in their referencing (Are referencing errors undermining our scholarship and credibility?), and there are many reasons to cite papers (On the shoulders of giants? Motives to cite in management research), one would normally expect citations to signify at least some level of impact on the citing academic. Citations can also be field normalized to account for differences in citation practices across disciplines (From h-index to hIa: The ins and outs of research metrics).
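
To make the idea concrete, here is a minimal sketch of one common normalization approach, in which each paper’s citation count is divided by the average citation count of papers from the same field and publication year. All fields, years, and counts below are invented for illustration; a real analysis would compute the baselines from a full bibliometric database rather than from the papers being evaluated.

```python
# Minimal sketch of field normalization: each paper's citation count is
# divided by the average citation count of papers in the same field and
# publication year. All fields, years, and counts below are invented.
from collections import defaultdict
from statistics import mean

def field_normalized_scores(papers):
    """papers: list of dicts with 'citations', 'field', and 'year' keys."""
    baselines = defaultdict(list)
    for p in papers:
        baselines[(p["field"], p["year"])].append(p["citations"])
    # A score above 1 means the paper is cited more than its field-year average.
    return [p["citations"] / mean(baselines[(p["field"], p["year"])]) for p in papers]

papers = [
    {"citations": 40, "field": "Cell Biology", "year": 2018},
    {"citations": 35, "field": "Cell Biology", "year": 2018},
    {"citations": 12, "field": "Management", "year": 2018},
    {"citations": 4,  "field": "Management", "year": 2018},
]
print(field_normalized_scores(papers))  # [1.07, 0.93, 1.5, 0.5] (rounded)
```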

Teaching role

As academics, we all have a tremendous impact on our students. Obviously, some of this impact will be unrelated to the research we do. However, in any good university, research feeds into the classroom and students benefit from research-informed teaching, allowing them to develop critical thinking skills.

We facilitate this directly by prescribing our own and other academics’ research as course readings. But in many cases, we will need to “translate” our research to make it more accessible for a student audience. We do so by publishing textbooks or articles in practitioner journals such as Harvard Business Review.

So, how do you know whether your research is used beyond your own classroom? You can find out if your research is cited in textbooks by using Google Books. To discover whether your publications (academic articles, textbooks or practitioner articles) are listed in teaching syllabi, Open Syllabus Explorer (Open Syllabus Explorer: evidencing research-based teaching?) is an incredibly useful tool.

Engagement role

So far, we have discussed the two key functions of any university: research and teaching. But there is also a third function: external engagement. This captures the impact of our research on industry, government, and the public/society at large, with the ultimate goal being to address key societal problems. It is the kind of impact incorporated in Impact Case Studies in the UK Research Excellence Framework (REF), defined as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment, or quality of life, beyond academia”.

Making our research accessible to an audience outside academia typically means “translating” our research for non-academic use by writing up articles for practitioner / professional journals and magazines, as well as publishing policy reports. It may also involve the use of (social) media, not just to diffuse our already published work, but also to allow continued engagement of non-academic audiences in our research. Integration of non-academic stakeholders from the inception of research projects is increasingly common in the Social Sciences.

Measuring this type of impact is challenging. Recent efforts like Overton – a database that captures policy impact by tracking citations in policy documents – might help. Academics may also be able to evidence changes in government policy or legislation that are linked to their research. A more general case for impact can be made by referring to frameworks such as the United Nations Human Development Index and Sustainable Development Goals.

Horses for courses

Different universities have very different strategies for research impact, especially regarding the research and engagement roles. In the 2021 REF, the national research evaluation in the UK, the field of Business & Management showed a very different ranking of universities depending on which of the constituent criteria you focused on. London Business School, the London School of Economics, and University College London excelled in the perceived quality of their publications, but had only an average performance in terms of societal impact. In contrast, Middlesex University, the University of Westminster – both modern universities – and SOAS (the School of Oriental and African Studies) topped the list for societal impact, but had only an average performance in terms of the perceived quality of their publications.

In sum: measuring impact

It is almost impossible to measure the ultimate effects in these three areas of research impact – progressing scientific knowledge, developing critical thinking, and addressing societal problems – accurately and comprehensively using citations or other quantitative metrics alone. On the other hand, relying only on testimonials and a fully narrative approach to establishing research impact is unlikely to convince audiences either. So, in evidencing impact – for instance when making a case for promotion – academics are advised to rely on a combination of metrics, qualitative evidence such as testimonials, and career narratives.

For a great summary of impact indicators, see this 27-page taxonomy. It was created by Amanda Cooper and Samantha Shewchuk and presents an overview of more than 400 indicators for the Social Sciences and Humanities. The authors gathered these indicators from more than 100 research impact resources in more than 32 countries and sorted them into six categories: Scholarship, Capacity building, Economy, Society and Culture, Practice, and Policy.

Research impact, research quality and research reputation

After defining the construct of research impact, we also need to carefully distinguish it from other, related, constructs, most importantly research quality and research reputation. The figure below attempts to sketch this relationship, focusing on academic research impact.

Figure 1: The relationship between research quality, (academic) research impact and research reputation

Research quality and research impact

Academic research impact – typically measured by citations – is often seen as a measure of research quality, and the two concepts are undoubtedly related. In my white paper “The four C’s of getting cited”, I have argued that competence (quality) is the first of the four C’s. It is also a conditio sine qua non: without a minimum level of quality, the three other C’s (collaboration, care, and communication) will have little impact on citations. Exceptions can always be found, but on average shoddy work will attract few citations, whereas high-quality, meaningful work is more likely to be cited.

There are, however, many elements of research quality that might not be captured by research impact, operationalized as citations. These elements are mainly related to the research process. They include the rigour with which the research has been conducted, the abstention from questionable research practices, and the extent to which Open Science practices have been followed. But research quality should also incorporate the extent to which research cultures are inclusive of different perspectives and different demographics.

Research communication

Research communication is a very important facilitator (or in academic terms: a mediator) to “translate” research quality into research impact. Remember, though: it is a mediator only; it should not be an end in itself! Research communication includes presenting your work at conferences and research seminars (or webinars). Increasingly, however, it also includes engaging with social media, for instance through blogging about your work and sharing it on platforms such as ResearchGate, Twitter and LinkedIn.

Many academics will pick up on your articles without being specifically informed about them. This may happen through communications from academic journals / publishers as well as their own search strategies. However, with increasing levels of publication output, a growing number of new journals, and more interdisciplinary research, these signals are much weaker than they were in the past. So, research communication is essential if you want your research to reach the right audience (academic or non-academic). See also: Improve your Research Profile (7): Follow the 7 steps for impact.

Research reputation and evaluation bias

Combining research quality and research impact with research communication is likely to create a strong research reputation. Although our emphasis in this white paper is on individual academics, this relationship also holds for journals, universities, and countries. However, as is shown in Figure 1, the strength of the relationship and the exact operationalization of the various concepts might vary for different levels of analysis and different disciplines.

Unfortunately, at the heart of the whole process also lies a major distortion of the “objective” measurement of the different concepts. Every variable and relationship in the model can be subject to bias. The same research quality is often evaluated differently depending on the academic’s demographics. The extent to which research quality leads to citations is subject to these same biases, as is the extent to which research communication leads to reputation building. Even the relationship between research reputation and research resources is fraught with bias. Although most of my own experience and advocacy has been in the area of gender bias, there are strong effects in the other areas listed in the figure (race, nationality, age, accent, class, sexuality, disability) too.

Avoid straw man arguments

As academics we are used to evaluating a problem from different, and even opposing, perspectives. Academic research – especially research in the Social Sciences – is never about black and white; it is about the various shades of grey. Ask an academic a question – any question – and their most likely answer will be: “it depends”. So, we take great care to avoid black-and-white views and straw man arguments, i.e., the distortion of an opposing stance to make it easier to attack. And yet… I regularly see straw man arguments used in the assessment of research impact.

Peer review: ideal and reality

When assessing research quality, most academics prefer peer review over metrics. They see peer review as the “Gold Standard” and often strongly oppose the use of metrics. However, in doing so they often contrast an ideal view of peer review with an overly reductionist – straw man – view of metrics, making it easy to reject metrics out of hand. In an ideal world, peer review consists of evaluation by informed, dedicated, and unbiased experts, who have unlimited time on their hands.

However, it is doubtful this golden age ever existed; it certainly doesn’t exist in today’s pressured academic world. Peer review is far from perfect. The likely reality of peer review is an evaluation carried out by hurried semi-experts whose assessment of research quality is – consciously or subconsciously – influenced by the journal in which the article was published, the country in which the research was conducted, the author’s university affiliation, as well as the author’s demographic characteristics such as gender and race.

Metrics: reductionist or inclusive

Metrics are often defined in reductionist terms, focusing only on the Web of Science as a data source and the journal impact factor or the h-index as metrics. Using these definitions, metrics are easily discarded in favour of “ideal world” peer review. However, a more inclusive data source and more inclusive metrics might compare far better to the likely reality of peer review.

It is true that the Web of Science doesn’t cover many publication outlets important in the Social Sciences and Humanities. But that simply means you need to use another data source that does, such as Scopus or Google Scholar (Citation analysis for the Social Sciences: metrics and data-sources). I have shown that if you do, academics in the Social Sciences match the performance of academics in the Natural and Life Sciences.

Indeed, the journal impact factor has many flaws (see also Avoid the ecological fallacy: level of analysis), and the h-index and citation counts cannot be compared across disciplines. There is a whole cottage industry of publications pointing this out over and over again. But why throw the baby out with the bathwater? Simply use field-corrected metrics.

Yes, citations do obviously grow with academic age, so younger and older scholars cannot be compared directly. Again, the solution is simple: correct for career stage. The individual annualized h-index (hIa), which I created, corrects for both age and disciplinary differences (From h-index to hIa: The ins and outs of research metrics). As you can see in Figure 2, using an inclusive data source and an inclusive metric changes disciplinary comparisons.

Figure 2: Average hIa per academic for five different disciplines in three different databases, July 2015 (source: Harzing & Alakangas, 2016)
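
For readers who want to see the mechanics, here is a minimal sketch of the hIa calculation following its published definition: each paper’s citations are divided by its number of co-authors, an h-index (hI,norm) is computed on those normalized counts, and the result is divided by the academic’s age in years since their first publication. The publication record below is invented.

```python
# Minimal sketch of the individual annualized h-index (hIa), following
# Harzing & Alakangas (2016): divide each paper's citations by its number
# of co-authors, compute an h-index (hI,norm) on those normalized counts,
# then divide by academic age (years since first publication).
# The publication record below is invented.

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def hia(papers, academic_age):
    """papers: list of (citations, n_authors) tuples; academic_age in years."""
    normalized = [cites / authors for cites, authors in papers]
    return h_index(normalized) / academic_age

papers = [(120, 3), (60, 2), (45, 1), (30, 5), (8, 2)]
print(hia(papers, academic_age=15))  # hI,norm = 4, so hIa = 4 / 15 ≈ 0.27
```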

My choice?

If I had to choose between inclusive metrics and real-world peer review, my bet would be on metrics. Judging from the user surveys I receive for Publish or Perish, I am not the only one! Many academics use it to combat what one of them called “academic buffoons”: academics whose power and influence are based on connections, favourable demographics, and strong impression management skills, rather than actual research performance.

Now I am not saying that citation metrics are immune to bias; metrics are subject to many of the same biases that are present in peer review. But at least metrics are based on the “crowd-sourced opinion” of a large cross-section of academics, rather than the view of a smallish group of academics who – in many cases – belong to the privileged majority.

So rather than resorting to black and white, let’s do what academics do best: explore the greys and consider carefully when metrics might be helpful, either in combination with peer review or on their own. We really need to go back to our academic roots: “it depends” should be our standard answer to the question “peer review or metrics?” too.

Test empirically – no anecdata please

The use of straw man arguments is frequently paired with a reliance on so-called anecdata: anecdotal evidence based on personal observations, opinions, or random investigations, but presented as fact. As academics we are quick to condemn the (wo)man in the street for using anecdata when talking about topics such as migration, vaccination, and climate change.

At the same time though, many academics seem happy to use anecdata to support their arguments that metrics should be condemned to the scrapheap. The argument usually goes something like this: “My best article isn’t my most highly-cited article, so all metrics are flawed” or “So-and-so published a really bad study, but it is on a sexy topic, so it is highly cited. This proves that citations can never be trusted”. To me this is the equivalent of “My grandma smoked a packet of cigarettes a day and lived to be 90”. Yes, the anecdote might well be true, but that doesn’t mean that it can be generalized to every individual paper or every individual academic.

As academics we know this, don’t we? So can we please do what we do best and read or conduct some actual research before proclaiming to know what we are talking about? Large-scale empirical studies do point to metrics being highly correlated with peer review, promotions, and external measures of esteem such as prestigious awards. Yes, there are always exceptions, but we would not question one of our core theories based on one outlier, would we? Ever heard of the saying “the exception proves the rule”?

Avoid the ecological fallacy: level of analysis

As academics in the Social Sciences, we are very familiar with different levels of analysis. We analyse data at the individual, dyadic, team, organizational, industry and country level, often incorporating multiple levels in one study. Likewise, impact can (and should) be measured very differently at different levels of analysis: individual academics, journals, and universities. When evaluating individuals, we should not use metrics designed for journals or universities.

So, don’t use journal impact factors to evaluate an individual academic’s article. Not every publication in a journal with a high impact factor will be highly cited, and some publications in low-ranked journals attract many citations. Of my top-10 most cited research outputs, only three were published in high-impact journals. Four were published in low-impact journals, two were books, and my most highly cited contribution is a software programme (Publish or Perish). Anecdata, I know, but sometimes it is nice to use an example…

Likewise, don’t use a university’s reputation to assess an individual academic’s standing. It is like using the class average to evaluate an individual pupil. Yes, mathematically the chances of an individual having a higher score are higher when the average is higher, but academic performance tends to be highly skewed, so knowing the average might tell you very little. We know this, don’t we? So why not apply it in our own profession?
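
A toy example illustrates the point: in the invented “department” below, two highly cited stars pull the mean far above what a typical member achieves, so the average says very little about any individual.

```python
# Toy illustration of a skewed "department": a few stars dominate the
# mean, so the average is a poor guide to any individual member.
# All citation counts are invented.
from statistics import mean, median

citations = [2, 3, 4, 5, 6, 8, 10, 12, 250, 400]
print(mean(citations))                              # 70.0, pulled up by two outliers
print(median(citations))                            # 7.0, the "typical" member
print(sum(c < mean(citations) for c in citations))  # 8 of 10 sit below the mean
```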

Not every academic working in one of the top-ranked universities is a genius; lower-ranked universities might have strong pockets of excellence. Academics might also have very different motivations at different stages of their careers. After climbing the academic career path at a world top-30 university, I consciously decided to switch to a “lower-ranked” university. University prestige (and salary) was less important to me than the ability to shape my own job in supporting promising early career researchers and creating supportive, collaborative and inclusive research cultures.

Research process versus research outcome?

So far, our discussion has mainly focused on research outputs. However, in the production of impactful research, the research process is as important as the research outcome (Relevant & Impactful Research: From words to action - From outcome to process). Yes, it is important to address some of the most pressing problems of our times, such as climate change, growing inequality, or migration.

But what good does this do if the research process involved questionable research practices or even outright research misconduct? What good does this do if the outcomes were only achieved through exploitation of a precarious workforce or through bullying and/or (sexual) harassment? What good does this do if junior collaborators and co-authors were not given due recognition?

What good does this do if these results were achieved by egocentrically focusing on one’s own research only, neglecting the many other academic roles that are essential to our profession, including discretionary service activities such as engaging in (constructive!) peer review, writing references, mentoring junior academics, and more generally creating positive academic cultures?

It is hard to measure some of these less “tangible” and less “visible” aspects of the research process. But that is where a movement such as the Humane Metrics Initiative comes in. I can highly recommend their recent report: Walking the Talk: Towards a Values Aligned Academy. It is also encouraging that many academic institutions are explicitly starting to recognize the importance of collegiality in their promotion guidelines.

And finally… a word of caution

Let’s ensure that research impact is not seen as “yet another tick-box” on a long and constantly growing list of performance indicators. In today’s academic world, it seems we need to be ground-breaking researchers, publishing constantly in the top journals and bringing in bucket-loads of research funding, as well as inspiring teachers who not only entertain and enlighten their students, but are also scrupulously fair, care for students’ individual differences, and provide pastoral care. On top of that, many of us need to be effective, efficient, politically astute, and inclusive academic administrators or managers.

Increasingly, we also need to pay attention to our research profile, reputation, and impact, ensuring our work not only has academic impact in the form of citations, but also external impact through external engagement. Whilst I fully agree that all of these activities are important in academia, we cannot expect every academic to do all of these things equally well. Universities do need to fulfil all these activities as a collective, but that doesn’t mean every single academic needs to do all of them, certainly not at every single stage of their career.

If we want academics to get serious about research impact, we need to ensure they are intrinsically motivated to achieve these outcomes. Most academics truly want to have an impact; they truly want to make a difference in the world. But many universities still treat funding (input) and publications (throughput) as the final “end product” of our research. So, I suggest we dramatically reduce performance expectations for funding and publications to allow academics to create real impact, using responsible and inclusive research processes.

Resources

Find the resources on my website useful?

I cover all the expenses of operating my website privately. If you enjoyed this post and want to support me in maintaining my website, consider buying a copy of one of my books or supporting the Publish or Perish software.
