The individual annualised h-index: an ecologically rational heuristic?
Studies a matched sample of University of Melbourne (Associate) Professors to test the hIa as an ecologically rational metric that is unbiased across disciplines, career stage, and gender, as well as predictive for promotion, mobility and awards.
© Copyright 2024 Anne-Wil Harzing. All rights reserved. First version, 9 January 2024
Table of contents
- Introduction
- Heuristics in research evaluation
- Conditions for an effective ERH metric
- Methods
- Descriptive results
- Unbiased metrics: disciplines, career stages, gender
- Predictive power: promotion, mobility, and awards
- Sensitivity checks: alternative metric and data source
- Conclusion
- References
Introduction
A single-minded focus on publication and citation metrics for the evaluation of research performance is problematic. There is no substitute for a careful, considered, unbiased, and qualitative evaluation of research quality by experts in the field.
However, time and expertise are not always available. Moreover, assessment is often – consciously or subconsciously – influenced by the journal in which an article was published, the country in which the research was conducted, the author’s university affiliation, as well as their demographic characteristics such as gender and race. Therefore, metrics are recommended as an aid to support decision-making in a range of academic decisions, such as performance evaluation and promotions.
Ecologically rational heuristics
However, these metrics need to be suited to the environment, i.e. they need to be ecologically rational heuristics (ERH). Heuristics – rules of thumb in lay language – are decision strategies that simplify decision-making by ignoring part of the information and focusing on one or a few dimensions. They challenge the common assumption that more knowledge or information is always better. Because they rely on relatively simple calculations, they are easy to explain to lay users and easy for those users to understand.
Ecological rationality argues that the rationality of a decision is not universal; it depends on the specific circumstances in which it takes place. Key to effective decision-making is to achieve one's goals in a specific context. The concept of ecological rationality was introduced by Gerd Gigerenzer, who argues that heuristics are not irrational or always second-best to optimization, as long as they are fit for the decision context.
The ERH perspective was recently introduced into bibliometrics by Lutz Bornmann, who has published several papers on the topic. Here, ecological rationality means answering the question of “when the use of specific bibliometric indicators leads to good decisions on performance and when it does not” (Bornmann, Ganser, & Tekles, 2022: 101237). However, to date these ideas do not seem to have gained much traction in the bibliometric community, let alone in the broader area of “citizen bibliometrics”: bibliometrics as practised by research managers and scientists.
Peer review versus metrics
In this white paper I argue that the use of ecologically rational heuristics in academic decision-making is an excellent way to strike a balance between:
- The uncritical acceptance of metrics. This appears to be the position of many - though by no means all - professional bibliometricians. Their predominant interest seems to be in finding the “perfect metric”, with little interest in its practical application or barriers to acceptance. However, perfect metrics do not exist; different metrics may be useful for different environments. Moreover, even metrics that are mathematically perfect might not be practically useful for “citizen bibliometrics”.
- The blanket refusal to consider metrics, instead advocating peer review only. This is the position adopted by many academics, especially in the Social Sciences & Humanities, as well as by many key decision-makers in higher education. As I have argued elsewhere (see my white paper Research Impact 101) this position is often based on strawman comparisons and anecdata, as well as an idealised view of peer review.
Through my provision of the free Publish or Perish software since 2006 and publications in bibliometrics since 2008, I have aimed to provide a bridge between professional and citizen bibliometricians. During that time, it has always struck me how little space there seems to be for a position between these two extreme camps. This vacuum may have occurred because we do not think carefully enough about the purpose of metrics and how their usefulness varies with their purpose.
In this white paper, I argue that the individual annualised h-index – hIa for short – that we introduced 10 years ago in Publish or Perish and tested with a matched sample of academics, is an ecologically rational heuristic that can be used to effectively and efficiently evaluate research performance across career stages and disciplines. As such it can be used predictively to identify high performers at an early stage in their career and to conduct a fair and inclusive comparison of research performance across disciplines.
Heuristics in research evaluation
Bornmann et al. (2022) discuss two specific heuristics in their paper in Journal of Informetrics that may be useful in a research evaluation context. The first is “one-reason decision making”, also called “take-the-best heuristic”. This equates to the application of a single best metric. It is what I propose in this white paper, advocating the use of an individual annualised h-index.
The second heuristic that Bornmann discusses is the “recognition heuristic”, which suggests that if a decision-maker recognises the object being evaluated, this object is assigned a higher value with respect to the decision-making criterion – such as performance evaluation, or decisions about funding, paper acceptance, or citing a work – than an object that is not recognised.
In contrast to the idealised view of peer review, I argue that the reality of peer review also involves the use of heuristics and the use of the recognition heuristic in particular. This heuristic is (sub-consciously) applied in many performance evaluation decisions.
At the university level, in e.g. reputation surveys conducted by ranking organizations, it is likely to favour not only universities with long histories, such as Oxford or Cambridge, but also universities located in well-known cities or parts of cities (e.g., Amsterdam, Westminster) or those whose name reminds academics of famous universities such as Oxford Brookes or Nottingham Trent.
[Detour: My own university, Middlesex University, has a particularly unfortunate name in this context. It refers to a county that was abolished nearly 60 years ago – the Middle Saxon Province of the Anglo-Saxon Kingdom of Essex – yet we still carry the county’s coat of arms as our logo. Even many British academics have no idea what our name refers to.]
The recognition heuristic may also be applied to academics when making decisions about journal submissions or which articles to cite. Academics with a strong “brand image” and name recognition, like me, will profit from this. However, the recognition heuristic is likely to work against academics with non-Western and/or common names. Western academics (the currently still dominant group in academia) are often unable to remember these names and/or are unable to tell them apart.
The same recognition heuristic might also be applied to the evaluation of articles - and thus ultimately their authors - through the journal/university recognition heuristic. Articles published in top journals by academics affiliated with top universities are likely to receive a higher evaluation, regardless of their actual quality. Moreover, the recognition heuristic may also be applied to research contexts, such as industries or countries. Contexts that match those of key decision-makers, typically academics in Western countries studying for-profit organisations, may receive higher evaluations.
Finally, a variant of the recognition heuristic may also be at play when evaluating academics (or their papers), operating through demographic characteristics. Academics who don’t meet the “traditional” stereotype – Western white male – of a scientist, i.e. are not “recognised” as legitimate occupants of this role, are likely to be assigned a lower value than those that do.
Peer review thus includes heuristics that are similar in function to metrics, i.e., they are “reductionist signals” of an underlying concept. However, the recognition heuristic in peer review has a strong potential for biased decision-making and a reinforcement of the Matthew principle. Therefore, the use of the recognition heuristic in research evaluation should be strongly discouraged as it privileges subjective rather than objective performance.
At the same time, metrics are a form of peer review too. Citations result from peers reviewing an article and finding it useful enough to cite. Hence, instead of seeing metrics and peer review as two irrevocably opposed camps, I suggest the use of ecologically rational heuristics such as the hIa to provide a quantitative, and relatively objective, foundation for the inherently more qualitative and oftentimes subjective peer review.
Conditions for an effective ERH metric
There are two key conditions that the “take-the-best” heuristic would need to comply with in the context of research evaluation and career decision points in academia. It would need to:
- Allow for a fair comparison across disciplines and career stages, and be unbiased with respect to demographic characteristics such as gender.
- Display predictive power for typical decision points in academic careers, such as promotion and mobility, as well as the allocation of awards.
In this white paper, I explore the usefulness of a specific metric in this context: the individual annualised h-index. I explore the two criteria in detail and find that the hIa performs well as an ecologically rational metric. However, before reporting these results, the next section describes the methods applied.
Methods
Sample
In bibliometric analyses, there are two ways to ensure generalisable results. The first is to use a large sample with heterogeneous data, so that individual idiosyncrasies are averaged out. The second is to use a much smaller, purposefully matched sample, which controls for extraneous effects and focuses on the variables of interest – in this case discipline, career stage, and gender. In our original test of the usefulness of the hIa, published in Scientometrics as Harzing, Alakangas & Adams (2014), we opted for the latter.
We used data for associate professors and full professors at the University of Melbourne, one of the world’s top-40 universities according to the THE World University ranking. Our sample includes two associate professors and two full professors in each of the 37 disciplines that are represented at this university, grouped into five major disciplinary fields. The final sample consisted of 146 academics; two professors in Law and Physics were removed from the final sample as their publication patterns were very uncharacteristic of their field. The much larger number of observations in the (Life) Sciences reflects the dominance of these disciplines at the University of Melbourne.
- Humanities: Architecture, Building and Planning; Culture and Communication; History; Languages and Linguistics; Law (19 observations),
- Social Sciences: Accounting and Finance; Economics; Education; Management and Marketing; Psychology; Social and Political Sciences (24 observations),
- Engineering: Chemical and Biomolecular Engineering; Computing and Information Systems; Electrical and Electronic Engineering; Infrastructure Engineering; Mechanical Engineering (20 observations),
- Sciences: Botany; Chemistry; Earth Sciences; Genetics; Land and Environment; Mathematics; Optometry; Physics; Veterinary Sciences; Zoology (39 observations),
- Life Sciences: Anatomy and Neuroscience; Audiology; Biochemistry and Molecular Biology; Dentistry; Obstetrics and Gynaecology; Ophthalmology; Microbiology; Pathology; Physiology; Population Health (44 observations).
Within each sub-discipline, individuals were randomly selected, although a preference was given to individuals with unique names to avoid author disambiguation problems. Where possible, one male and one female academic were selected at each level. In some disciplines this proved to be infeasible, because of the shortage of female academics at senior levels.
The original research project (Harzing & Alakangas, 2016) compared three data sources (Web of Science, Scopus, and Google Scholar) and search queries were refined on an iterative basis through a detailed comparison of the results for the three databases. This white paper only uses the Google Scholar data. Data were collected through Publish or Perish, a free software programme that retrieves and analyses academic citations and calculates a variety of metrics that can be exported to Excel for further analysis.
Our sample provides an excellent test case as all academics work for the same university, a university that displays excellence in all its disciplines. Thus (associate) professors in the various disciplines are expected to display similar performance levels.
The ability to provide a reliable comparison across disciplines is further enhanced by the fact that this university has very formalised, standardised, and centralised procedures for internal promotion. Two-thirds of the academics in our sample had gone through at least two internal promotions, whereas only 18% had been appointed at their current level from outside the university.
Even within the same university one might still expect variance in individual academics’ hIa indices as publication and citation metrics are not the only criteria for promotion, and promotion criteria only constitute minimum standards. However, the hIa-index should remove much of the variance that is attributable simply to disciplinary and career length differences.
Metrics: citations, h-index, and hIa
In my analyses I compare three distinct metrics. The first and simplest metric is the total number of citations to an academic’s oeuvre. Second, the h-index is the best-known and most-used “second order” metric that combines publications and citations. It is defined as follows.
A scientist has an index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each.
The h-index can be “eye-balled” easily by sorting an academic’s papers from most to least cited and counting down the list until a paper’s rank exceeds its number of citations; the h-index is the last rank before that point. The h-index combines an assessment of quantity (number of papers) with an approximation of quality (impact, or citations to these papers). An academic cannot have a high h-index without publishing a substantial number of papers. However, this is not enough: these papers need to be cited in order to count towards the h-index.
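For readers who prefer code to prose, this counting procedure can be expressed as follows. This is a minimal sketch in Python; the function name and input format are mine, purely for illustration.

```python
def h_index(citations: list[int]) -> int:
    """h-index: the largest h such that h papers have at least h citations each."""
    # Sort papers from most- to least-cited, then count down the list
    # until a paper's rank exceeds its citation count.
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h-index of `rank`
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations give an h-index of 4:
# four papers have at least 4 citations each, but there are not five
# papers with at least 5 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```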
As such, the h-index is said to be preferable over the total number of citations as it corrects for “one-hit wonders”, i.e. academics who might have authored (or co-authored) one or a small number of highly cited papers, but have not shown sustained and durable academic performance. It is also preferable over the number of papers as it corrects for academics who write many papers that are rarely or never cited. Hence the h-index favours academics who publish a continuous stream of papers with lasting and above-average impact.
A shortcoming of the h-index is that it cannot be used to compare academics at different career stages or academics that work in different disciplines. It is obvious that there will be large differences between junior and senior academics in terms of the h-index as papers of junior academics have not yet had enough time to accumulate citations. Especially in the Social Sciences and Humanities it might take more than five years before a paper acquires a significant number of citations.
However, it might not be obvious to all readers that there are also large differences in typical h-index values between disciplines. These differences are caused by the fact that academics in the Life Sciences and Sciences typically publish more (and shorter) articles, and do so with a large number of co-authors. Academics in the Social Sciences and Humanities typically publish fewer (and longer) articles (or books) and publish with a smaller number of co-authors. Academics in Engineering typically fall between these two extremes.
Therefore, a metric that corrects for these career stage and disciplinary differences provides information that the h-index cannot deliver. Harzing, Alakangas & Adams (2014) introduced such a metric, namely the hI,annual (or hIa for short). This index corrects the h-index for both career length and differences in the level of co-authorship and is one of the standard metrics reported by the free Publish or Perish software. It is calculated as follows:
hIa = hI,norm / academic age, where:
- hI,norm: normalise the citation count of each paper by dividing it by the number of authors of that paper, and then calculate the h-index of the normalised citation counts,
- academic age: the number of years elapsed since an academic’s first (significant) publication.
The hIa-index thus measures the average number of single-author equivalent h-index points that an academic has accumulated in each year of their academic career. A hIa of 1.0 means that an academic has consistently published one article per year that, when corrected for the number of co-authors, has accumulated enough citations to be included in the h-index. Someone who co-publishes with others will not need to publish more articles to achieve the same hIa as an academic who publishes single-authored articles. However, the co-authored articles will need to gather more citations to become part of the hIa as the article’s citations will be divided by the number of co-authors.
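To make the two-step calculation concrete, here is a minimal sketch in Python. The input format – (citations, number of authors, publication year) per paper – and the exact academic-age convention are my assumptions; Publish or Perish computes the hIa for you.

```python
from datetime import date

def hia(papers: list[tuple[int, int, int]], current_year: int | None = None) -> float:
    """hIa = hI,norm / academic age, from (citations, n_authors, pub_year) tuples."""
    if current_year is None:
        current_year = date.today().year
    # Step 1: author-normalised citation counts, sorted from high to low.
    normalised = sorted((c / a for c, a, _ in papers), reverse=True)
    # Step 2: h-index of the normalised counts, i.e. hI,norm.
    hi_norm = 0
    for rank, cites in enumerate(normalised, start=1):
        if cites >= rank:
            hi_norm = rank
        else:
            break
    # Step 3: divide by academic age, here taken as years since the first
    # publication, counting the first year itself (conventions differ slightly).
    academic_age = current_year - min(year for _, _, year in papers) + 1
    return hi_norm / academic_age

# Hypothetical 20-year career: hI,norm = 3, academic age = 20, hIa = 0.15.
papers = [(40, 2, 2005), (30, 3, 2010), (12, 1, 2015), (9, 3, 2020)]
print(round(hia(papers, current_year=2024), 2))  # 0.15
```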
As a metric, the hIa is slightly more complicated than the h-index and cannot readily be calculated by “eyeballing” the data. However, even if academics cannot replicate it quickly, replication is possible for anyone with basic numerical skills. Moreover, its principles – dividing a paper’s citations by the number of co-authors and dividing the resulting individual h-index by the number of years an academic has been active – are easy to understand. As Franceschini & Maisano (2010: 495) argued, “an indicator is successful not only if it is effective, but also if it is easily understood”.
Data cleaning
Many studies have shown that Google Scholar provides much more comprehensive coverage across disciplines than commercial databases such as Scopus and the Web of Science. However, Google Scholar is not a bibliographic database: it finds publications by crawling websites that it deems to be academic. Errors in publication metadata therefore do occur. Typically, these do not have much impact on the calculation of citation metrics.
However, the hIa is quite sensitive to an exact assessment of an academic’s academic age. In some cases, Google Scholar gets the year of publication wrong because of a parsing error. Moreover, for some academics Google Scholar may find a Master’s or PhD thesis, a repository paper, or other publications that are not considered refereed academic publications. Including these publications could in some cases dramatically increase the academic age of the academic in question, thus depressing their hIa. Hence, in our study every academic’s record was manually cleaned by sorting publications by year and removing any publications that predated the first “proper” academic publication.
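In code, this cleaning step amounts to something like the sketch below; the record format and function name are hypothetical, as the actual cleaning was done by hand in Publish or Perish.

```python
def clean_record(papers: list[dict], first_proper_year: int) -> list[dict]:
    """Drop publications dated before the first 'proper' refereed publication."""
    # Sorting by year first makes mis-parsed dates and early non-refereed
    # items (theses, repository papers) easy to spot in a manual check.
    papers = sorted(papers, key=lambda p: p["year"])
    return [p for p in papers if p["year"] >= first_proper_year]
```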
Descriptive results
In this white paper, I am mainly interested in relative comparisons. However, some readers might be interested in the development of the absolute metrics over the years. Therefore, in the table below I present an overview of the three key metrics at four data points: July 2013, April 2017, December 2021, and November 2023. In addition, the average increase per year was calculated (taking the differential number of months between data points into account).
All three metrics display an increase over the years, but raw citations clearly increase more strongly than the h-index, which in turn shows a stronger increase than the hIa. This is to be expected. Increasing one’s h-index requires the additional citations to be evenly distributed over both newer and older work. Increasing this metric also becomes harder once it is at a high level as more citations are needed for additional publications to enter the h-index. As expected, the hIa is relatively stable over the years as any increase in the numerator (the individual h-index) is mitigated by an increase in the denominator (academic age).
The table above also illustrates that the yearly increase in citations has itself grown over the years. This is a general pattern in academic publishing, caused by the strong expansion in the number of research-active academics over the past 10 years. In contrast, the yearly increase in the h-index and the hIa is quite stable: between 2013 and 2021 it was around 2 per year for the h-index and 0.015 to 0.02 per year for the hIa. This means that academics who exceed this level of growth could be considered top performers.
Note that – apart from citations – the growth in metrics between 2021 and 2023 is lower than would be expected based on prior years. The likely cause is that – after 10 years – some academics in our sample have retired or even passed away. Nearly a quarter had not published in the last year, and more than 10% had not published in the last two years. Moreover, as indicated above, it becomes increasingly hard to increase the hIa in the later stages of an academic’s career: one needs to increase one’s individual h-index enough to counteract an ever-increasing academic age. Hence, even maintaining one’s hIa past mid-career is a significant achievement.
Unbiased metrics: disciplines, career stages, gender
Our sample consists of academics who are expected to be uniformly high performing, working as they do in a world top-40 university with strict promotion criteria. Therefore, although we can expect raw citations and the h-index to differ between disciplines and career stages (Associate or full Professor), we would expect the hIa to be similar across these categories. We would also expect each of the metrics to be similar across genders.
We are interested in relative comparisons across disciplines, career stages, and gender rather than in absolute differences. Moreover, the three metrics are of very different magnitudes: the average h-index is more than 60 times as high as the average hIa, and average citations are more than 10,000 times as high. Thus, in order to be able to compare the performance of the three metrics, we converted them to the same scale.
For the disciplinary comparison, we set the reference group that had the highest metrics (Life Sciences) to 100 and compared the other disciplines to this. To compare Associate Professors with full Professors and female with male academics, we likewise set the reference group that had the highest metrics to 100. As this was a comparison of two categories only, we only report the non-reference category (Associate Professors and female academics).
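Concretely, this rescaling simply divides each group’s mean by the reference group’s mean; a sketch with made-up numbers:

```python
def index_to_reference(group_means: dict[str, float]) -> dict[str, float]:
    """Rescale group means so that the highest-scoring group sits at 100."""
    reference = max(group_means.values())
    return {group: 100 * mean / reference for group, mean in group_means.items()}

# Hypothetical h-index means; only the ratios matter.
print(index_to_reference({"Life Sciences": 50.0, "Humanities": 20.0}))
# {'Life Sciences': 100.0, 'Humanities': 40.0}
```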
Disciplinary differences
As is clearly evident from Figure 1, there are considerable disciplinary differences for citations and the h-index. For citations, the Humanities average is barely a fifth of the Life Sciences average; for the h-index it is just under 40%. For the Social Sciences, average citations are less than half of the Life Sciences average, and the h-index runs to only slightly more than half. The difference for Engineering and the Sciences is less stark, but their performance still never reaches more than three quarters of the Life Sciences average.
Figure 1: Citations, h-index and hIa across five disciplinary areas (2023). Life Sciences = 100
The picture is very different for the hIa, however. Four of the five disciplines are substantively similar, with the Social Sciences even exceeding the Life Sciences average. This unexpected result is caused mainly by the fact that the two highest-performing academics on this metric – both with a hIa of around 2.0, i.e. nearly three times the average hIa – are working in the Social Sciences.
However, even without these two academics, the Social Sciences average sits at 96, i.e. very similar to the Life Sciences. The average hIa for Humanities scholars is 21–37% lower than for the four other disciplines – a much smaller gap than for citations or the h-index. Hence, the hIa presents a fairer comparison across the five disciplines and can be accepted as an ecologically rational heuristic in cross-disciplinary comparisons.
Career stage differences
Citation metrics will obviously differ between career stages too. One cannot expect a junior academic to have the same citation metrics as a full Professor. Therefore, assessing whether someone is performing well for their career stage is often hard to do. The hIa corrects not just for disciplinary differences, but also for differences in career length. We would therefore expect those who started out as Associate Professors in 2013 – and who on average had started publishing 7 years later than the full Professors – to do better on the hIa than on the other metrics.
As Figure 2 shows this is indeed the case. Those academics who started out as Associate Professor at the start of our data collection on average reported well under half of the citations of full Professors and roughly two thirds of their h-index. However, their average hIa is much closer, reaching more than 80% of the hIa of full Professors.
When ranking our full sample on the three metrics in 2023, only one Associate Professor (in the Life Sciences) makes it to the top-10 for total citations and the h-index. In contrast, the ranking by hIa includes no less than five Associate Professors in the top-10. In addition to the same Life Sciences academic, this includes three academics in the Social Sciences and one in the Sciences.
Incidentally, by 2023 all five of these former Associate Professors had been promoted externally to full Professor (for details see the next main section). Therefore, the hIa can be accepted as an ecologically rational heuristic in comparisons across career stages.
Figure 2: Citations, h-index and hIa across career stages & gender (2023). Professor/Male set at 100.
Gender differences
Metrics are often thought to disadvantage female academics. There are various reasons for this, including the fact that men tend to cite their own work more often and are more likely to cite other men. Moreover, women often publish with fewer co-authors, and the number of co-authors is typically positively related to citations. Hence, this may result in lower citation levels for women.
In our sample, we find that male and female academics perform relatively similarly on all metrics. However, average citations and the h-index are still 10% lower for female academics. The hIa, in contrast, is nearly identical for both genders. There are two reasons for this. First, the female academics in our sample on average had a slightly lower academic age (2.7 years shorter) and have thus had less time to build up their citation profile. Second, they do indeed have slightly fewer co-authors.
When ranking our sample on citations, only three women made it into the top-10; based on the h-index this is five, and based on the hIa it is six. The hIa can thus be accepted as an ecologically rational heuristic in comparisons across genders.
Predictive power: promotion, mobility, and awards
Ecological rationality not only concerns equitable comparisons across disciplines, career stages, and gender; it can also be linked to key career-related decisions in academia, such as promotion across the ranks, external mobility, and the receipt of major awards. Metrics that can distinguish effectively between those who are promoted, mobile, or given awards and those who are not can be said to have ecological rationality.
In this section we therefore look at the predictive power of the various metrics, reviewing whether they can be used to predict promotion, external mobility, and the allocation of awards. As in the previous section, we are interested in relative comparisons between academics that have (not) been promoted, have (not) been mobile, and have (not) been awarded. Hence, in each case we set the “no change” base group to 100 and compared the “change” group with this. To establish the predictive power of the various metrics we compare the 2013 metrics for our sample with the 2023 metrics.
Are those promoted to full Professor outperforming the other Aspros?
Most research-intensive universities only promote someone to Associate Professor if the expectation is that the applicant will continue to excel and – in due course – will be promoted to Full Professor. Indeed, just over two thirds of the academics who started out as Associate Professors at the start of our data collection period in 2013 had been promoted to full professor by the end of our data collection period in 2023.
We would expect those who were promoted to outperform those not promoted. This was indeed the case (see Figure 3). In 2023, all three metrics do seem to discriminate effectively between those promoted and those not promoted, with citations 89% higher, the h-index 40% higher, and the hIa 61% higher for those who were promoted.
Figure 3: Citations, h-index, hIa and promotion between 2013 and 2023. Non-promotion set to 100.
What is striking, however, is that in 2013 only the hIa differed substantively between those Associate Professors who were eventually promoted and those who were not. Citations and the h-index only differed by 5-7% between the two groups, whereas the hIa was nearly 40% higher for the group that was ultimately promoted.
Looking at the other side of the coin, only two of the academics who are still at the Associate Professor level (8%) have a hIa higher than the average of the group that was promoted. Conversely, only six of those who were promoted to full Professor (12%) had a lower hIa than the average of the group that was not promoted.
Publications and citations are only one aspect of academic performance considered in promotion applications. In the research area, funding is an important additional criterion, especially in the Life Sciences (the field of both non-promoted academics with an above-average hIa). Moreover, even in research-intensive universities, outstanding performance in teaching, external engagement, and leadership may compensate for a slightly lower level of research performance. Hence, the hIa appears to have considerable predictive power for eventual promotion.
Moreover, in 2023 the hIa of those who were promoted to full Professor was very close to that of those who had started the data collection period as full Professors (0.71 vs 0.76). In contrast, citation levels and the h-index of the newly minted Professors were still substantively lower (6,302 vs 12,761 and 36.8 vs 51.3). This is only natural, as these metrics are heavily influenced by academic age, which for the newly minted Professors was on average 10 years lower. All in all, the hIa can thus be considered an ecologically rational heuristic in promotion decisions.
Are high-performing academics more likely to be mobile?
High-performing academics are more likely to change institutions because of a variety of push and pull factors. The University of Melbourne, however, has long been Australia’s top-ranked institution and is ranked in the top-40 worldwide. Moreover, Melbourne is regularly ranked as one of the most liveable cities in the world.
Hence, working at this institution has often been compared to a “golden cage”, and average turnover is very low, especially at senior levels. Only twenty academics (i.e. fewer than one in seven) had left the institution by 2023. Twelve of those were Associate Professors, eleven of whom were promoted upon changing jobs.
In our sample, mobile academics do indeed tend to be high performers. Of the top-10 academics with the highest hIa in 2023, eight had moved institutions. This was true for only three of the top-10 performers on citations and two of the top-10 performers on the h-index. More generally, as Figure 4 shows, those who moved institutions outperform those who stayed put on all three metrics.
Figure 4: Citations, h-index, hIa and mobility between 2013 and 2023. Non-mobile set to 100.
In parallel with our findings for promotion, however, in 2013 only the hIa differed substantively between those who went on to leave the institution and those who stayed. Whereas the difference between mobile and non-mobile academics was only 11% for citations and 2% for the h-index, it was 50% for the hIa.
In 2023, the other metrics had caught up; the difference between the mobile and non-mobile group was even larger for citations than for the hIa. However, as with promotion, the hIa has the highest predictive power for mobility and can thus be considered an ecologically rational heuristic to predict who is likely to leave an institution.
There was considerable variety in turnover across disciplines, however. None of the academics in Engineering left the University of Melbourne, whereas this was the case for one in ten in the Sciences, one in seven in the Life Sciences, one in five in the Humanities, and no less than one in four in the Social Sciences (two thirds of whom were in Business).
Given the small numbers involved, firm conclusions are impossible. However, as Research Dean for the Faculty of Economics & Business at the University of Melbourne, I regularly crossed swords with our Life Scientist Pro-VC Research. He was convinced our Faculty was underperforming and demanded we double our publication metrics. This was even though we ranked in the top-20 worldwide and doubling our publication output would mean leaving Harvard University at #1 well behind us 😊.
I therefore venture the speculation that a lack of discipline-corrected metrics might have led to an underestimation of the performance of academics in the Humanities & Social Sciences, and thus resulted in higher levels of turnover in these disciplines. Hence, universities that would like to retain their high-performing academics in the Social Sciences & Humanities might do well to use the hIa as an ecologically rational heuristic, rather than castigate these academics for their “low” publication and citation levels.
Are high-performing academics more likely to receive awards?
Research awards can be expected to be concentrated among high-performing academics. Awards in academia come in many forms and shapes, including best paper awards and Fellowships. Unfortunately, I do not have access to this type of information for our sample. However, the University of Melbourne has a system of Distinguished Professorships (Redmond Barry and Melbourne Laureate Professors) awarded to those pre-eminent in research.
Thirteen professors in our sample were recognised in this way. As is clear from Figure 5, these professors did outperform those who were not awarded on all three metrics, both in 2013 and in 2023. However, unlike for promotion and mobility, the hIa had the lowest predictive power for this type of award, with the total number of citations being a much more significant indicator.
Figure 5: Citations, h-index, hIa and awards between 2013 and 2023. Not awarded set to 100.
Given that this type of distinction is usually conferred in the later stages of one’s career, this is only natural: citations are the only metric that continues to show an upward trajectory. As discussed above, the h-index is hard to increase beyond mid-career, and with advancing years the hIa may even decline. Academics who had received this distinction had an average academic age of 37, i.e. they had started publishing 37 years ago. Hence, in terms of lifetime awards, the total number of citations is the most ecologically rational heuristic.
Sensitivity checks: alternative metric and data source
A disadvantage of the hIa is that it is quite sensitive to the correct assessment of “academic age”, i.e. the year in which an academic first started publishing. Especially when using Google Scholar, publication data can be quite “noisy”, requiring extensive data cleaning (see above). This could be prevented if academics made a concerted effort to curate their Google Scholar Profiles, correcting inaccurate publication dates and removing early non-refereed publications.
However, an alternative would be to use an individualised version of the recently introduced ha-index (Fassin, 2023), defined as the largest number of papers ha published by a researcher that have obtained at least ha citations per year on average. This metric could be individualised by dividing it by an author’s average number of co-authors.
This effectively reverses the order of the two corrections: the ha corrects first for age (of the papers) and then for authorship, whereas the hIa corrects first for authorship and then for age (of the academic). Re-running the analyses with the individualised ha provided substantively similar results, although the Social Sciences and Humanities did even better with this index, now outperforming the three other disciplines.
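A sketch of this alternative, with the same hypothetical input format as before. The individualisation step – dividing the ha by the average number of authors per paper – follows the suggestion above; Fassin’s exact conventions may differ.

```python
def ha_index(papers: list[tuple[int, int, int]], current_year: int) -> int:
    """Fassin's ha-index: the largest ha such that ha papers have averaged
    at least ha citations per year since publication."""
    # Citations per year for each paper, counting the publication year itself.
    rates = sorted((c / (current_year - y + 1) for c, _, y in papers), reverse=True)
    ha = 0
    for rank, rate in enumerate(rates, start=1):
        if rate >= rank:
            ha = rank
        else:
            break
    return ha

def ha_individual(papers: list[tuple[int, int, int]], current_year: int) -> float:
    """Individualised ha: the ha divided by the average number of authors
    per paper (one possible reading of 'average number of co-authors')."""
    mean_authors = sum(a for _, a, _ in papers) / len(papers)
    return ha_index(papers, current_year) / mean_authors
```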
With some caveats, Google Scholar has been accepted as a useful data source for bibliometric research; to date nearly 8,000 publications have used the PoP software for this purpose. However, as a sensitivity check, I conducted the same analyses using the more structured Microsoft Academic data (see also: Microsoft Academic is one year old: the Phoenix is ready to leave the nest). I was last able to do this in 2021 as Microsoft discontinued its offering from January 2022. On average Microsoft Academic citations amounted to 97% of the Google Scholar citations and the resulting metrics were almost identical to GS-based metrics.
Conclusion
In this white paper I explored the usefulness of the individual annualised h-index (hIa for short) as an ecologically rational heuristic. I showed that this metric is unbiased across disciplines, career stages, and gender, and has substantial predictive power for typical decision points in academic careers, such as promotion and mobility.
For the allocation of awards, however, citations appear to be the most ecologically rational metric. Although metrics should never be the only tool in the box, the hIa does appear to hold considerable promise as an ecologically rational heuristic, and is an effective and efficient tool in the research evaluation toolbox.
References
- Bornmann, L., Ganser, C., & Tekles, A. (2022). Simulation of the h index use at university departments within the bibliometrics-based heuristics framework: Can the indicator be used to compare individual researchers? Journal of Informetrics, 16(1), 101237.
- Fassin, Y. (2023). The ha-index, the average citation h-index. Quantitative Science Studies, 4(3), 756-777.
- Franceschini, F., & Maisano, D. A. (2010). Analysis of the Hirsch index’s operational properties. European Journal of Operational Research, 203(2), 494-504.
- Harzing, A. W., Alakangas, S., & Adams, D. (2014). hIa: An individual annual h-index to accommodate disciplinary and career length differences. Scientometrics, 99(3), 811-821.
- Harzing, A. W., & Alakangas, S. (2016). Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison. Scientometrics, 106(2), 787-804.
Anne-Wil Harzing is Emerita Professor of International Management at Middlesex University, London. She is a Fellow of the Academy of International Business, a select group of distinguished AIB members who are recognized for their outstanding contributions to the scholarly development of the field of international business. In addition to her academic duties, she also maintains the Journal Quality List and is the driving force behind the popular Publish or Perish software program.