Reflections on the h-index
© Copyright 2007-2008 Anne-Wil Harzing. All rights reserved.
Third version, 23 April 2008
Since Hirsch first proposed the h-index in 2005 (Hirsch, 2005), this new measure of academic impact has generated widespread interest.
The h-index is defined as follows:
A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each.
It aims to measure the cumulative impact of a researcher's output by looking at the number of citations his/her work has received. Hirsch argues that the h-index is preferable to other single-number criteria, such as the total number of papers, the total number of citations, and citations per paper. However, Hirsch provides a strong caveat:
Obviously a single number can never give more than a rough approximation to an individual's multifaceted profile, and many other factors should be considered in combination in evaluating an individual. This and the fact that there can always be exceptions to rules should be kept in mind especially in life-changing decisions such as the granting or denying of tenure.
The advantage of the h-index is that it combines an assessment of both quantity (number of papers) and quality (impact, or citations to these papers) (Glänzel, 2006). An academic cannot have a high h-index without publishing a substantial number of papers. However, this is not enough. These papers need to be cited by other academics in order to count for the h-index.
As such, the h-index is said to be preferable to the total number of citations, as it corrects for "one-hit wonders", i.e. academics who might have authored (or co-authored) one or a limited number of highly cited papers, but have not shown a sustained and durable academic performance. It is also preferable to the total number of papers, as it corrects for papers that are not cited. Hence the h-index favours academics who publish a continuous stream of papers with lasting and above-average impact (Bornmann & Daniel, forthcoming).
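The definition above translates directly into a short calculation. The following is a minimal sketch (not part of the original paper, and not the Publish or Perish source code): rank the papers by citation count and find the largest rank h at which the paper still has at least h citations.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # the paper at this rank still has >= rank citations
        else:
            break      # all remaining papers have even fewer citations
    return h

# Five papers with 17, 9, 4, 2 and 1 citations: exactly three papers
# have at least 3 citations each, so the h-index is 3.
print(h_index([17, 9, 4, 2, 1]))  # prints 3
```

This makes the quantity/quality combination concrete: a sixth uncited paper would not change the result, and neither would doubling the citations of the top paper.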
The h-index has been found to have considerable face validity. Hirsch calculated the h-index of Nobel prize winners and found 84% of them to have an h-index of at least 30. Newly elected members of the National Academy of Sciences in Physics and Astronomy in 2005 had a median h-index of 46.
Bornmann & Daniel (2005) found that on average the h-index for successful applicants for postdoctoral research fellowships was consistently higher than for unsuccessful applicants. Cronin & Meho (2006) found that faculty rankings in information science based on raw citation counts and on the h-index showed a strong positive correlation, but claim that the h-index provides additional discriminatory power. Van Raan (2006) calculated the h-index for 147 chemistry research groups in the Netherlands and found a correlation of 0.89 between the h-index and the total number of citations. Both the h-index and more traditional bibliometric indicators also related in a quite comparable way to peer judgements.
Finally, maybe the strongest indication that the h-index is becoming a generally accepted measure of academic achievement is that ISI Thomson has now included it as part of its new citation report feature in the Web of Science. There are, however, several disadvantages to this implementation:
- It does not include citations to the same work that have small mistakes in their referencing (of which for some publications there are many);
- It only includes citations to journal articles (not to books, book chapters, working papers, reports, conference papers, etc.);
- It only includes citations in journals that are listed in the ISI Thomson database, which, especially for the Social Sciences and Humanities, covers only a small proportion of the academic journals in the field.
For a more detailed analysis of the advantages and disadvantages of the ISI Thomson Web of Science in comparison to alternatives such as Google Scholar, see the separate paper Google Scholar - a new data source for citation analysis.
The h-index is a less appropriate measure of academic achievement for junior academics, as their papers have not yet had the time to accumulate citations. Especially in the social sciences it might take more than five years before a paper acquires a significant number of citations. For junior academics, the impact factor of the journal they publish in might be a more realistic measure of eventual impact.
However, the h-index should provide a more realistic assessment of the academic achievement of academics who started publishing at least 10 years ago. I would argue that for these more senior academics, assessing the impact of their own publications is preferable to assessing the impact of the journals they publish in.
One way to facilitate comparisons between academics with different lengths of academic careers is to divide the h-index by the number of years the academic has been active (measured as the number of years since the first published paper). Hirsch (2005) proposed this measure and called it m.
However, we should note that m generally does not stabilise until later in one's career and that for junior researchers (with low h-indices) small changes in the h-index can lead to large changes in m. In addition, as Hirsch indicates, the first paper may not always be the appropriate starting point, especially if it was a minor contribution that was published well before the academic achieved sustained productivity. Moreover, m discriminates against academics who work part-time or have had career interruptions (generally women). However, in some cases m might be a useful additional metric to evaluate an academic's achievement.
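Hirsch's m is simply the h-index divided by the length of the academic career in years. A minimal sketch follows; the function name is mine, and whether the first publication year itself counts as a full year is a convention choice (here the quotient for an academic in their first year uses a floor of one year, to avoid division by zero):

```python
from datetime import date

def m_quotient(h, first_pub_year, current_year=None):
    """Hirsch's m: the h-index divided by the number of years
    since the academic's first published paper."""
    if current_year is None:
        current_year = date.today().year
    career_years = max(current_year - first_pub_year, 1)
    return h / career_years

# An academic with h = 20 whose first paper appeared 10 years ago:
print(m_quotient(20, 1998, current_year=2008))  # prints 2.0
```

The instability noted above is visible here: for a junior researcher with career_years = 2, a one-point change in h moves m by 0.5.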
The h-index ignores the number of citations to each individual article beyond what is needed to achieve a certain h-index. Hence an academic with an h-index of 5 could theoretically have a total of only 25 citations (5 for each paper), but could also have more than 1,000 citations (4 papers with 250 citations each and one paper with 5 citations).
In reality these extremes will be unlikely. However, once a paper belongs to the top h papers, its subsequent citations no longer count. Such a paper can double or triple its citations without influencing the h-index (Egghe, 2006). Hence, in order to give more weight to highly-cited articles Leo Egghe (2006) proposed the g-index. The g-index is defined as follows:
[Given a set of articles] ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations.
Although the g-index has not yet attracted much attention or empirical verification, it would seem to be a very useful complement to the h-index.
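The g-index can be sketched along the same lines as the h-index, using cumulative citation counts. Note one caveat: in Egghe's formulation the g-index may exceed the number of published papers (by padding with fictitious uncited papers); the simpler version below caps g at the number of papers.

```python
def g_index(citations):
    """Return the g-index: the largest rank g such that the top g
    papers together received at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites                  # cumulative citations of top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# For citation counts [10, 6, 4, 2, 1] the h-index is 3, but the
# cumulative counts (10, 16, 20, 22, 23) still reach 4*4 = 16 at
# rank 4, so g = 4: the highly cited top paper earns extra credit.
print(g_index([10, 6, 4, 2, 1]))  # prints 4
```

This illustrates how g rewards a citation distribution with a heavy top, which the h-index deliberately ignores.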
A disadvantage of the h-index is that it cannot decline. That means that academics who retire after 10-20 active years of publishing maintain their high h-index even if they never publish another paper.
In order to address this issue, Sidiropoulos et al. (2006) propose the contemporary h-index. The contemporary h-index adds an age-related weighting to each cited article, giving (by default; this depends on the parametrization) less weight to older articles. The weighting is parametrized; the Publish or Perish implementation uses gamma=4 and delta=1, as the authors did in their experiments. This means that the citations to an article published in the current year count four times, the citations to an article published 4 years ago count only once, the citations to an article published 6 years ago count 4/6 times, and so on.
For junior academics the contemporary h-index is generally close to their regular h-index as most of the papers included in their h-index will be recent. For more established academics there can be a substantial difference between the two indices, indicating that most of the papers included in their h-index have been published some time ago. As such the contemporary h-index often provides a slightly fairer comparison between junior and senior academics than the regular h-index.
Hirsch (2005) indicates that there will be large differences in typical h-values in different fields. Academic disciplines differ in the average number of references per paper and the average number of papers published by each academic.
As a general rule of thumb h-indices are much higher in the Natural Sciences than in the Social Sciences and Humanities, although there is a large variability even within these fields.
Podlubny (Podlubny, 2005 and Podlubny and Kassayova, 2006) showed that for nine broadly defined disciplines the average ratio of total citations to the number of citations in mathematics varied considerably (Mathematics: 1, Engineering/technology: 5, Biology: 8, Earth/space sciences: 9, Social/behavioral sciences: 13, Chemistry: 15, Physics: 19, Biomedical Research: 78, Clinical Medicine: 78).
Similarly, Iglesias & Pecharromán (2006) calculated the average number of citations per paper in the 21 different ISI fields and used this to design a normalisation factor. Unfortunately, the discipline areas used in these two studies do not map closely enough onto the categories used by Google Scholar for their normalisation factors to be used in Publish or Perish. However, they do show that comparisons of bibliometric data across fields are generally inappropriate.
Part of the differences between disciplines are caused by the fact that academics in the Natural Sciences typically publish more (and often shorter) articles and also publish with a large number of co-authors, while academics in the Social Sciences and Humanities typically publish fewer (and longer) articles (or books) and publish with a smaller number of co-authors.
However, differences in the number of co-authors also seem apparent within the same discipline. For instance, North American academics tend to publish articles with a larger number of co-authors than European academics. Since 1990, papers in the North-American Academy of Management Journal on average have 2.24 authors, papers in the British Journal of Management 2.01 authors, and papers in the European Management Journal 1.84 authors.
Hirsch (2005) suggested that in the case of large differences in the number of co-authors, it might be useful to normalise the h-index by a factor that reflects the average number of co-authors.
Batista et al. (2006) suggest that, since papers with more authors generally receive more (self-)citations and since co-authorship behaviour is characteristic of individual disciplines, the individual h-index (hI) might serve to quantify an individual's scientific output across disciplines by indicating the number of papers an academic would have written, with at least hI citations each, if he/she had worked alone. They show that the average h-index for top Brazilian academics in physics, chemistry, biology/biomedicine and mathematics varies significantly, from 32-37 for Physics to 9-14 for Mathematics. When adjusted for co-authorship, the variability is significantly reduced, with hI-indices of 6.7-14.8 for Physics and 4-10.3 for Mathematics.
Batista et al.'s calculation adjusts the original h-index by dividing it by the mean number of authors of the h publications. Publish or Perish also implements an alternative individual h-index that takes a different approach: instead of dividing the total h-index, it first normalises the number of citations for each paper by dividing it by the number of authors of that paper, and then calculates the h-index of the normalised citation counts. This approach is much more fine-grained than Batista et al.'s; we believe that it more accurately accounts for any co-authorship effects that might be present and that it is a better approximation of the per-author impact, which is what the original h-index set out to provide.
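The two adjustments can be sketched side by side. This is an illustration of the descriptions above (function names are mine, not Publish or Perish identifiers); each paper is a (citations, number of authors) pair:

```python
def h_index(citations):
    """Plain h-index over a list of (possibly fractional) citation counts."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

def individual_h_batista(papers):
    """Batista et al.'s hI: h divided by the mean number of authors
    of the h papers in the Hirsch core."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = h_index([cites for cites, _ in ranked])
    if h == 0:
        return 0.0
    mean_authors = sum(authors for _, authors in ranked[:h]) / h
    return h / mean_authors

def individual_h_fractional(papers):
    """The alternative described above: normalise each paper's
    citations by its author count first, then take the h-index."""
    return h_index([cites / authors for cites, authors in papers])

# (citations, authors) for four papers; regular h-index is 3.
papers = [(12, 3), (9, 1), (6, 2), (2, 2)]
print(individual_h_batista(papers))     # prints 1.5  (3 / mean of 3, 1, 2 authors)
print(individual_h_fractional(papers))  # prints 3    (normalised counts 4, 9, 3, 1)
```

The contrast shows why the second variant is more fine-grained: it responds to which papers are co-authored, not just to the average author count of the Hirsch core.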
- Batista, P.D.; Campiteli, M.G.; Konouchi, O.; Martinez, A.S. (2006) Is it possible to compare researchers with different scientific interests? Scientometrics, vol. 68, no. 1, pp. 179-189.
- Bornmann, L.; Daniel, H-D. (2005) Does the h-index for ranking of scientists really work? Scientometrics, vol. 65, no. 3, pp. 391-392.
- Bornmann, L.; Daniel, H-D. (forthcoming) What do we know about the h index? Journal of the American Society for Information Science and Technology.
- Cronin, B.; Meho, L. (2006) Using the h-Index to Rank Influential Information Scientists, Journal of the American Society for Information Science and Technology, vol. 57, no. 9, pp. 1275-1278.
- Egghe, L.; Rousseau R. (2006) An informetric model for the Hirsch-index, Scientometrics, vol. 69, no. 1, pp. 121-129.
- Egghe, L. (2006) Theory and practice of the g-index, Scientometrics, vol. 69, No 1, pp. 131-152.
- Glänzel, W. (2006) On the opportunities and limitations of the H-index, Science Focus, vol. 1 (1), pp. 10-11.
- Hirsch, J.E. (2005) An index to quantify an individual's scientific research output, arXiv:physics/0508025 v5 29 Sep 2006.
- Iglesias, J.E.; Pecharromán, C. (2006) Scaling the h-Index for Different Scientific ISI Fields, arXiv:physics/0607224.
- Podlubny, I. (2005) Comparison of scientific impact expressed by the number of citations in different fields of science, Scientometrics, vol. 64, no. 1, pp. 95-99.
- Podlubny, I.; Kassayova, K. (2006) Law of the constant ratio. Towards a better list of citation superstars: compiling a multidisciplinary list of highly cited researchers, Research Evaluation, vol. 15, no. 3, pp. 154-162.
- Sidiropoulos, A.; Katsaros, C.; Manolopoulos, Y. (2006) Generalized h-index for disclosing latent facts in citation networks, arXiv:cs.DL/0607066 v1 13 Jul 2006.
- Raan, A.F.J. van (2006) Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgement for 147 chemistry research groups, Scientometrics, vol 67, no. 3, pp. 491-502.
- Publish or Perish
- Publish or Perish Tutorial
- The Publish or Perish Book
- Google Scholar - a new data source for citation analysis
- Reflections on norms for the h-index and related indices
- Google Scholar: the democratization of citation analysis?
- A Google Scholar h-Index for Journals
- Working with ISI data: Beware of Categorisation Problems
- Citation analysis across disciplines
Anne-Wil Harzing is Professor of International Management at Middlesex University, London. In addition to her academic duties, she also maintains the Journal Quality List and is the driving force behind the popular Publish or Perish software program.