Comparing the Google Scholar h-index with the ISI Journal Impact Factor

Proposes an alternative metric – Hirsch’s h-index – and data source – Google Scholar – to assess journal impact.

Prof. Anne-Wil Harzing, University of Melbourne
Dr. Ron van der Wal, Tarma Software Research

© Copyright 2007-2008 Anne-Wil Harzing and Ron van der Wal. All rights reserved.

Second version, 20 December 2008

A condensed version of this paper was published as:

  • Harzing, A.W.; Wal, R. van der (2009) A Google Scholar h-index for journals: An alternative metric to measure journal impact in Economics & Business?, Journal of the American Society for Information Science and Technology, vol. 60, no. 1, pp. 41-46. Available online...

Abstract

Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF).

This paper proposes an alternative metric – Hirsch’s h-index – and data source – Google Scholar – to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.

Introduction

Media rankings of Business Schools – such as the yearly BusinessWeek and Times Higher Education rankings of MBA programmes – have been identified as one of the key forces in the present academic landscape (see e.g. Gioia & Corley, 2002; Morgeson & Nahrgang, 2008; Segalla, 2008).

These business school rankings are based only in very small measure (typically 10-20%) on the quality of the research conducted. Even so, some business schools have started to provide financial incentives to staff for publication in the list of journals considered in these rankings (see e.g. McDonald & Kam, 2007). Moreover, other types of rankings of academic institutions, conducted by institutes such as the Shanghai Jiao Tong Institute of Higher Education (SJTUIHE) and the Centre for Science and Technology Studies (CWTS) at Leiden University, focus nearly entirely on research output, typically measuring publications and citations in a limited set of journals. As a result, rankings of journals have become increasingly important.

At an individual level, disseminating new knowledge through journal publication is usually one of the key criteria for appointment, tenure and promotion in universities (Bailey, 2002). Moreover, as indicated above, many universities now offer significant financial rewards for publication in top journals. Hence, journal publication and the assessment of journal quality are important not just for universities, but also for individual academics.

Although the creation of rankings of academic journals is common practice in many fields of research, the activity is not without contention or critique (see e.g. McDonald & Kam, 2007). Whilst recognising and sympathising with this critique, the present paper takes a pragmatic stance and reasons that as long as journal rankings are considered to be part of academic life, it is important to ensure that they are as comprehensive and objective as possible.

In general we can distinguish two broad approaches to ranking journals: stated preference (or peer review) and revealed preference (Tahai & Meyer, 1999). Stated preference involves members of a particular academic community ranking journals on the basis of their own expert judgements. Such rankings are often undertaken by particular universities or departments in order to help make decisions about, for example, library budgets, promotion or tenure, and national research evaluations, such as the Research Assessment Exercise in the UK.

As a result there are hundreds of individual university journal rankings in existence, and integrated journal ranking lists have sprung up that combine a range of rankings (see e.g. the British ABS Journal Quality Guide (ABS, 2007) and Harzing's Journal Quality List (Harzing, 2007)). Opinions might be based on anything from a large-scale worldwide survey of academics to the views of a small group of individuals with decision-making power, but will always contain some element of subjectivity (see also Groot & Garcia-Valderrama (2006) for a summary of the problems associated with peer review methods).

Revealed preference rankings are based on actual publication behaviour and generally measure the citation rates of journals using Thomson ISI's Web of Knowledge. Most commonly used are the ISI Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). The JIF is defined as the mean number of citations received in a particular year by articles published in the journal in the preceding two years: it is calculated by dividing the number of citations received in that year to articles the journal published in the previous two years by the number of articles published in those two years. As the selection of article titles in Table 1 shows, this statistic is by no means undisputed.
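
Expressed as a simple formula (our notation, for clarity):

    JIF(y) = (citations received in year y to items the journal published in years y-1 and y-2)
             / (number of source articles the journal published in years y-1 and y-2)

As a worked example with hypothetical numbers: a journal that published 50 articles in each of 2005 and 2006, and whose 2005-2006 articles attracted 150 citations during 2007, would have a 2007 JIF of 150 / 100 = 1.5.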

Table 1: Selection of article titles dealing with the Journal Impact Factor
  • Why the impact factor of journals should not be used for evaluating research. (Seglen, 1997)
  • Sense and nonsense of science citation analyses: comments on the monopoly position of ISI and citation inaccuracies. (Reedijk, 1998)
  • Citation analysis and journal impact factors – is the tail wagging the dog? (Gisvold, 1999)
  • The citation impact factor in social psychology: a bad statistic that encourages bad science? (McGarty, 2000)
  • The impact factor: time for change (Bloch & Walter, 2001)
  • Impact factors: facts and myths (Whitehouse, 2002)
  • Trends in the Usage of ISI Bibliometric Data: Uses, Abuses, and Implications (Cameron, 2005)
  • The Journal Impact Factor: Too Much of an Impact? (Ha, Tan & Soo, 2006)

Mingers & Harzing (2007) show that there is a fairly high degree of correlation between journal rankings based on stated and revealed preference (Note 1). However, as Tahai & Meyer (1999) point out, stated preference studies have long memories: perceptions of journals normally change only very slowly in these rankings. As such, revealed preference studies provide a fairer assessment of new journals or of journals that have only recently improved their standing. Hence revealed preference studies could be argued to present a more accurate picture of journal impact.

Note 1 - This is also generally found to be true for the evaluation of research groups. In a study comparing bibliometric analysis and peer review for six economics research groups in the Netherlands, Nederhof and van Raan (1993) found the two methods to be complementary and mutually supportive. In a later study of 56 research programmes in condensed matter physics in the Netherlands, Rinia et al. (1998) found a high correlation between peer judgement and the average number of citations per paper.

However, even though there have been quite a number of revealed preference studies in the field of Economics and Business (see e.g. Tahai & Meyer, 1999; Dubois & Reeb, 2000; Baumgartner & Pieters, 2003), these studies focused either on a very limited group of top journals or on a small group of journals in a particular sub-discipline. In this paper, we therefore conduct a revealed preference study for more than 800 journals in the broad field of Economics & Business.

In doing so, we also introduce new citation metrics and a new source of data to address the critique levelled at the Thomson ISI Journal Impact Factor. In terms of the former, we used the recently introduced h-index (Hirsch, 2005) and g-index (Egghe, 2006) to measure journal impact rather than the ISI Journal Impact Factor (JIF). In terms of the latter, we used Google Scholar instead of the ISI Web of Knowledge as our data source. Our article provides a detailed benchmarking exercise of both the new metrics and the new data source against the ISI JIF.

The remainder of this paper is structured as follows. First, we discuss our choice of data source and metrics in more detail and provide information about the methods we used to collect our data. Then we present the results of our benchmarking exercise, first at an aggregate level and then at the level of four sub-disciplines. We also provide an analysis of the h-index of journals not covered in the ISI JCR. We conclude that in the field of Economics and Business, the Google Scholar based h-index provides a credible alternative for ranking journals. However, we do caution readers that when evaluating individual academics’ research output, using journal impact as a proxy has significant limitations, and hence additional measures of impact are in order.

Data source and metrics

ISI Web of Knowledge versus Google Scholar

In this paper we use Google Scholar as a data source instead of the ISI Web of Knowledge. There are several papers that provide good overviews of the problems associated with the use of the ISI Web of Knowledge as a data source (e.g. Seglen, 1997; Cameron, 2005). Most of these problems revolve around ISI’s limited coverage, especially in the Social Sciences and Humanities.

Previous studies have highlighted problems such as: the lack of coverage of citations in books, conference papers and working papers, as well as citations in journals not included in ISI; the lack of inclusion of journals in languages other than English in the ISI database; and the US bias in the journals included in the database. Poor aggregation of citations to minor variations of the same title has also been listed as a disadvantage of the ISI Social Science Citation Index (Reedijk, 1998; Harzing & van der Wal, 2008).

Harzing & van der Wal (2008) provide a fairly detailed comparison of the ISI Web of Knowledge and Google Scholar and also conclude that lack of coverage is ISI's main problem. In one of their comparisons they assess Pfeffer & Fong's (2002) claim that even the best academic books (those winning the prestigious Terry Book Award) receive only a very limited number of citations (around 7 citations a year). Harzing & van der Wal show that when using Google Scholar as a source for citation analysis, these books on average received 68 citations per year, which can hardly be considered evidence of low impact (Note 2).

Note 2 - It must be noted that even when Harzing & van der Wal reproduced Pfeffer & Fong's (2002) analysis using only ISI citations, they came up with a far higher number of yearly citations (28 versus 7). The authors attribute this to the passage of time (the analyses were conducted five years apart) and to the possibility that Pfeffer & Fong did not systematically include misspellings, appearances with different author initials, or citations to separate book chapters published within an edited award-winning book.

A recent comparison of the h-indices of UK-based academics in Library and Information Science (LIS) and Information Retrieval (IR) (Sanderson, 2008) found significantly higher h-indices when using Google Scholar than when using the ISI Web of Science. The author claims that "false positive errors, though present in GS, were not found to be as important as the substantial number of false negative errors in WoS […]" (Sanderson, 2008: 1189).

Based on a sample of 1,650 articles, Kousha & Thelwall (2007, 2008) found that although Google Scholar coverage was less comprehensive than ISI's in the three Science disciplines included in their study (Biology, Chemistry and Physics), Google Scholar coverage of the four Social Sciences in their study (Education, Economics, Sociology and Psychology), as well as of Computing, was significantly higher than ISI coverage. Overall, there seems to be considerable agreement that Google Scholar is a worthwhile alternative to ISI in the Social Sciences and the Information Sciences.

Disadvantages of Google Scholar are its inclusion of non-scholarly citations, double counting of citations, less frequent updating, uneven coverage across disciplines and less comprehensive coverage of older publications/citations. Harzing & van der Wal (2008) argue that the problem of non-scholarly citations and double counting is fairly limited and attenuated by the use of robust citation metrics such as the h-index (see below).

In a similar vein, Vaughan and Shaw (2008) found that 92% of the citations identified by Google Scholar in the field of library and information science represented intellectual impact, primarily citations from journal articles. Meho & Yang (2007) also found that most of the citations uniquely found by Google Scholar are from refereed sources. The last three limitations are not relevant for the analyses presented in this paper, as we focus on a discipline that has good Google Scholar coverage and on citations to papers published between 2001 and 2005.

Citation Metrics used

Problems associated with the Journal Impact Factor

There are several commonly mentioned problems (see e.g. Seglen, 1997; Cameron, 2005 for a summary) with the use of the ISI Journal Impact Factor, the most important of which are the use of a 2-year citation window and technical issues related to the calculation of the JIF.

When JIFs were introduced by Garfield in the 1960s, his focus was on biochemistry and molecular biology, disciplines that are characterised by a high number of citations and short publication lags (Cameron, 2005). Hence the use of a 2-year citation window might have been justified. However, this is not true for most other disciplines where knowledge takes much longer to be disseminated.

As Leydesdorff (2008) shows, impact factors can differ by an order of magnitude when comparing across disciplines such as Mathematics and Genetics. McGarty (2000) discusses the problems associated with the 2-year JIF for the Social Sciences in some detail. He shows that the publication lags for two important Psychology journals are such that for a typical paper published in these journals, two thirds of the literature that could theoretically be included in the JIF (i.e. papers published in the two years preceding publication of the citing paper) was yet to be published at the time of submission.

A perusal of the last issue of 2007 of the Journal of International Business Studies shows that the problem is at least as severe in Business Studies. Even in this most optimistic case (i.e. the final issue of 2007) we find very few references to publications from 2005 and 2006 in the ten articles published in this issue. Of the more than 700 references in this issue, only 20 referred to publications from 2005 and a mere 7 to publications from 2006 (i.e. less than 4% of the total number of citations). One third of these citations were self-citations. This is not entirely surprising given that of the ten papers in this issue, six were submitted before 2005 (four in 2004, one in 2003, one in 2002). Of the remaining four, two were submitted in January and February 2005 and hence cannot realistically be expected to include references to 2005 papers. The final two papers were submitted in January 2006 and May 2006. Hence a full 86% of the literature that could theoretically be included in the JIF was yet to be published at the time of submission (Note 3).

Note 3 - If a paper were submitted in 2007, it would have access to 24 months of literature that could be included in the JIF. If we assume submission at the end of the month, a paper submitted in May 2006 would have access to 17 months of such literature, and a paper submitted in January 2005 to one month. Hence the total literature available to be cited at the time of submission is 1 + 2 + 13 + 17 = 33 months, divided by 240 months (10 papers × 24 months) = 13.75%. Please note that this is a best-case scenario, as we assume submission at the end of the month and consideration of new references up until the very moment of submission.
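
The note's arithmetic can be retraced with a minimal Python sketch (our own illustration; the submission dates are those given in the text and the end-of-month assumption is the note's):

    # Months of citable JIF-window literature available to each of the ten
    # papers at the time of submission.
    months_available = [0, 0, 0, 0, 0, 0,  # six papers submitted before 2005
                        1,                 # submitted January 2005
                        2,                 # submitted February 2005
                        13,                # submitted January 2006
                        17]                # submitted May 2006
    window = 24  # months of literature that can count towards the JIF per paper
    share = sum(months_available) / (window * len(months_available))
    print(f"{share:.2%}")  # prints 13.75%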

The fact that we find any references to papers published in 2005 and 2006 in these articles is most likely due to these references being added during the review process. As McGarty (2000: 14) aptly summarises:

"The two year impact factor clearly favors journals which publish work by authors who cite their own forthcoming work and who are geographically situated to make their work readily available in preprint form. The measure punishes journals which publish the work of authors who do not have membership of these invisible colleges and is virtually incapable of detecting genuine impact. It is not just a bad measure, it is an invitation to do bad science."

In addition to the limitations associated with a 2-year window, there are several technical or statistical problems with the way the JIF is calculated. First, whilst the denominator of the JIF (the number of articles published) only includes normal articles (so-called source items), the numerator includes citations to all publications in the journal in question, including editorials, letters, and book reviews (Cameron, 2005). This means that citations to these latter publications are essentially free: the increase in the numerator is not matched by an increase in the denominator. As a result, journals with a lively letters-to-the-editor/correspondence section (such as Nature) will show inflated JIFs.

This problem is compounded by the fact that many journals have increased the proportion of non-source items over time. Gowrishankar & Divakar (1999) indicate that the ratio of source items to non-source items in Nature declined from 3.5 to 1.6 between 1977 and 1997. This feature of the JIF also enables manipulation by unscrupulous editors, who can inflate their JIF by referring frequently to journal articles or even to other editorials in their own editorials (Whitehouse, 2002), or by requesting that their authors include spurious self-citations to the journal (Gowrishankar & Divakar, 1999). Bayliss, Gravenor & Kao (1999) argue that even a single research institute could increase the JIF of a journal that publishes few papers from 1 to 6 by asking each of its researchers to cite two papers in that journal.
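
To see how such manipulation could work in practice, consider the following illustrative numbers (ours, not those of Bayliss, Gravenor & Kao): a journal that published 20 source articles over the two-year window and received 20 citations to them would have a JIF of 1. If an institute with 50 researchers had each of them cite two papers in that journal, the 100 additional citations would raise the JIF to (20 + 100) / 20 = 6.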

The second calculation problem is statistical in nature: the JIF is the mean number of citations per article in the journal in question. However, many authors have found that citation distributions are extremely skewed. Seglen (1997), for instance, found the most cited 15% of papers to account for 50% of citations, and the most cited 50% for 90% of the citations; on average, therefore, the most cited half of papers are cited nine times as much as the least cited half. Especially for journals publishing a relatively small number of papers, individual highly cited papers can have a very strong influence on the mean-based JIF.
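
A toy example in Python (with hypothetical citation counts of our own devising) makes this sensitivity concrete:

    # Ten papers in the JIF window with a typically skewed citation distribution.
    cites = [2, 1, 1, 0, 0, 0, 0, 0, 0, 0]
    print(sum(cites) / len(cites))  # JIF-style mean: 0.4

    cites[0] = 96                   # one runaway hit paper
    print(sum(cites) / len(cites))  # the mean jumps to 9.8, driven by a single paper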

Alternative measures of journal impact: h-index and g-index

In this paper we use two relatively new citation metrics: the h-index and the g-index. The h-index was introduced by Hirsch (2005) and is defined as follows (Hirsch, 2005:1):

"A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each."
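
As a minimal illustration (our own sketch in Python, not code from Hirsch or from any citation software), the h-index can be computed from a list of per-paper citation counts as follows:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
    print(h_index([25, 8, 5, 3, 3]))  # 3: the 25-citation paper adds nothing extra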

As a result, the h-index combines quantity (number of papers) and quality (impact, or citations to these papers) (Glänzel, 2006). The h-index is therefore preferable to a simple count of total citations, as it corrects for one-hit wonders: academics who might have authored (or even be the 20th co-author of) one or a handful of highly cited papers, but who have not sustained their academic performance over a longer period of time.

The h-index is also preferable to a simple count of the number of papers published, as it corrects for papers that are not cited and hence can be argued to have had limited impact on the field. In sum, the h-index favours academics that publish a continuous stream of papers with lasting and above-average impact (Bornmann & Daniel, 2007).

The h-index has resulted in a flurry of commentaries and articles in journals such as Scientometrics and the Journal of the American Society for Information Science and Technology, including several articles proposing further refinements (see e.g. Bornmann, Mutz & Daniel, 2008). In spite of strong criticism from some bibliometricians, it has generally received a positive reception. Perhaps the strongest indication that the h-index is becoming a generally accepted measure of academic achievement is that Thomson ISI has now included it as part of its new citation report feature in the Web of Science.

Examples of the application of the h-index to journals are still scarce. To the best of our knowledge, only brief notes have been published (see e.g. Braun, Glänzel & Schubert, 2005 and Saad, 2006), and no study has covered more than a limited subset of journals or provided a systematic comparison between different data sources and metrics. However, the arguments above apply equally to journals: we are interested in whether journals publish a continuous stream of papers with lasting and above-average impact, and the h-index could therefore be a very useful indicator of journal impact.

A disadvantage of the h-index is that it ignores the number of citations to each individual article over and above what is needed to achieve a certain h-index. An academic or journal with an h-index of 6 could therefore theoretically have a total of only 36 citations (6 for each paper), but could also have more than 5,000 citations (5 papers with 1,000 citations each and one paper with 6 citations). Of course, in reality such extremes are very unlikely.

However, it is true that once a paper belongs to the top h papers, its subsequent citations no longer count. Hence, in order to give more weight to highly cited articles, Leo Egghe (2006) proposed the g-index, defined as follows: "[Given a set of articles] ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations." Although the g-index has not yet attracted much attention or empirical verification, it would seem to be a very useful complement to the h-index.
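
Continuing the earlier sketch (again our own illustration, not Egghe's code), the g-index can be computed as follows; the example shows how it rewards a heavily cited top paper that the h-index ignores:

    def g_index(citations):
        """Largest g such that the top g papers together have at least g*g
        citations (capped here at the number of papers)."""
        total, g = 0, 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    print(g_index([10, 5, 3, 2, 1]))  # 4: the top 4 papers hold 20 >= 16 citations
                                      # (the same list has an h-index of only 3)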

Advantages of the h-index and g-index

The h-index and g-index have several important advantages over the Thomson ISI JIF. First of all, these indices do not have an artificially fixed time horizon. The metrics used in the present paper were computed in October 2007 over a five-year period (2001-2005) in order to enable a comparison with the average JIF for 2003-2006 (for further details see the section on procedures). However, any time horizon could be used, rather than focusing on citations in one particular year to the two preceding years, as is the case with the Thomson ISI JIF (Note 4).

Note 4 - We do acknowledge that a 2-year citation window is not inherent to the JIF. Many bibliometricians work with 5-year citation windows, using data sourced from ISI to calculate their own impact factors. However, Thomson’s published JIF, used by thousands of universities around the world as a measure of journal impact or quality, uses a 2-year window.

Second, the h-index, and to a lesser extent the g-index, attenuates the impact of a single highly cited article, because – unlike citations-per-paper measures such as the JIF – the h-index and g-index are not based on mean scores. As explained above, in a citations-per-paper metric even one highly cited article can cause a very strong increase in the average number of citations per paper for the journal in question, leading to possibly highly idiosyncratic and variable results.

When we choose to evaluate journal impact through citation measures, we are interested in the overall citation impact of the journal, not in the citation impact of one or two highly cited individual papers in that journal. Therefore, just as the h-index for authors provides a robust measure of their sustained and durable research performance (see also Vanclay, 2007), the h-index for journals provides a robust measure of the sustained and durable performance of journals, rather than of individual articles.

Third, just like the total number of citations, which Bensman (2007) proposes as a credible alternative to the JCR impact factor for a set of Crime-Psychology journals, both the h-index and the g-index are influenced by the number of papers that a journal publishes. A journal that publishes a larger number of papers has a higher likelihood – though by no means a certainty – of generating a higher h-index and g-index, since every article presents another chance for citations.

This may be seen as a disadvantage when evaluating the standing of individual articles in a journal (or an individual academic based on this metric), as such a measure should not depend on the number of articles published in that journal. However, one cannot deny that a journal that publishes a larger number of high-impact papers has a bigger impact on the field (see also Gisvold, 1999). Given that impact on the field is what we attempt to measure in this article, this feature of the h-index and g-index could be seen as an advantage rather than a disadvantage. However, we do acknowledge that some review journals that publish a relatively small number of highly cited papers might be disadvantaged by our alternative metric.

Methods

Data source

Since our aim was to cover a broader range of journals than most previous studies, we took Harzing's Journal Quality List (Harzing, 2007) as our basis. This list collates twenty different rankings of 838 journals in the broad area of Business and Economics. It appears to have become quite influential: a search for the terms Journal Quality List AND Harzing returns more than 500 Google hits, and the list has been cited more than 20 times in ISI-listed journals (data for April 2008). The publisher of this list informed us that it is downloaded more than 10,000 times a year and that it draws interest from all over the world.

Procedures

The metrics used in this paper were calculated using Publish or Perish, a software programme that retrieves and analyses academic citations using Google Scholar as a data source. Searches were conducted in the first week of October 2007.

Where relevant, we searched for spelling variations of a journal title (e.g. British vs. American spelling, the use of and vs. &, spelling of composite words with or without a hyphen). Some journals also have commonly used abbreviated titles (e.g. all SIAM journals and many Psychology journals) and hence these were included as alternatives. If a title included very common words, e.g. Journal of Management, we conducted searches with the ISSN instead. Unfortunately, Google Scholar's results for ISSN searches can be rather erratic, and hence this alternative was only used if the ISSN search provided a comprehensive result for the journal in question.

The results of all search queries were inspected for incomplete or inconsistent results. This process left us with only two dozen journals (out of 838) that had substantially incomplete coverage and for which metrics could not be calculated. Eight of these were research annuals in book format (the Elsevier Advances in… series and the Research in… series). For other journals our visual inspection might have overlooked occasional missing articles, but this is unlikely to have much impact on robust measures such as the h-index and g-index unless those articles happen to be highly cited, and we have no reason to believe that this was the case. On the contrary, highly cited articles appear to be less likely to be missing from the Google Scholar database than lowly cited or uncited articles.

Our Google Scholar searches included citations to articles published between 2001 and 2005. This timeframe was chosen to be broadly comparable with the average Journal Impact Factor over the last four available years (2003-2006); these impact factors refer to citations to articles published between 2001 and 2005. Ideally, we would have preferred to include the JIF for 2007, but that metric did not come out until half a year after the analyses for this paper were conducted. Moreover, given that Google Scholar displays some delay in its data processing for some journals, using the 2006 JIF as the most recent year is likely to give a dataset that is fairly comparable to the Google Scholar data collected in October 2007.

Supplementary analyses reported below with regard to the concentration of citations within particular journals were conducted in October 2007 with the general search function of ISI, which allows the user to rank articles by citation count.

Results and Discussion of the Benchmarking Analysis

Overall Comparison of JIF and h-Index

There are 536 journals in the Journal Quality List that have both an ISI JIF for 2003-2006 and a Google Scholar h-index or g-index. The Spearman correlation – used because both the JIF and the h- and g-indices have non-normal distributions – is strong and highly significant:

  • 0.718 (p < 0.001) between the ISI JIF and the h-index,
  • 0.717 (p < 0.001) between the ISI JIF and the g-index, and
  • 0.976 (p < 0.001) between the h-index and the g-index.

Given that the two sets of indices are based on different data sources (Thomson ISI JCR versus Google Scholar) and constitute different metrics (a mean citations-per-paper count over 2 years for the ISI JIF versus a combined quantity/quality measure over 5 years for the h-index and g-index), this strong correlation is quite remarkable. Given the extremely high correlation between the h-index and the g-index, in the remainder of this paper we focus on the h-index and provide a comparison between the ISI JIF and the h-index.
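
For readers who wish to replicate this type of analysis, a minimal Python sketch (with illustrative toy values; the 536 actual journal pairs are not reproduced here):

    from scipy.stats import spearmanr

    # Hypothetical per-journal values: average 2003-2006 JIF and GS h-index.
    jif = [4.5, 3.2, 2.9, 1.8, 1.1, 0.9, 0.4]
    h   = [60, 48, 51, 30, 22, 25, 9]
    rho, p = spearmanr(jif, h)
    print(f"Spearman rho = {rho:.3f} (p = {p:.4f})")  # rho = 0.929 for these toy values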

Figure 1: JIF versus h-index for all journals

Figure 1 shows a scatterplot of the average ISI JIF for 2003-2006 against the Google Scholar h-index for articles published between 2001 and 2005, with a line showing the regression equation. Outliers above the line are journals that have a high JIF in comparison to their h-index. Most of the major outliers above the regression line are Psychology journals, which – similar to journals in the Sciences – generally have a very high immediacy index, i.e. many of the citations to these journals occur quickly after publication.

For example, the 2006 immediacy index for the Annual Review of Psychology (4.091) is more than ten times as high as that of the American Economic Review (0.335). This means that when comparing these two journals over a 2-year period the Annual Review of Psychology will always show a higher impact factor than the American Economic Review, whereas the difference will be much smaller if we consider a 5-year period. This example clearly illustrates the folly of comparing ISI JIFs between disciplines.

The other major outlier is ACM Computing Surveys, which has had wildly varying JIFs over the years, from 0.64 in 2001 and 2.77 in 2002 to impact factors between 7.4 and 10.0 in 2003-2005. However, the very high impact factors in those years appear to be mainly caused by two very highly cited articles published in 2002 and 2003 (for an even more striking case of this phenomenon see the discussion of SIAM Review below), whilst most of the relatively small number of papers published in this journal (63 over five years) are not particularly well cited.

The two main outliers below the regression line are The Journal of Finance and The American Economic Review, which have a very high h-index in comparison to their JIF. This difference is most likely caused by the fact that articles in these journals are cited very heavily in working papers (e.g. papers from the National Bureau of Economic Research or the Tinbergen Institute) and government policy documents, neither of which are included in the Thomson ISI citation count.

Overall Comparison without Psychology Journals and Major Outliers

Figure 2: JIF versus h-index, Psychology journals and other outliers excluded

Figure 2 shows a scatterplot of the average ISI JIF and Google Scholar h-index with the exclusion of Psychology journals and the three outliers discussed above: ACM Computing Surveys, Journal of Finance and the American Economic Review. In this figure the most striking outlier is SIAM Review, with several Sociology/Geography journals also showing a high JIF in comparison to their h-index. Other less striking outliers are also visible in Figure 2, but these will be discussed in the review of the sub-disciplines below.

SIAM Review had an average JIF of 2.75 between 2001 and 2003 and a JIF of 2.67 in 2006. However, in 2004 and 2005 its JIF was 6.12 and 7.21 respectively causing a very high average JIF between 2003 and 2006.

Reviewing the JIFs for 2004 and 2005 in detail showed that the very high impact factor was nearly entirely caused by the very large number of citations to one particular journal article published in 2003 (The structure and function of complex networks by M.E.J. Newman). In October 2007 this article had been cited 998 times, twelve times more than the next most highly cited article published in 2003. In fact, in October 2007 the Newman paper alone accounted for 80% of the citations to SIAM Review articles published in 2003; the other twenty papers published in 2003 together received only 249 citations. This example clearly shows the danger of relying on mean-value metrics, which can be heavily influenced by individual outliers.

The Sociology/Geography outliers are caused by a less extreme occurrence of the same problem, i.e. very concentrated citation scores. For the Annual Review of Sociology, for instance, the top 3 (out of 104) papers make up nearly one third of the total number of citations. Hence whilst its JIF is reasonably high, its h-index is modest, as citations taper off quickly after the first few highly cited papers.

Analysis of Individual Sub-disciplines

The Journal Quality List includes journals in fifteen different sub-disciplines. However, for some of these sub-disciplines only a small number of journals are included, either because the sub-discipline is a very specialised area (e.g. Innovation, Entrepreneurship, Tourism) or because the Journal Quality List only includes a small subset of journals in the sub-discipline in question (e.g. Psychology/Sociology) that are relevant to Economics & Business.

Overall, there are seven sub-disciplines that have a substantial number of journals (more than 60 each) included in the JQL:

  • Economics
  • Finance & Accounting
  • General Management & Strategy
  • Management Information Systems & Knowledge Management
  • Management Science/Operations Research/Production & Operations Management
  • Marketing
  • Organization Studies/Behavior and Human Resource Management & Industrial Relations.

Taken together these seven sub-disciplines cover 75% of the journals in the Journal Quality List.

Table 2 provides summary statistics for these seven sub-disciplines and shows that there is considerable variability in the proportion of ISI-indexed journals across fields, ranging from a low of 30-43% for Finance & Accounting, Marketing and General Management & Strategy to a high of 74-80% for Economics, Management Information Systems & Knowledge Management, and Management Science/Operations Research/Production & Operations Management.

The sub-disciplines also differ in the strength of the correlation between the h-index and the JIF, varying from 0.633 for Organization Behaviour/Studies; Human Resource Management & Industrial Relations to 0.891 for General Management & Strategy, but in all cases the correlation is significant at p < 0.001. Below we provide a detailed benchmarking analysis for each of these sub-disciplines.

Table 2: Summary statistics

Sub-field                                                          No. of journals   No. of ISI-indexed   Spearman correlation
                                                                   in the JQL        journals             between h-index & JIF
Economics                                                          168               122 (74%)            0.732***
Finance & Accounting                                               94                28 (30%)             0.721***
General Management & Strategy                                      63                27 (43%)             0.891***
Mgmt Information Systems & Knowledge Mgmt                          81                61 (75%)             0.774***
Mgmt Science; Operations Research; Production & Operations Mgmt    87                70 (80%)             0.733***
Marketing                                                          65                25 (38%)             0.841***
Organization Behaviour/Studies; HRM & Industrial Relations         71                45 (63%)             0.633***
Others                                                             209               158 (76%)            0.764***
Total                                                              838               536 (64%)            0.718***

*** p < 0.001

Economics

Figure 3: JIF versus h-index for Economics journals (excl. American Economic Review)

Figure 3 plots the Google Scholar h-index against the JIF for Economics journals. Most journals cluster around the regression line. Two important outliers with higher journal impact factors than h-indices are the Journal of Economic Literature and the Quarterly Journal of Economics, both of which have a JIF above 4.5 but modest h-indices.

  • The Journal of Economic Literature publishes a relatively small number of articles per year (15-20), so that even though most of these are highly cited, it is difficult for the journal to achieve a very high h-index. This is almost the exact counter-case to the American Economic Review, which publishes around 160-170 articles per year that on average are not as highly cited as articles in the Journal of Economic Literature. Overall, however, the American Economic Review has a much larger total number of highly cited articles, and we therefore argue that the h-index correctly identifies its more substantial contribution to the field of Economics.
  • In the case of the Quarterly Journal of Economics, it seems that Google Scholar is at fault, as it misses a number of highly cited papers in this journal. Its automatic parsing mechanism seems to have misclassified them under the wrong journal. For instance, "Understanding social preferences with simple tests" by Charness & Rabin (2002) is assigned to the (non-existent?) journal Technology. Several other papers are listed only under their earlier (and highly cited) incarnations as NBER working papers. In this case it is clear that we should ignore the Google Scholar results and give preference to the ranking presented by the ISI JIF (Note 5).

Note 5 - When we repeated the search for this journal in April 2008, its h-index had increased to 102 and all of the originally missing articles were correctly ascribed to the journal. Hence it appears that Google Scholar is continuously upgrading its service.

The other outliers are less prominent, though we can distinguish a number of health economics journals that are likely to display the high immediacy of Science journals and hence fare better on the 2-year JIF.

  • The Journal of Economic Geography and the Journal of Economic Growth also publish a relatively small number of papers (15-25/year) and hence their JIFs are quite heavily influenced by a rather small number of highly cited papers. As a result their h-index is relatively low in comparison to their JIF.
  • The JIFs of the Journal of Economic Geography for 2004 and 2005 also seem to have been inflated by a highly cited editorial. As explained above, citations to editorial materials and book reviews are included in the numerator of the JIF, but the items themselves are not included in the denominator, thus artificially inflating the JIF.
  • Demography publishes a larger number of papers (approximately 200 in 5 years), but also has a highly concentrated citation pattern, with its most cited paper between 2001 and 2005 accounting for nearly 10% of total citations.

On the other side of the spectrum are the Review of Economics & Statistics, Research Policy and the European Economic Review, which have a relatively high Google Scholar h-index in comparison to their more modest ISI JIF. The main reason appears to be that all three journals attract a large number of citations in working papers and policy documents, or in journals not covered by ISI. Hence the h-index captures their significant impact beyond ISI-listed academic journals.

Finance & Accounting

As Table 2 shows, there are 94 journals in the Finance & Accounting category (93 excluding the Journal of Finance), but only 28 have both an ISI JIF and a Google Scholar h-index; only 30% of the Finance & Accounting journals listed in the JQL are ISI indexed. The correlation between the ISI JIF and the GS h-index for these journals is slightly higher than average at 0.721 (p < 0.001).

Figure 4: JIF versus h-index for Finance & Accounting journals (excl. Journal of Finance)

Figure 4 shows that although many journals cluster close to the regression line, there are a number of significant outliers.

  • Two important outliers on the left-hand side (i.e. higher JIF than h-index) are Journal of Accounting & Economics and Review of Accounting Studies.
  • The Journal of Accounting & Economics has rather variable JIFs. In 2001-2002 and 2004-2005 its average JIF was around 1.7, which would place it very close to the regression line. However, in both 2003 and 2006 its impact factor more than doubled. Reviewing the individual articles revealed a small number of highly cited papers published in 2001 and 2005, which – given the limited number of papers this journal publishes yearly – have a significant impact on its JIF.
  • Review of Accounting Studies has only recently been ISI indexed and has a JIF for only two years (2005 and 2006). Its JIF for 2006 is substantially higher than that for 2005. A review of individual articles published in 2005 revealed one highly cited paper (The role of analysts' forecasts in accounting-based valuation: A critical evaluation by Q. Cheng) that, with 78 cites in October 2007, made up nearly two thirds of the total citations to the journal's 2005 articles; the remaining fifteen articles had on average only 3 cites each. Again, a concentrated citation pattern combined with a small number of published papers (51 in 5 years) results in a high JIF without a similar impact on the h-index.

Important outliers on the right-hand side (i.e. a higher h-index than JIF) are: Journal of Money, Credit & Banking, Journal of Banking & Finance, Journal of International Money & Finance and International Journal of Finance & Economics. Papers in these journals often deal with issues relating to stock markets, credit ratings and exchange rates and tend to be cited quite often in working papers (e.g. from the National Bureau of Economic Research) and policy documents (e.g. from the Federal Reserve Bank), or in journals not covered by ISI. As a result their Google Scholar h-index is much higher than their JIF, which only measures impact in ISI-listed academic journals.

General Management & Strategy

As Table 2 shows, out of the 63 journals in the General Management & Strategy category only 27 (43%) have both an ISI JIF and a Google Scholar h-index. However, for those journals that are ISI indexed, the correlation between the JIF and the GS h-index is 0.891 (p < 0.001), the highest of all sub-disciplines.

Figure 5: JIF versus h-index for General Management & Strategy journals

It is therefore not surprising that there are relatively few important outliers in Figure 5. The main outliers on the left-hand side (high JIF in comparison to the h-index) are two of the absolute top journals in the field of Management: Administrative Science Quarterly and the Academy of Management Review, which have the highest impact factors of any journal in General Management & Strategy, but relatively lower h-indices.

Administrative Science Quarterly

With regard to Administrative Science Quarterly, this appears to be caused mostly by the limited number of papers it publishes yearly. Even though most of the articles published in this journal are fairly well cited, Administrative Science Quarterly published a total of only 92 papers (excluding editorials and book reviews) in the 5-year period, and hence its ability to achieve a very high h-index is limited.

One should also consider that even in a top journal such as Administrative Science Quarterly, the citations received by individual articles are highly skewed: the ten most highly cited papers received 30% of the total citations, whilst the twenty most cited papers received 50% of the total number of citations.

Journal of Management, Journal of Management Studies and Journal of International Business Studies show h-indices comparable to that of Administrative Science Quarterly, even though they are generally seen to be lower in standing. However, these journals publish about two (for Journal of Management and Journal of International Business Studies) to three times (for Journal of Management Studies) as many papers per year as Administrative Science Quarterly and hence have a higher likelihood of reaching a high h-index.

Academy of Management Review

With regard to the Academy of Management Review, citations are even more heavily skewed than for Administrative Science Quarterly. The top 4 most cited papers (dealing with key concepts such as social capital, absorptive capacity and the resource-based view (2x)) provide 21% of the total number of citations. Surpassing even Administrative Science Quarterly, the 10 most cited papers provide 34% of the total citations and again the twenty (out of 153) most cited papers received 50%.

Hence, even though at 46 the Academy of Management Review has one of the highest h-indices among General Management journals, its concentrated citation pattern means that its h-index is still low in comparison to its JIF. And even though, with around 150 papers (excluding editorials and book reviews) over five years, it publishes more articles per year than ASQ, its empirical counterpart (the Academy of Management Journal) publishes twice as many papers as the Academy of Management Review and hence has a higher likelihood of reaching a high h-index.

Furthermore, more than half of the items the Academy of Management Review publishes are classified as either editorials or book reviews. Citations to these non-source materials are included in the numerator of the JIF, but the materials themselves are not included in the denominator. Normally this would not result in a significant distortion of the JIF, as non-source materials tend not to be highly cited in the field of Management, but the paper-length introductions to the journal's many special issues and forums are also classified as editorials, and these pieces tend to be highly cited.

Other journals

The larger number of papers published (200-550 papers over 5 years) is also likely to lie behind the relatively high h-indices of the Strategic Management Journal, Harvard Business Review, Sloan Management Review, Journal of Business and Journal of Management Studies.

Further, the two practitioner journals (Harvard Business Review and Sloan Management Review) are also likely to be more highly cited in scholarly and policy documents that are not incorporated in ISI. The same is likely to be true for the Journal of Business, which has many papers that would be cited in working papers (e.g. from the National Bureau of Economic Research) and policy documents (e.g. from the Federal Reserve Bank). Further, the UK-based Journal of Management Studies is likely to be more heavily cited in non-ISI-listed European journals.

Harzing and van der Wal (2008) also showed that the Strategic Management Journal scores much better in Google Scholar than the general management journals, because many Strategy and International Business journals that heavily cite articles in SMJ are not ISI listed. As a result the Strategic Management Journal has a higher h-index than the Academy of Management Journal, even though its JIF is lower.

Management Information Systems; Knowledge Management

Figure 6: JIF versus h-index for Management Information Systems and Knowledge Management journals (excl. ACM Computing Surveys)

As can be seen in Figure 6, the various ACM Transactions (on Data Base Systems, on Software Engineering and Methodology, on Information Systems) generally have low h-indices compared to their JIF. The same is true for Information Systems, Information Systems Research, Human Computer Interaction, MIS Quarterly and the Journal of Database Management.

  • For the ACM Transactions this is most likely caused by the fact that they publish relatively few papers (between 60 and 90 over 5 years). This means on the one hand that an individual highly cited paper can substantially increase the JIF for certain years and on the other hand that it is more difficult to achieve a high h-index.
  • The same is true for Human Computer Interaction, which has highly variable JIFs ranging from 1.95 to 4.78. Again, a small number of published papers (60, excluding editorials), combined with individual highly cited outliers and a substantial proportion of non-source materials (editorials) that account for 10% of the citation count, drives up the JIF whilst not having the same impact on the h-index.
  • Information Systems, Information Systems Research and MIS Quarterly also have widely varying JIFs (0.90-3.33, 1.17-3.51 and 1.80-4.98 respectively) and although they publish a larger number of papers than HCI, the number of published papers is still relatively small (100-150), so that individual highly cited outliers as well as highly cited editorials can still have a substantial impact on the JIF in some years. In MIS Quarterly, for example, the top 4 most cited papers published between 2001 and 2005 (out of 138 papers) made up 25% of the total number of citations in October 2007.
  • The Journal of Database Management was only ISI-indexed in 2006 and hence its high JIF score might be idiosyncratic. It also publishes few papers (about 15 a year), which limits its ability to achieve a high h-index.

The various IEEE Transactions and Communications of the ACM have a high h-index in comparison to their relatively modest JIF. This is partly due to the fact that articles in these journals are often cited in conference proceedings, which are the most important publication outlets in this field, but are not included in the ISI citation count. However, this is true to a large extent for the ACM Transactions as well.

The main difference between the two groups of journals is the number of articles they publish. For the various IEEE Transactions this lies in the 350-600 range for a five-year period (for the IEEE Transactions on Automatic Control it even exceeds 1,400), while for Communications of the ACM it approaches 1,000. As a result, their JIFs do not fluctuate as widely as those of the other group of journals, as they are not dominated by individual highly cited papers. On the other hand, the larger publication base makes it easier to achieve high h-indices, reflecting these journals' substantial impact on the field.

Management Science; Operations Research, Production & Operations Management

As Table 2 shows, out of the 87 (86 without SIAM Review) MS/OR/POM journals in the Journal Quality List, 70 (80%) are ISI indexed. The correlation between the ISI JIF and the GS h-index for these journals lies slightly above the average at 0.733 (p < 0.001) and is highly significant.

Figure 7: JIF versus h-index for Management Science/Operations Research/Production and Operations Management journals (excl. SIAM Review)

As Figure 7 shows, both of the Royal Statistical Society's journals as well as the Annals of Statistics have relatively high JIFs in comparison to their Google Scholar h-index. This is caused by the fact that even though they publish a reasonably large number of papers, citations are highly concentrated. For JRSS-A and the Annals of Statistics, 25% of the citations go to the top 5 most cited papers (out of some 150); for JRSS-B, a mere two most cited papers (out of some 235) make up 25% of total citations, and the top 10 most cited papers make up 50% of the total number of citations. As we have seen before, such a concentrated citation pattern inflates the JIF whilst leading to a comparatively low h-index.

At the other end of the spectrum Management Science, European Journal of Operational Research and Operations Research have Google Scholar h-indices that are relatively high in comparison to their JIF.

  • Management Science publishes a large number of papers (nearly 600 over five years) and has a very evenly spread citation pattern with the top 10 most highly cited papers in ISI in October 2007 having a very similar number of citations (between 41 and 55) and making up less than 10% of the total number of citations.
  • European Journal of Operational Research publishes even more papers (nearly 2,000 over five years) and shows a similarly even spread of citations, with the top 10 most highly cited papers making up less than 5% of the total number of citations.
  • Operations Research publishes fewer papers than the other two (but still nearly 400 over five years), but again has a very evenly spread citation pattern, with the top 10 most highly cited papers all having between 23 and 29 citations and together accounting for 13% of the total number of citations.

As a result the JIFs for these journals are generally very stable and not influenced by individual highly-cited papers. The large number of papers and even spread of citations ensures a high h-index, reflecting these journals’ substantial impact on the field.

Marketing

As Table 2 shows, only 25 (38%) of the 65 Marketing journals in the Journal Quality List have both an ISI JIF and a Google Scholar h-index. However, for those journals that are ISI indexed, the correlation between the JIF and the Google Scholar h-index is 0.841 (p < 0.001), the second highest of all sub-disciplines. It is therefore not surprising that there are relatively few important outliers in Figure 8.

Figure 8: JIF versus h-index for Marketing journals

The two top journals in Marketing, Marketing Science and Journal of Marketing, have JIFs that are relatively high in comparison to their Google Scholar h-index.

  • Marketing Science has experienced an important increase in its JIF over recent years, from an average of 1.90 between 2001 and 2003, which would locate it close to the regression line, to an average of 3.72 between 2004 and 2006. This increase seems to have been caused by about a dozen reasonably well-cited papers, few of which, however, were cited often enough to contribute to the h-index. Nevertheless, together with the Journal of Marketing, the Journal of Marketing Research and the Journal of Consumer Research, Marketing Science is still one of the four journals with the highest h-index in Marketing.
  • Journal of Marketing’s JIF has also increased substantially over recent years. After an average of 2.35 in 2001-2002, it was 2.85 on average in 2003-2004, which would pretty much locate it right on the regression line. It then jumped up to an average of 4.5 in 2005-2006. A more detailed analysis showed that this high recent JIF was mainly due to two highly-cited articles published in 2004 (which are in fact the two most highly-cited papers in the entire 2001-2005 period) that dealt with two key new issues in marketing: return on marketing and the service dominant logic. In October 2007 these two articles had a combined total of 200 citations, whilst the remaining articles in 2004 on average had about 10 citations. The recently released JIFs for 2007 show that Journal of Marketing’s JIF dropped from 4.83 to 3.75.

Industrial Marketing Management and the Journal of Business Research are the two most important outliers at the other end of the spectrum, with a relatively high Google Scholar h-index in comparison to their modest JIF. Both journals publish a relatively large number of papers (350 to 600 over five years), increasing their chances of reaching a high h-index. They are also cited quite often in journals that are not ISI indexed, such as the European Journal of Marketing, the Journal of Business to Business Marketing and the Journal of Business & Industrial Marketing, which raises their Google Scholar h-index above what their JIF would suggest and better reflects their overall impact on the field of Marketing.

Organization Behaviour/Studies; Human Resource Management & Industrial Relations

At 0.633 (p < 0.001) this sub-discipline has the lowest correlation between the JIF and the GS h-index, but it is still high and very significant. Excluding the Human Resource Management and Journal of Organizational Behavior Management outliers (see below) raises this correlation to 0.685 (p < 0.001).

Figure 9: JIF versus h-index for Organizational Behavior/Studies and Human Resource Management/Industrial Relations journals

As Figure 9 shows, several journals have a high JIF in relation to their Google Scholar h-index. The most striking cases are Human Resource Management and the Journal of Organizational Behavior Management.

Human Resource Management

Harzing & Van der Wal (2008) already noted the discrepancy between Human Resource Management’s JIF and h-index.

Further investigation and an extensive email exchange with Thomson ISI representatives revealed that Thomson's search query for this journal's JIF included a substantial number of homographs: citations actually referring to Human Resource Management Review, Human Resource Management Journal, and books with Human Resource Management in their title.

As a result, the JIF for Human Resource Management was erroneously inflated. In fact, at 0.64, the recently released 2007 JIF for Human Resource Management is substantially lower than the 2002-2006 JIF average of 2.00. Although we have not been able to verify this, it is possible that the equally generic title Industrial Relations, which also has a higher JIF than h-index, suffers from the same problem.

Journal of Organizational Behavior Management

The JIF for Journal of Organizational Behavior Management has fluctuated wildly, standing, for instance, at 1.79 in 2003 and 0.11 in 2004. (Recall that the JIF for year Y is the number of citations in year Y to articles published in years Y-1 and Y-2, divided by the number of citable articles in those two years.) In 2004 there were only two citations to articles published in 2002 and 2003, whereas in 2003 there were no fewer than 52 citations to articles published in 2001 and 2002. All 34 of the citations to articles published in 2002 came from JOBM articles (i.e. 100% self-citation at the journal level), whilst 11 of the 18 citations to articles published in 2001 came from within the journal.

Further investigation showed that a very large proportion of the citations to papers in 2002 consisted of within-issue citations in a special issue. The third and fourth most cited of all papers in JOBM between 2001 and 2005 were published in this issue.

  • For the first paper seven out of its eleven citations were in the same special issue (the other four being in later issues of the same journal).
  • For the second paper four of its ten citations were in the same special issue (five of the remaining six were in later issues of the same journal).
  • Two other papers in this special issue had all of their citations (four and three respectively) within the same issue. Three more papers in this special issue each had both of their two citations in this special issue.
  • The eighth paper in this issue had all four of its citations in one and the same later issue of JOBM. The ninth paper in this special issue had no citations (Note 6).

Note 6 - It appears that the 22 within-issue citations were counted in 2003 rather than in 2002 and hence erroneously contributed to the 2003 JIF. The immediacy index for 2002, which measures citations in the same year, shows only 3 citations in 2002 to articles published in 2002. An independent search with the cited-work function in the Web of Science shows only 2 citations in 2003 to 2002 articles (both in the same article, itself published in JOBM), rather than the 34 citations listed in the Journal Citation Reports.

Coincidentally, the special issue’s topic was "The Search for the Identity of Organizational Behavior Management". Commissioning this special issue might well have been the best thing the editor ever did for the identity of the journal.

The journal’s relatively low Google Scholar h-index is due to Google Scholar’s less-than-complete records for this journal. Google Scholar appears to have processed the papers published in the special issue, but does not seem to have parsed the citations in these papers. In this case, however, this is probably for the best: even though Google Scholar is to some extent at fault, it provides a more realistic assessment of this journal’s low impact beyond a seemingly rather small academic circle.

Other journals

At the other end of the spectrum we find journals that have a high h-index in comparison to their JIF. Journal of Human Resources publishes many policy-oriented pieces dealing with the public sector that are heavily cited in working papers and policy papers, as well as in books not included in ISI. As a result, the h-index better reflects its impact beyond academia.

The Journal of Business Ethics, Human Relations and International Journal of Human Resource Management, all published out of Europe, combine a relatively large number of published papers with citations that are evenly spread across papers, as well as a high number of citations in European journals not indexed in ISI; all three factors increase their Google Scholar h-index in comparison to their JIF. Hence the h-index provides a more accurate reflection of their relatively broad impact on the field.

Overall conclusions about sub-discipline analysis

Overall, we have seen that there is very substantial agreement between the ISI JIF and the Google Scholar h-index for most sub-disciplines. This means that for the sub-disciplines with very limited ISI coverage (Finance & Accounting, Marketing and General Management & Strategy) the Google Scholar h-index could provide an excellent alternative for the 56-70% of journals not covered in ISI, especially given that for these disciplines the correlation between JIF and h-index for journals that did have ISI coverage ranged between 0.72 and 0.89. However, even for the other sub-disciplines the additional coverage provided by Google Scholar could be useful.

Where the ISI JIF and the Google Scholar h-index diverged this was generally caused by one of two factors.

Factor 1: JIF's sensitivity to one-hit wonders

The first factor is the sensitivity of the JIF to individual highly cited papers, which – especially for journals that publish relatively few papers – artificially inflates the JIF in comparison to the h-index. Because the JIF is a mean, it produces distorted results when citation distributions are skewed, as they typically are.

In this respect, the h-index is a more robust measure. It is true that the h-index is influenced by the number of papers published, and hence that journals that publish many papers have a better chance of reaching a high h-index. However, we would still argue that journals producing a larger number of highly cited papers have more impact on the field, even if the average article in such a journal is less highly cited than the average article in a journal that publishes fewer articles.

As an illustrative example, journals such as Strategic Management Journal, the Academy of Management Journal, Organization Science and Management Science have higher h-indices than the Journal of Marketing and the Academy of Management Review (on average 55 versus 46), whereas they have much lower JIFs (on average 2.32 versus 3.95). However, as we have seen above, the high JIFs of the latter two journals are mostly caused by a very limited number (2-4) of highly cited papers. Hence, we would argue that the h-index provides a more accurate picture of the generally very similar standing of these six journals.
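To see why a mean-based measure and the h-index can disagree so sharply, consider the following minimal sketch. The citation counts are invented for illustration and are not data from the paper; Journal X mimics a "one-hit wonder" journal, Journal Y a journal with evenly spread citations:

```python
# h-index: the largest h such that at least h papers have >= h citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Journal X: two blockbuster papers, the rest barely cited (invented data).
journal_x = [300, 250] + [1] * 18
# Journal Y: sixty moderately cited papers (invented data).
journal_y = [20] * 60

for name, cites in [("X", journal_x), ("Y", journal_y)]:
    mean = sum(cites) / len(cites)
    print(f"Journal {name}: mean citations = {mean:.1f}, h-index = {h_index(cites)}")
# Journal X: mean citations = 28.4, h-index = 2
# Journal Y: mean citations = 20.0, h-index = 20
```

Journal X scores higher on the mean-based measure thanks to its two blockbusters, while the h-index reflects how thin the rest of its output is.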

Factor 2: Google Scholar's broader coverage

The key reason why some journals showed a relatively high h-index in comparison to their JIF lies in the broader coverage of Google Scholar. There are four main aspects to this broader coverage.

  • First, some journals have a strong policy impact and their articles are highly cited in policy documents or NBER working papers, neither of which is included in ISI.
  • Second, articles in other journals, often published outside North America, are heavily cited in non-North-American journals, many of which are not ISI indexed.
  • Third, articles in some journals, especially in the area of computer science/information systems, are highly cited in conference papers, which tend to be the most important publication outlets in this field.
  • Finally, some sub-disciplines, such as Strategy and International Business, generally have low coverage in ISI (see Harzing & Van der Wal (2008) for a more detailed discussion of these factors). As a result, the h-index better reflects the broader impact of journals in these fields.

Overall, we therefore argue that the Google Scholar h-index provides a more comprehensive measure of journal impact, in terms of both the number of journals and the number of citations covered.

Numerical Analysis of the Divergence between JIF and h-Index

Above, we have documented in considerable detail a number of cases where the JIF and h-index diverged. Table 3 provides a summary of the 50 most prominent cases of divergence. It was constructed by standardizing both the JIF (after assigning a score of 0 to all journals without a JIF) and the h-index, and subtracting the standardized JIF from the standardized h-index. Since Psychology journals differed considerably from all other journals and would make up most of the top 25 on the left-hand side, we excluded them from the analysis.
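The standardization step can be sketched as follows. This is our reconstruction for illustration only; the journal names and values are invented, not the data behind Table 3:

```python
# Reconstruct the Table 3 procedure on invented data: z-score both metrics,
# scoring missing JIFs as 0 first, then rank journals by the difference.
from statistics import mean, stdev

journals = {
    "Journal A": {"jif": 4.8, "h": 20},   # high JIF relative to h-index
    "Journal B": {"jif": 1.2, "h": 45},   # high h-index relative to JIF
    "Journal C": {"jif": None, "h": 30},  # not ISI-indexed: JIF scored as 0
    "Journal D": {"jif": 2.0, "h": 25},
}

jifs = [j["jif"] or 0.0 for j in journals.values()]
hs = [j["h"] for j in journals.values()]

def z(value, values):
    return (value - mean(values)) / stdev(values)

# Positive divergence: h-index exceeds JIF; negative: JIF exceeds h-index.
divergence = {name: z(j["h"], hs) - z(j["jif"] or 0.0, jifs)
              for name, j in journals.items()}

for name, d in sorted(divergence.items(), key=lambda kv: kv[1]):
    print(f"{name}: {d:+.2f}")
```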

Most of these journals have been discussed in some detail above. Apart from apparent errors in the ISI JIF and one deficiency in Google Scholar coverage, the major reason for a high JIF in comparison to the h-index appears to lie with journals that publish a small number of papers and/or have highly concentrated citation patterns, where the top 10 most cited articles provide the bulk of the citations. As a result, a mean-based measure such as the JIF presents a less accurate reflection of a journal’s overall impact than the h-index.

Reviewing the journal titles in the right-hand column, the single most important determinant of a high h-index in comparison to the JIF seems to be the extent to which the journal publishes policy-oriented papers that are highly cited in working papers and policy documents. Publishing a large number of papers and being cited in conference papers and non-ISI indexed journals provide secondary reasons for a high h-index. Overall, the h-index might therefore be more suitable for measuring a journal’s wider economic or social impact rather than its impact on an academic audience only.

Table 3: the 50 most prominent cases of divergence between JIF and h-index
JIF exceeds h-index (top 25) | Divergence | h-index exceeds JIF (top 25) | Divergence
ACM Computing Surveys | -6.65 | American Economic Review | 5.30
SIAM Review | -4.90 | Journal of Finance | 3.93
Quarterly Journal of Economics (Note 7) | -3.81 | European Economic Review | 2.77
ACM Transactions on Information Systems | -3.52 | Communications of the ACM | 2.70
Human Computer Interaction | -3.46 | Review of Economics & Statistics | 2.38
Journal of Economic Literature | -2.74 | Research Policy | 2.24
Academy of Management Review | -2.28 | European Journal of Operational Research | 2.10
Progress in Human Geography | -2.26 | Econometrica | 2.03
Human Resource Management | -2.09 | Journal of Banking & Finance | 1.89
Journal of Economic Geography | -2.08 | Management Science | 1.79
Annual Review of Sociology | -2.03 | Journal of Public Economics | 1.78
Marketing Science | -1.89 | Journal of Political Economy | 1.76
MIS Quarterly | -1.89 | World Development | 1.65
Journal of Economic Growth | -1.72 | Journal of Financial Economics | 1.63
ACM Trans. on Softw. Eng. and Methodology | -1.68 | IEEE Trans. on Knowledge & Data Engineering | 1.61
Journal of Database Management | -1.68 | Economic Journal | 1.59
Journal of Marketing | -1.58 | European Journal of Political Economy | 1.54
Annals of the Assoc. of American Geographers | -1.45 | International Organization | 1.50
Review of Accounting Studies | -1.45 | International Journal of Project Management | 1.46
Environment & Planning D | -1.44 | American Journal of Public Health | 1.40
Journal of Organizational Behavior Management | -1.40 | Journal of Econometrics | 1.39
Journal of Rural Studies | -1.36 | Journal of Money, Credit & Banking | 1.38
American Sociological Review | -1.28 | ACM Trans. on Computer Human Interaction | 1.38
Administrative Science Quarterly | -1.26 | Journal of Knowledge Management | 1.38
Structural Equation Modeling | -1.15 | Review of Economic Studies | 1.35

Note 7 - As indicated above, this was the only case in which we could identify a clear deficiency in Google Scholar coverage that caused the low h-index.

As we indicated above, there are many journals, especially in Finance & Accounting, Marketing and General Management & Strategy, that are not ISI-indexed. So how do these journals compare with journals that are ISI-indexed? As expected, ISI-indexed journals in general have a significantly higher h-index (23.5 versus 11.5; t = 15.002, p < 0.001).

But are there any non-ISI indexed journals with a relatively high h-index? In order to assess this, we divided the journals by h-index into four equal groups; the quartile cut-off points for the h-index were 11, 16 and 24. Journals that ranked in the top 50% (h-index of 16 and above) but are not ISI-listed are shown in Table 4.
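Both steps – the t-test reported above and the quartile split – are straightforward to reproduce. The sketch below is illustrative only and uses simulated h-indices centred on the group means reported above, not the actual journal data:

```python
# Illustrative only: simulate h-indices for ISI-indexed and non-indexed
# journals around the group means reported above (23.5 and 11.5), then
# run a two-sample t-test and compute quartile cut-offs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
h_isi = rng.poisson(23.5, size=400)      # hypothetical ISI-indexed journals
h_non_isi = rng.poisson(11.5, size=400)  # hypothetical non-indexed journals

t, p = stats.ttest_ind(h_isi, h_non_isi, equal_var=False)
print(f"t = {t:.3f}, p = {p:.4g}")

# Quartile cut-offs over all journals (the paper reports 11, 16 and 24).
all_h = np.concatenate([h_isi, h_non_isi])
q1, median, q3 = np.percentile(all_h, [25, 50, 75])
print(f"quartile cut-offs: {q1:.0f}, {median:.0f}, {q3:.0f}")

# Non-ISI journals at or above the median: the candidates for Table 4.
print("non-ISI journals in the top 50%:", int((h_non_isi >= median).sum()))
```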

Table 4: Non-ISI indexed journals with a high h-index
Journal Title | h-index | Published in
European Journal of Political Economy | 28 | Europe
International Journal of Project Management | 27 | Europe
ACM Transactions on Computer Human Interaction | 26 | USA
Journal of Knowledge Management | 26 | Europe
Empirical Economics | 24 | Europe
Accounting Horizons | 23 | USA
European Management Journal | 23 | Europe
Journal of Empirical Finance | 23 | Europe
European Journal of Marketing | 22 | Europe
Journal of Environmental Management | 22 | Europe
Journal of Financial Services Research | 22 | USA
Review of Finance | 22 | Europe
European Financial Management | 21 | Europe
Journal of Business Finance & Accounting | 21 | Europe
Accounting, Auditing & Accountability Journal | 20 | Europe
Business Strategy & the Environment | 20 | Europe
International Journal of Physical Distribution & Logistics Management | 20 | Europe
McKinsey Quarterly | 20 | USA
Academy of Management Learning & Education | 19 | USA
Electronic Markets | 19 | Europe
European Journal of Work and Organizational Psychology | 19 | Europe
Human Resource Management Journal | 19 | Europe
Journal of Industrial Ecology | 19 | USA
Journal of International Development | 19 | Europe
Management Accounting Research | 19 | Europe
Economics of Innovation and New Technology | 18 | Europe
German Economic Review | 18 | Europe
Industry and Innovation | 18 | Europe
International Business Review | 18 | Europe
Applied Financial Economics | 17 | Europe
Critical Perspectives on Accounting | 17 | Europe
Electronic Commerce Research | 17 | USA
Information Technology and People | 17 | Europe
International Journal of Retail & Distribution Management | 17 | Europe
Journal of Business Logistics | 17 | USA
Journal of Consumer Marketing | 17 | Europe
Journal of Financial Research | 17 | USA
Journal of Services Marketing | 17 | Europe
Journal of Supply Chain Management | 17 | USA
Managing Service Quality | 17 | Europe
Asia-Pacific Journal of Management | 16 | Asia
Business & Society | 16 | USA
European Accounting Review | 16 | Europe
Human Resource Management Review | 16 | Europe
Information and Organization | 16 | Europe
International Journal of Quality & Reliability Management | 16 | Europe
Journal of Business & Industrial Marketing | 16 | Europe
Journal of Computational Finance | 16 | USA
Journal of International Management | 16 | Europe
Journal of Public Economic Theory | 16 | USA
Journal of Travel Research | 16 | USA
Knowledge and Process Management | 16 | Europe
Management Decision | 16 | Europe
National Institute Economic Review | 16 | Europe

Journals with a high h-index that are not ISI-listed occur in all disciplines, but are more frequent in the sub-disciplines identified above as having low ISI coverage. However, the single most distinguishing shared characteristic of these journals seems to be that they are published from Europe (usually by Blackwell, Elsevier or Emerald) and generally have a European editor and a large proportion of non-US academics on the editorial board. Overall, nearly three quarters of the non-ISI indexed journals with a high h-index are European journals (Note 8).

Note 8 - We do not wish to imply that the ISI selection process has a bias against non-US journals. Without having knowledge of the actual number of European versus US journals that are submitted for possible inclusion in the database, it is impossible to assess this. It is possible that European editors display a self-selection bias and simply do not submit their journals for inclusion into ISI. Of course one reason for this could be the perceived bias against non-US journals. A similar debate is raging with regard to the representation of non-US authors in US-based journals.

Discussion and conclusions

In this paper we systematically compared the ranking of journals based on the traditionally used Thomson ISI JIF and the new Google Scholar based h-index. We have shown that any divergence between the two can generally be explained by either limitations in the way the JIF is calculated or by the more limited coverage of the ISI citation base.

We do acknowledge that our alternative metric disadvantages review journals that publish a small number of highly cited papers, but we conclude that in the field of Economics and Business in general, the Google Scholar based h-index provides a credible alternative for ranking journals. It addresses some of the statistical limitations underlying the JIF and is more suitable for measuring a journal’s wider economic or social impact rather than its impact on an academic audience only.

As such we argue that the Google Scholar h-index might provide a more accurate and comprehensive measure of journal impact and at the very least should be considered as a supplement to ISI-based impact analyses.

However, even though an assessment of journal impact based on the journal’s Google Scholar h-index might be more accurate and comprehensive than relying only on an ISI-based impact analysis, we would like to express strong caution against a single-minded focus on journal impact in evaluating individual scholars’ research output.

Whilst journal impact can certainly be used as one of the criteria to evaluate research output, reducing the evaluation to a single number is unlikely to provide a complete picture of a scholar’s real impact. Many studies have recently established that highly-cited articles are published in journals that are not considered top journals in the field, and that a substantial proportion of the articles published in top journals fail to generate a high level of citations (see e.g. Starbuck, 2005; Oswald, 2007; Singh, Haddad & Chow, 2007).

Hence using journal proxies to evaluate the impact of individual articles can lead to substantial attribution errors. Unfortunately, the impact of individual articles is not generally known until quite some time after their publication, making this measure more appropriate for decisions on, for instance, promotion or appointment to full professorial positions than for tenure decisions.

A more fundamental question is whether evaluating research by the extent to which it is cited by other academics is a complete measure of impact. We would argue that what we should equally be considering in applied areas of research is whether the research in question makes a difference by providing insights into fundamental managerial or societal questions. However, this assessment might be quite difficult to make and will always include some element of subjectivity.

True managerial or societal impact might also not be apparent in the short term. Hence, although individual article impact and broader managerial and societal impact would be important to include in the evaluation of research output wherever appropriate and possible, most universities will by necessity place some emphasis on the use of journal impact proxies. In this article, we provided a broader perspective on journal impact and hope this will lead to a more valid and equitable assessment of academics’ research output.

References

  1. ABS (2007). Journal Quality Guide, downloaded from http://www.the-abs.org.uk/?id=257.
  2. Bailey, J.R. (2002). Educating Rita. Academy of Management Learning & Education, 3, 197.
  3. Baylis, M., Gravenor, M. & Kao, R. (1999). Sprucing up one’s impact factor. Nature, 401, 322.
  4. Bensman, S.J. (2007) The impact factor, total citations, and better citation mouse traps: A commentary. Journal of the American Society for Information Science and Technology, 58(12), 1904-1908.
  5. Bloch, S. & Walter, G. (2001). The Impact Factor: time for a change. Australian and New Zealand Journal of Psychiatry, 35, 563-568.
  6. Braun, T., Glänzel, W. & Schubert, A. (2005). A Hirsch-type index for journals. The Scientist, 19, 8.
  7. Bornmann, L. & Daniel, H-D. (2007). What do we know about the h index? Journal of the American Society for Information Science and Technology, 58, 1381-1385.
  8. Bornmann, L., Mutz, R. & Daniel, H.-D. (2008). Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology, 59(5), 830-837.
  9. Baumgartner, H. & Pieters, R. (2003). The structural influence of marketing journals, a citation analysis of the discipline and its subareas over time. Journal of Marketing. 67, 123-39.
  10. Cameron, B.D. (2005). Trends in the Usage of ISI Bibliometric Data, Uses, Abuses, and Implication. Portal: Libraries and the Academy. 5, 105-125.
  11. DuBois, F. L. & Reeb, D. (2000). Ranking the international business journals. Journal of International Business Studies. 31, 689-704.
  12. Egghe, L. (2006). Theory and practice of the g-index. Scientometrics. 69, 131-152.
  13. Gisvold, S.E. (1999). Citation analysis and journal impact factors – is the tail wagging the dog? Acta Anaesthesiol Scand. 43, 971-973.
  14. Glänzel, W. (2006). On the opportunities and limitations of the h-index. Science Focus. 1, 10-11.
  15. Gioia, D.A.; Corley, K.G. (2002). Being good versus looking good: Business School rankings and the circean transformation from substance to image. Academy of Management Learning & Education, 1, 107-120.
  16. Gowrishankar J. & Divakar, P. (1999). Sprucing up one’s impact factor. Nature, 401, 321-322.
  17. Groot, T. & Garcia-Valderrama, T. (2006). Research quality and efficiency. An analysis of assessments and management issues in Dutch economics and business research programs. Research Policy, 35, 1362-1376.
  18. Ha, T.C.,Tan, S.B. & Soo, K.C. (2006). The Journal Impact Factor: Too Much of an Impact? Annals Academy of Medicine, 35, 911-916.
  19. Harzing, A.W. (2007). Journal Quality List, 28th Edition, http://www.harzing.com/
  20. Harzing, A.W. & Wal, R. van der (2008). Google Scholar as a new source for citation analysis?, Ethics in Science and Environmental Politics, vol. 8, no. 1, pp. 62-71, published online January 8, http://www.int-res.com/articles/esep2008/8/e008pp5.pdf.
  21. Hirsch, J.E. (2005). An index to quantify an individual's scientific research output, arXiv:physics/0508025 v5 29 Sep 2006.
  22. Kousha, K.; Thelwall, M. (2007) Google Scholar Citations and Google Web/URL Citations: A Multi-Discipline Exploratory Analysis, Journal of the American Society for Information Science and Technology, 58, 1055-1065.
  23. Kousha, K; Thelwall, M. (2008) Sources of Google Scholar citations outside the Science Citation Index: A comparison between four science disciplines. Scientometrics, 74(2), 273-294.
  24. Leydesdorff, L. (2008) Caveats for the use of citation indicators in research and journal evaluations. Journal of the American Society for Information Science and Technology, 59, 278-287.
  25. McDonald, S. & Kam, J. (2007). Ring a ring o' roses: Quality journals and gamesmanship in Management Studies. Journal of Management Studies, 44, 640-55.
  26. McGarty, C. (2000). The citation impact factor in social psychology: a bad statistic that encourages bad science. Current Research in Social Psychology, 5, 1-16.
  27. Meho, L.I. & Yang, K. (2007). A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar. Journal of the American Society for Information Science and Technology, 58, 1-21.
  28. Mingers, J. & Harzing, A.W. (2007). Ranking journals in Business and Management: A statistical analysis of the Harzing Dataset. European Journal of Information Systems, 16, 303-316.
  29. Morgeson, F.P.; Nahrgang, J.D. (2008). Same as it ever was: recognizing stability in the Business Week Rankings, Academy of Management Learning & Education, 7, 26-41.
  30. Nederhof, A.J. & Raan, A.F.J. van (1993). A bibliometric analysis of six economics research groups: A comparison with peer review. Research Policy, 22, 353-368.
  31. Oswald, A. J. (2007). An examination of the reliability of prestigious journals: Evidence and implications for decision-makers. Economica, 74, 21-31.
  32. Pfeffer, J.; Fong, C.T. (2002). The end of Business Schools? Less success than meets the eye. Academy of Management Learning & Education, 1, 78-95.
  33. Reedijk, J. (1998). Sense and nonsense of science citation analyses: comments on the monopoly position of ISI and citation inaccuracies. Risks of possible misuse and biased citation and impact data, New J. Chem, 767-770.
  34. Rinia, E.J., Leeuwen, Th.N. van, Vuren, H.G. van & Raan, A.F.J. van (1998). Comparative analysis of a set of bibliometric indicators and central peer review criteria. Evaluation of condensed matter physics in the Netherlands. Research Policy. 27, 95-107.
  35. Saad, G. (2006). Exploring the h-index at the author and journal levels using bibliometric data of productive consumer scholars and business-related journals respectively. Scientometrics 69, 117-120.
  36. Sanderson, M. (2008) Revisiting h measured on UK LIS and IR academics, Journal of the American Society for Information Science and Technology, 59, 1184-1190.
  37. Segalla, M. (2008). Editorial: Publishing in the right place or publishing the right thing. European Journal of International Management, 2, 122-127.
  38. Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314, 497-502.
  39. Singh, G., Haddad, K.M., & Chow, C.W. (2007). Are articles in top management journals necessarily of higher quality? Journal of Management Inquiry, 16(4), 319-331.
  40. Starbuck, W.H. (2005). How much better are the most-prestigious journals? The statistics of academic publication. Organization Science, 16(2), 180-200.
  41. Tahai, A. & Meyer, M. (1999). A revealed preference study of management journals' direct influences, Strategic Management Journal, 20, 279-296.
  42. Vanclay, J.K. (2007). On the Robustness of the h-index. Journal of the American Society for Information Science and Technology, 58, 1547-1550.
  43. Vaughan, L.; Shaw, D. (2008) A new look at evidence of scholarly citations in citation indexes and from web sources. Scientometrics, 74(2), 317-330.
  44. Whitehouse, G.H. (2002). Impact factors: facts and myths. European Radiology, 12: 715-717.
