1.2.1 From ranking journals to ranking articles
Traditionally, journal rankings were used to evaluate the research impact of individual academics. Hence, rather than measuring the impact of an academic's individual articles, universities and governments would use the ranking of the journal (based on stated or revealed preference) as a proxy for the quality and impact of an academic's articles.
Although this practice is still common, the realization that it might lead to sub-optimal conclusions is gradually beginning to take hold. Although on average articles in top-ranked journals can expect more citations (this is the very essence of the Journal Impact Factor, discussed in Section 1.4.1), there is wide variance around that average. Several articles have shown unambiguously that highly-cited articles can be published in lower-ranked journals, whilst many articles published in top-ranked journals fail to gather a substantial number of citations. Based on their research, Singh, Haddad & Chow (2007: 319) warn that:
“…both administrators and the management discipline will be well served by efforts to evaluate each article on its own merits rather than abdicate this responsibility by using journal ranking as a proxy for quality.”
This is a particularly important recommendation today, when an increasing number of universities either require publications in the top three to five journals in a scholar's discipline or completely disregard publications in journals outside of those few identified as “top” (Singh et al., 2007; Van Fleet, McWilliams & Siegel, 2000). Whereas there may be reasons to use journal rank as one indicator of article quality, there is no reason to use it as the only, or even the best, measure.
Using an academic's actual citation record, rather than the rank of the journals in which he or she has published, could also be argued to be a more objective way to measure research impact. Acceptance of an article for publication in a (top) journal is influenced by a very small number of gatekeepers (the editor and one to three reviewers). Although these gatekeepers can be expected to be dispassionate and well-informed experts, nearly every academic can relate experiences of bias in the review process. In contrast, citations to one's work are the collective “verdict” of the market, where a far larger number of users decide on the impact of one's work.