Is ISI misunderstanding the Social Sciences?
Please note: This blogpost discusses a 2013 paper, partly based on data collection in 2010. Since then, the problems highlighted below have been addressed for recent publications.
Despite the availability of alternatives such as Scopus, Google Scholar, and Microsoft Academic, Thomson Reuters' (now Clarivate) Web of Knowledge (or ISI for short) is still used in the majority of benchmarking analyses and bibliometric research projects. Many papers have documented problems with ISI's limited coverage in the Social Sciences & Humanities and – to a lesser extent – Engineering. For a good summary of the differences in coverage see my blog on research metrics. The paper below, however, deals with another Web of Knowledge limitation that disproportionately affects the Social Sciences: ISI's misclassification of journal articles containing original research into the "proceedings paper" or the "review" category.
- Harzing, A.W. (2013) Document categories in the ISI Web of Knowledge: Misunderstanding the Social Sciences?, Scientometrics, 93(1): 23-34. Available online - Publisher's version (read for free)
How does ISI categorise proceedings papers?
a document in a journal or book that notes the work was presented - in whole or in part - at a conference. This is a statement of the association of a work with a conference. [part of an FAQ on the Thomson Reuters website that has since disappeared]
So wait a minute! Simply presenting an early version of your ideas in a 5-15 minute conference slot means that your paper is downgraded by ISI to a "conference proceedings paper"? This is true even if the conference in question does not publish proceedings at all, and it can happen if you present at a small seminar or workshop, perhaps attended by fewer than a dozen people.
Indeed, when verifying several journal articles categorised as “proceedings papers”, I found that acknowledgements in their articles carried innocent notes such as: “A portion of this paper was presented at the annual meeting of the Academy of Management, San Diego, 1998” or “An earlier version of this paper was presented at the Annual Meeting of the Academy of Management, Chicago, 1999” or "This paper builds on and extends remarks and arguments made as part of a 2006 Keynote Address at the Interdisciplinary Perspectives on Accounting Conference held in Cardiff, UK".
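To make the rule concrete: under the definition quoted above, a mere mention of a conference presentation in the acknowledgements is enough to trigger the "proceedings paper" label. The sketch below illustrates how such a flag could be derived from acknowledgement text alone; the function name and cue list are my own assumptions, not Thomson Reuters' actual procedure.

```python
import re

# Hypothetical sketch: a "proceedings paper" flag triggered purely by
# acknowledgement wording. Not Thomson Reuters' actual implementation.
CONFERENCE_CUES = re.compile(
    r"presented at .*(meeting|conference|workshop|seminar)|keynote address",
    re.IGNORECASE,
)

def flags_as_proceedings_paper(acknowledgement: str) -> bool:
    """Return True if the acknowledgement merely notes a conference
    presentation -- which, under the rule quoted above, is enough to
    demote a journal article to the 'proceedings paper' category."""
    return bool(CONFERENCE_CUES.search(acknowledgement))

note = ("An earlier version of this paper was presented at the "
        "Annual Meeting of the Academy of Management, Chicago, 1999")
print(flags_as_proceedings_paper(note))  # True
```

Note that every one of the innocent acknowledgements quoted above would match such a rule, which is exactly the problem: the cue signals association with a conference, not that the published text is a conference proceeding.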
ISI misunderstands the fundamentals of the Social Sciences
To me this classification shows a very limited understanding of the research process, the review process, and the publication process in the field of Business & Management and many other research fields in the Social Sciences:
- Misunderstanding the role of conferences: Any research paper worth its salt will have been presented at at least one conference or workshop. In fact, I would consider it very unlikely that more than the occasional paper would be accepted for publication in a top journal in our field without ever having been presented publicly to receive feedback.
- Misunderstanding the journal review process: Yes, early versions of a paper might have been presented at conferences. However, the paper that is subsequently submitted to a journal will normally be vastly different from the paper that was earlier presented at a conference. Conferences and workshops are often used as a means to test and polish ideas. Even if authors submit fairly polished papers to conferences, these papers will still generally need to go through 2-4 rounds of revisions before they are accepted for the journal.
- Misunderstanding the publication process: A longer and more extensive process of revision is likely for the many papers that are not accepted by the first journal approached. As acceptance rates of top journals in the Management field are well below 10%, papers are often submitted to several journals before they get their first revise & resubmit. Maturation of the author(s)’ ideas, reorientation toward different journals, as well as the review process itself means that virtually every paper published has been substantially revised. Hence, the end-product published by a journal often bears very little resemblance to the paper that was originally presented at a conference, years before publication.
So are all papers presented at conferences categorised as conference papers?
No, this happens only to those papers whose authors explicitly acknowledged in the paper that early versions of the paper had been presented at a conference, or to papers whose authors were kind enough to simply thank participants of a particular workshop for their input. A nice reward for being professional and collegial!
The first version of this paper was written in 2010. When I revisited ISI's proceedings paper classification in 2011, every single proceedings paper published in a journal was now double-badged with the "article" document type. Even though Thomson Reuters never acknowledged the validity of my concerns or adapted the FAQ quoted above, they did change their categorisation practices.
Whilst this change of policy is of course good news, it raised the question of why Thomson Reuters maintains the proceedings paper category for journal articles at all. It seems they started to ask themselves the same question: after a couple of years, most (though by no means all) of the double-badging disappeared and articles were properly categorised as articles.
Why does ISI categorise journal articles as reviews?
With the conference proceedings problem "resolved", this leaves us with the review category. Why are some papers that clearly present original research, published in the top journals in our field, categorised as derivative work that synthesises work of other academics? According to Thomson Reuters: simply because they have more than 100 references! No, I am not joking. Thomson Reuters says:
In the JCR system any article containing more than 100 references is coded as a review. Articles in "review" sections of research or clinical journals are also coded as reviews, as are articles whose titles contain the word "review" or "overview."
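As a minimal sketch of the rule as stated (the function name and signature are my own, not Thomson Reuters' actual code), the JCR heuristic amounts to three simple checks:

```python
# Sketch of the JCR document-type rule quoted above. All names are
# hypothetical; this is not Thomson Reuters' actual implementation.

def jcr_document_type(title: str, num_references: int,
                      in_review_section: bool = False) -> str:
    """Classify a journal item under the stated JCR heuristic:
    more than 100 references, placement in a journal's 'review'
    section, or 'review'/'overview' in the title all yield 'review'."""
    title_lower = title.lower()
    if (num_references > 100
            or in_review_section
            or "review" in title_lower
            or "overview" in title_lower):
        return "review"
    return "article"

print(jcr_document_type("An empirical study of status mobility", 95))   # article
print(jcr_document_type("An empirical study of status mobility", 101))  # review
```

As the two calls show, the rule is a hard cut-off: two otherwise identical empirical papers land in different categories depending on whether the reference count is 95 or 101, with no reference to the paper's actual content.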
When verifying this criterion for articles classified as reviews in 2010, I found Thomson Reuters to have applied their criteria exactly as described. For instance, Michael Lounsbury's 2001 Administrative Science Quarterly article had 95 references and was categorised in the "article" document type, thus acknowledging it as original research. His 2004 article in Social Forces with 101 references was categorised in the "review" document type, even though the paper has sections titled "Theory and Hypotheses" and "Data and Methods". In addition, the abstract and even the title clearly refer to empirical work. If this scholar wanted Thomson Reuters to recognise his work as original, maybe he should have been a bit less conscientious in identifying the contributions of other authors in his literature review?
Same journal, same issue, same method: different document type!
This particular discovery also solved a query that had puzzled me for some time: why was one of my two articles in the 2007 special issue on International HRM in Human Resource Management categorised as an "original" research article and the other as a "derivative" review, even though the latter was based on time-consuming data collection amongst some 850 subsidiaries of multinational companies in three countries?
The subtitle of the paper was "An Empirical Investigation of HRM practices in Foreign Subsidiaries" and the paper in question won the best paper award for the journal that year, not exactly an honour one would expect to be bestowed on derivative work. The answer turned out to be very simple. My co-author, Markus Pudelko, had displayed just a little too much of his wide reading and understanding of comparative HRM (at least according to Thomson Reuters), as the article included 103 references. [Please note: this error has now been corrected, but only after I specifically asked for this paper to be correctly classified].
Why the cut-off of 100 references?
Thomson Reuters does not list any particular rationale for why papers with more than 100 references should be considered review articles that do not contain original research. It is true that a "real" review article providing, for instance, a literature review of 30 years of publications in a particular field will tend to have many references. However, the reverse certainly does not hold: there are many papers with more than 100 references that are not review articles. One cannot presume a direct relationship between the number of references contained in a paper and its level of originality.
Thomson Reuters also does not provide any rationale for the seemingly arbitrary cut-off point. Perhaps they simply saw 100 as a conveniently round figure? Fortunately, Thomson Reuters seems to have remedied this erroneous practice for papers published since February 2010. This is just in time as – because of ever-lengthening reference lists over the years – we were on the brink of having most articles in the top Management journals classified as derivative work. In 2009 one third of the articles published in the Academy of Management Journal, the Academy of Management Review, and Organization Science and no less than half of the articles in Administrative Science Quarterly were classified as derivative review articles.
How about the other disciplines?
Turning to the Sciences, the use of the proceedings paper category was virtually non-existent in Chemistry, Mathematics, and the Neurosciences. Either papers are not presented at conferences in these fields to protect intellectual property, or authors are less diligent in thanking their colleagues for feedback. In Computer Science/Software Engineering, the proceedings paper category was used more frequently. This does make sense, as conference proceedings are quite relevant in these fields, and the classification of papers here was indeed correct.
The review classification was also much rarer in the Sciences than in the Social Sciences. Moreover, in nearly all cases the review classification was justified. Given that original research articles in the Sciences normally tend to have far fewer than 100 references, the 100+ cut-off is quite successful in identifying review articles in the Sciences.
The two examples above clearly indicate that Thomson Reuters was indiscriminately applying Science-based criteria to the Social Sciences. Although the misclassifications were resolved for articles that were published after I first highlighted the problem in my January 2010 white paper,* they are still present for older papers.
By properly differentiating criteria between the Sciences and the Social Sciences, Thomson would substantially improve classification accuracy. Although the Science Citation Index predates the Social Science Citation Index, there is no reason to use Science based criteria to evaluate the Social Sciences!
* Interestingly, those journals that published an issue in January 2010 (e.g. Organization Science and the Academy of Management Review) still had many of their articles classified as reviews in that issue. This clearly suggests that the classification practice was changed after the publication of my white paper on the 20th of January.
Drop me a line
A free pre-publication version of this paper is hyperlinked. If you’d like to have an official reprint, just drop me an email.
Related blog posts
- Publish or Perish: Realising Google Scholar's potential to democratise citation analysis
- Bibliometrics in the Arts, Humanities, and Social Sciences
- From h-index to hIa: The ins and outs of research metrics
- Citation analysis for the Social Sciences: metrics and data-sources
- Working with ISI data: Beware of categorisation problems
- Bank error in your favour? How to gain 3,000 citations in a week
- Microsoft Academic is one year old: the Phoenix is ready to leave the nest
- Why metrics can (and should?) be used in the Social Sciences
- Running the REF on a rainy Sunday afternoon: Do metrics match peer review?
Copyright © 2019 Anne-Wil Harzing. All rights reserved. Page last modified on Mon 27 May 2019 11:21
Anne-Wil Harzing is Professor of International Management at Middlesex University, London and visiting professor of International Management at Tilburg University. She is a Fellow of the Academy of International Business, a select group of distinguished AIB members who are recognized for their outstanding contributions to the scholarly development of the field of international business. In addition to her academic duties, she also maintains the Journal Quality List and is the driving force behind the popular Publish or Perish software program.