Citation analysis for the Social Sciences: metrics and data sources

In recent decades, the use of metrics for research evaluation seems to have become an integral part of the academic landscape. The adverse impact of this “audit culture” is well documented (see e.g. Adler & Harzing, 2009). However, since a reversal of this trend is unlikely, research into fairer and more inclusive ways of measuring research performance is gaining momentum.

Research into fairer and more inclusive measurement of research performance

My own bibliometric research since 2007 has been part of that movement. My latest article in this stream of research provides a longitudinal and cross-disciplinary comparison of the three major databases for citation analysis: Google Scholar, Scopus and the Web of Science. As it was presented at a workshop on research evaluation in Madrid, it comes complete with a set of slides and a YouTube video of the presentation.


Peer review vs. bibliometrics

Although a range of studies has found strong correlations between peer review and bibliometric indicators, most governmental research assessment exercises rely solely on peer review for the Social Sciences and Humanities, as bibliometric coverage is deemed insufficient for these disciplines.

When executed properly and without bias, peer review is no doubt preferable to the exclusive use of bibliometric indicators. On the other hand, peer review is very time-consuming and has an inherent element of subjectivity. Assessors are expected to base their assessment purely on the quality of the publications submitted. In practice, however, both the perceived quality of the journal in which an article was published and the perceived status of the authors' university affiliation are likely to create positive or negative halo effects.

Comparing Google Scholar with the Web of Science

In Harzing (2013) I therefore investigated the use of Google Scholar, which has much better coverage for the Social Sciences and Humanities than the traditional source of citation data: Thomson Reuters' Web of Science (also known as ISI). A study of 20 Nobelists in Medicine, Physics, Chemistry and Economics showed that Google Scholar displayed considerable stability over time. In addition, coverage for disciplines that had traditionally been poorly represented in Google Scholar (Chemistry and Physics) was increasing rapidly. Google Scholar's coverage was also comprehensive: all of the 800 most-cited publications by our Nobelists could be located in Google Scholar.

A follow-up study (Harzing, 2014) comparing 2012 and 2013 coverage found that - after a period of significant expansion for Chemistry and Physics - Google Scholar coverage was now increasing at a stable rate. This increased stability and coverage might thus make Google Scholar much more suitable for research evaluation and bibliometric research purposes than it has been in the past.


Use the right metrics for cross-disciplinary comparisons

With research librarian Satu Alakangas, I thus embarked on a large-scale comparative project of 146 academics across 37 disciplines in five broad disciplinary areas (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We collected quarterly publication and citation data for Google Scholar, Scopus and Web of Science for two years (July 2013-July 2015).

Our longitudinal comparison of eight data points between 2013 and 2015 showed a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons.

Our cross-disciplinary comparison of the three databases included four key research metrics: publications, citations, h-index, and hI,annual, an annualised individual h-index introduced by Harzing, Alakangas & Adams (2014). We showed that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.

More specifically, we found that when using the h-index as a metric and the Web of Science as a data source, Life Science and Science academics dramatically outperform their counterparts in Engineering, the Social Sciences and the Humanities. However, when using the hI,annual with Google Scholar or Scopus as a data source, Life Science, Science, Engineering and Social Science academics all show very similar research performance, whereas the average Humanities academic has a hI,annual that is half to two thirds that of the other disciplines.

We thus argue that a fair and inclusive cross-disciplinary comparison of research performance is possible, provided we use a data source with more comprehensive coverage, such as Google Scholar or Scopus, and the recently introduced hI,annual - an h-index corrected for career length and co-authorship patterns - as the metric of choice.
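For readers unfamiliar with these metrics, the difference between the conventional h-index and the hI,annual can be sketched in a few lines of code. This is an illustrative sketch, not the implementation used in the study: it assumes the hI,annual is computed as described in Harzing, Alakangas & Adams (2014), i.e. an h-index over author-normalised citation counts, divided by career length in years. The sample data are invented.

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def hi_annual(papers, career_years):
    """Sketch of the hI,annual: divide each paper's citations by its number
    of authors, take the h-index of those normalised counts, then divide
    by career length in years (assumption based on Harzing et al., 2014)."""
    normalised = [cites / n_authors for cites, n_authors in papers]
    return h_index(normalised) / career_years

# Invented example: (citations, number of authors) per paper, 10-year career.
papers = [(100, 2), (60, 3), (12, 1), (9, 3), (3, 1)]
print(h_index([cites for cites, _ in papers]))  # conventional h-index: 4
print(hi_annual(papers, career_years=10))       # hI,annual: 0.3
```

The correction matters for cross-disciplinary comparison because the conventional h-index rewards long careers and large co-author teams, both of which are far more common in the Sciences and Life Sciences than in the Humanities.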

Drop me a line

Free pre-publication versions of these papers are hyperlinked. If you’d like to have an official reprint for these papers, just drop me an email.
