Myth busted: most academic research DOES get cited
How click-bait was transformed into “fact” through poor academic referencing practices
"Urban legends are not just fascinating, entertaining, and colorful stories; they are also a part of our social communicative repertoire. When we lose them, we have to find other things to talk about, which are most likely not as funny, engaging, or bridge building as urban legends” (Rekdal, 2014a: 649).
High expatriate failure rates (Harzing, 1995, 2002), ant extinctions (Wetterer, 2006), the iron content of spinach (Rekdal, 2014a), sinking sheep (Rekdal, 2014b), and sky-high levels of uncited academic articles (Hamilton, 1990, 1991; Meho, 2007) are all examples of academic urban legends: stories that many academics believe to be true but that largely are not. In this white paper, I focus on the last of these legends, i.e. the claim that most academic research doesn’t get cited.
This particular urban legend is more than an amusing story; it is a dangerous misconception. Continued promulgation of this myth is likely to decimate trust in our academic research, especially in the current political climate. In this white paper I’ll show that – to a large extent – this myth is of our own making, created through poor academic referencing practices. Hence, in order to restore trust in our academic research, we need to stop seeing referencing as a boring chore, and instead put accurate, complete, and relevant referencing to reliable sources centre stage in our writing.
Please note that my criticism in this white paper only concerns the way in which authors handle data, references, and referenced data with regard to high levels of uncitedness. It does not extend to the authors’ overall research efforts, which might well be extremely valuable. I would also like to caution readers against scapegoating individual authors for creating the myth of high uncitedness. Doing so would negate our community’s collective responsibility to address poor academic referencing practices and stop sharing amusing horror stories about academia, thus hindering efforts to regain trust in our academic research.
This is a very long white paper, running to nearly 20,000 words. Hence, I have included a detailed Table of Contents below, allowing readers to jump to individual sections.
Table of contents
The case for the prosecution: most academic articles are not cited (and worthless)
- Exhibit #1: Poorly executed analyses published as news items in Science
- Exhibit #2: A journalist reporting on an unknown presentation
… and the academic who traced the origin of this myth (Remler 2014)
The case for the defence (of Science): most articles are cited or have other impact
- Exhibit #3: Uncitedness depends on a wide range of factors
- Document type: comparing apples and oranges
- Time: citation speed differs by discipline
- Time: uncitedness has declined over time
- Data sources: some are more “inclusive” than others
- Number of co-authors: the more authors the more citations
- Publication language: English rules
- Citation accuracy: errors translate into lost citations
- Discipline: accumulation of all prior factors leads to large variance
- Exhibit #4: Empirical verification: Management articles in the 2020s are cited
- Exhibit #5: Uncitedness doesn’t mean lack of impact (or worth)
The verdict: click-bait transformed into “facts” through poor academic referencing
Why do we as academics keep believing the myth?
Introduction
Citation practices lie at the core of our academic interactions; they enshrine in print the conversations we have with each other. Even in the “good old days” before the advent of neoliberalism in academia, academic referencing practices were already problematic (see: Are referencing errors undermining our scholarship and credibility?). In fact, my very first academic article “The persistent myth of high expatriate failure rates” (Harzing, 1995) dealt with the risk of creating myths through inaccurate referencing.
More recently, academics have even been shown to cite non-existent articles (see: The mystery of the phantom reference). There is also mounting concern about high levels of self-citations, gender bias in citations, citation cartels, a lively trade in citations, and – more recently – GenAI making up citations. So, citation practices are definitely not without flaws. But if correct and appropriate referencing is so important, why do we as academics promulgate citation myths ourselves?
One of the more commonly circulated “truths” about citations is that most articles never get cited and thus have negligible impact. Estimates of uncitedness run from about half of all publications (Hamilton, 1990) and three quarters in Business & Management (Hamilton, 1991) to a whopping 90% (Meho, 2007). These figures are reiterated both in academic publications and on social media, often joined by an explicit or implicit assessment that most of what we do as academics is worthless.
Having been heavily involved in citation analysis since the launch of my Publish or Perish software nearly 20 years ago, I have always found this claim hard to believe. It just didn’t match my daily experience. Over the years, I have run analyses for many thousands of academics and journals, and most of their articles were cited at least once. Granted, some uncited publications were always present, but they were typically book reviews, conference abstracts, letters to the editor, short notes, editorials, or calls for papers, all publications that we would not expect to be highly cited or even cited at all. So how can this discrepancy be explained? Is the lack of trust in the quality of our academic research justified or is the existence of sky-high levels of uncitedness another urban legend?
Channelling three of my favourite childhood professions (detective, librarian, defence lawyer), I donned my tools of detection and argumentation and started building the case for and against Science. First, I’ll look at the case for the prosecution: most of what we publish isn’t cited (and is thus worthless?), reporting the evidence for this provocative statement. Then, I’ll present the case for the defence of Science, evidencing that most of our work is cited and that even if it isn’t, this shouldn’t lead us to conclude our work is worthless. I’ll then put on my judge’s hat, weigh up the evidence, and present the verdict. For those of you who can’t wait, the verdict is quite damning for our profession’s academic referencing practices: by continuing to cite – in our own academic work – the dubious articles that formed the basis of the claim for high uncitedness, we have transformed click-bait into “fact”.
I show how we did this through violating at least seven of the twelve guidelines that I presented in Harzing (2022) (see: Are referencing errors undermining our scholarship and credibility?). But why did we feel compelled to do so? I suggest it was because it made us feel good, because it conveniently justified our own research, and because as academics we are only human too, and thus subject to the same flawed information processing we blame “the (wo)man in the street” for. However, the consequences of our behaviour could be devastating for the public’s trust in academic research, especially at a time where experts are not exactly the flavour of the day. So, please, let’s refrain from repeating amusing horror stories that demonstrate the uselessness of our academic research without first seriously questioning their evidence base. If we as academics don’t insist on evidence, who will?
The case for the prosecution (of Science): most academic articles are not cited (and thus worthless)
There are two main sources for the assertion that most of our academic articles are not cited, or not even read, and thus by implication are worthless. Below, I will discuss them as Exhibit #1 and #2. What they have in common is that neither supports its claims with well-articulated methods and evidence. This is not entirely surprising as the “offending” statements were in fact written by journalists. Their claims promptly led other journalists to question academia.
Schwartz (1997:22): Newsweek interpreted ISI’s reports [AWH: Hamilton’s 1990/1991 articles] to mean that nearly half the scientific work in this country is basically worthless. It went on to call for more rigorous standards in federal funding programs and wound up describing scientists, with their belief in their God-given right to taxpayer dollars, [as] welfare queens in white coats.
Exhibit #1: Poorly executed analyses published as news items in Science
The first source of our case for the prosecution consists of two brief pieces in the News & Comments section of Science, published in 1990/1991, i.e. 35 years ago.
- Hamilton D.P. (1990) Publishing by – and for? – the Numbers, Science 250 (4986): 1331-1332
- Hamilton D.P. (1991) Research papers: Who’s uncited now? Science 251(4989): 25-25.
The articles in question were published by Science journalist David P Hamilton, who wrote no fewer than 271 short pieces for Science between April 1990 and November 1992. Of these, the above two are – with 183 and 135 registered citations in Science’s online system – two of his three most cited articles. Apart from another highly cited piece about Saturn’s E Ring (135 citations), there is only one more article – expressing doubts about scientific integrity – that reaches a double-digit citation level. The remaining articles either have no citations (around 60%), one citation (around 19%) or 2-6 citations (around 19%). This is to be expected for news items and clearly demonstrates why assessing the level of uncitedness of publications without taking the type of publication (e.g. full-length article, research note, letter to the editor, news item) into account is unwise. We will discuss this in much more detail as Exhibit #3 in our section “The case for the defence of Science”.
Ironically, these articles are not even Hamilton’s most read pieces, with the 1990 article ranking at #11 and the 1991 article ranking at #23. Out of the 10 most read articles, 6 have no citations, whilst the other four have 6, 1, 2, and 1 citations respectively. This demonstrates that citations and reads are not always directly related, and that an article can have an impact even if it is not cited. We will return to this observation as Exhibit #5 in our section “The case for the defence of Science”. Judging by the unusually high number of citations, it is clear though that Hamilton’s articles struck a chord with academics. So below we will first discuss his two news pieces, the two rebuttals to them, and the citations to these four pieces in some detail.
Publishing by – and for? – the Numbers
Based on Web of Science (WoS) citation data – provided by the Institute for Scientific Information’s (ISI) analyst David Pendlebury – showing that “55% of the papers published between 1981 and 1985 in journals indexed by the institute received no citations at all in the 5 years after they were published” (Hamilton, 1990), David Hamilton leads his news piece with the statement: “New evidence raises the possibility that a majority of scientific papers make negligible contributions to knowledge” (see image). Later in the article he asks the provocative question: “Does this mean that more than half - and perhaps more than three-quarters - of the scientific literature is essentially worthless?” (see image).
Hamilton asked twenty academicians, federal officials, and science policy analysts for their reactions to the WoS citation data. Those mentioned in Hamilton’s article all work in the Life or Natural Sciences, Science’s main audience. Two cautioned against overinterpreting the 55% figure; another said he didn’t know what to make of it. However, the eight remaining scientists featured in the article seemed to immediately jump to conclusions and were quick to relate uncitedness to increasing publication volumes. Here are some of their views:
"It is higher than I'd have expected," "It indicates that too much is published. A lot of us think too much is published."
"It is pretty strong evidence of how fragmented scientific work has become, and the kinds of pressures which drive people to stress number of publications rather than quality of publications."
“My God! That is fascinating -- It's an extraordinarily large number. It really does raise some serious questions about what it is we are doing.”
"It does suggest that a lot of work is generally without utility in the short-term sense."
“There are obvious concerns which are worrisome – namely that the work is redundant, it’s me-too type of follow-on papers, or the journals are printing too much.”
Hamilton ends the article with a teaser for his second article, which he promises will report on uncitedness by discipline and says, “it’s bad news for social scientists”. In that context, it is interesting that voices from academics in the Social Sciences and Humanities are completely absent from both of Hamilton’s articles. Many academics in these disciplines could have told him instantly that there are good reasons why the WoS doesn’t report as many citations for the Social Sciences and Humanities as for the Life and Natural Sciences. We will discuss these reasons in more detail later. First, however, let’s move on to Hamilton’s second article.
Research papers: Who’s Uncited Now
Hamilton’s second article – published a month after the first – differentiates the level of uncitedness by discipline and is framed in a – to me – rather distasteful way. He starts the article with “Scientists who like to one-up their colleagues in other disciplines can now do so in a new way” (see image). This appears to be the main focus of the article, i.e. to create some sort of ranking of levels of uncitedness by discipline, where higher-scoring disciplines can look down with glee on lower-scoring disciplines for out-performing them in the citation game.
It is a smaller scale study than Hamilton (1990), looking only at articles published in 1984 (rather than 1981-1985) and citations for the next four (rather than five) years. The very short report – barely more than half a page – mainly lists levels of uncitedness for a large number of sub-disciplines. Several very puzzling and anomalous results are reported, but apparently these didn’t lead either the ISI analyst or the reporter to think twice about them or prompt any further investigations. Engineers and Social Scientists come out at the bottom of the hierarchy but are also advised to “take heart” that they can at least boast about being better than academics in the Arts & Humanities (see image).
Pendlebury’s (1991) rebuttal: Science, Citation, and Funding
- Pendlebury D.A. (1991) Science, Citation, and Funding, Letter to the editor. Science 251: 1410-1411.
Hamilton’s two articles report on analyses done by David Pendlebury, research analyst at ISI. Interestingly, Pendlebury immediately published a 2-page letter in Science distancing himself from the conclusions that Hamilton drew from the analysis, explaining that the quoted uncitedness figures include many types of publications that would be expected to remain largely uncited, such as editorials, obituaries, letters, meeting abstracts and other marginalia. Apparently, this differentiation by document type was not yet available when Hamilton wrote up the articles.
Excluding marginalia reduces both the overall level of uncitedness and the differences in the level of uncitedness between disciplines as marginalia are far more popular in the Humanities and Social Sciences than in the Life Sciences and Natural Sciences. Pendlebury also indicates that articles published in the highest impact journals are almost never left uncited and that “A certain level of "uncitedness" in the journal literature is probably more an expression of the process of knowledge creation and dissemination than any sort of measure of performance.” Pendlebury ends the article with “We hope this information clarifies the record and will end further misunderstanding or politicalization of these statistics.”, thus explicitly discouraging value judgments about the worth of academic research.
Pendlebury’s letter was joined by eight much shorter letters – mostly consisting of just one or two paragraphs – expressing similar concerns about differences in uncitedness by document type, as well as other factors influencing uncitedness, such as discipline and publication language, which will all be discussed in Exhibit #3 for the Defence of Science.
Garfield’s (1998) dream: preventing perpetuation of errors about uncitedness
- Garfield, E. (1998) I had a dream ... about uncitedness. Scientist, 12(14): 10, 6 July 1998.
In an article in the Scientist, Eugene Garfield, the founder of the Science Citation Index, talks about his dreams in the 1960s that his new index would be used to avoid unwitting perpetuation of errors (Garfield, 1998). When that didn’t happen, he retained the hope – at the time of publication in 1998 – that World Wide Web access to the Index might accomplish this. In the meantime, he groans about citation errors being perpetuated year after year.
Garfield discusses Hamilton’s two articles as a case in point, indicating how the journalist “zealously criticized scholarship in the sciences and social sciences, but especially in the arts and humanities”. Garfield laments that Hamilton’s “misguided reports on uncitedness have unduly influenced many scholars and policy makers ever since”.
As Hamilton’s claims continued to be cited – despite Pendlebury’s strong rebuttal seven years earlier – Garfield used his Dream article to recap most of Pendlebury’s letter, as if to give it additional weight by republishing the content of the letter as a separate article and attaching his fame to it. Garfield not only reminds the reader of the importance of considering different document types but also warns his audience that typographical errors may defy easy unification of citations. It is particularly ironic then that most of the references to Garfield’s Dream article were incorrectly matched in the Web of Science (see below for more details).
How far have Hamilton’s claims and their rebuttals travelled?
Hamilton’s claims of uncitedness were based on two 1-2 page news items published in Science 35 years ago, reporting on the number of citations of articles published 40-45 years ago. Remember, this was a time when the World Wide Web didn’t exist yet and personal computers were only starting to become widespread. To cite these claims as current-day “evidence” that half to three quarters of our academic articles remain uncited (forever?) is quite a stretch. Still, this didn’t stop many academics from doing exactly this. Although WoS citations to Hamilton’s articles have dropped off a bit in the last five years, 32 articles published in the 2020s still refer to one of the two articles.
As reviewing the entire citation network of 365 articles in the Web of Science would be too time-consuming for the purpose of this white paper, I sampled the last ten citing articles that were accessible to me in full text. If a citing article included references to both Hamilton articles, it was only counted for the first and another article was included for the second. If the same author was represented more than once in the sample, a replacement article was chosen too. This small-scale investigation (see below for details) suggests that half of these 20 articles reference the two articles in a way that leads readers to conclude that most of our research remains uncited, whereas up to six others provide at least some scope for this conclusion.
If this pattern holds for the full set of 365 referencing articles, anywhere between 180 and 290 articles in the Web of Science would be perpetuating this claim. Given that Google Scholar finds more than 700 articles referencing one of the two Hamilton news pieces, and that there may well be additional articles making the same high uncitedness claims through empty referencing – i.e. citing, as evidence, articles that merely refer to Hamilton without providing citation data of their own – it is likely that this claim has spread very widely indeed.
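For transparency, here is a minimal sketch – in Python, using the counts from my sample of 20 articles – of the back-of-the-envelope extrapolation above; the sample proportions are rough estimates, so the resulting range is indicative only.

```python
# Back-of-the-envelope extrapolation from my sample of 20 citing articles to
# the full set of 365 Web of Science records citing Hamilton (1990, 1991).
# Illustrative only: the sample proportions are rough estimates.

SAMPLE_SIZE = 20
CLEARLY_PERPETUATING = 10    # reference Hamilton as evidence of high uncitedness
POSSIBLY_PERPETUATING = 6    # leave at least some scope for that conclusion
TOTAL_CITING_ARTICLES = 365  # 200 (Hamilton, 1990) + 165 (Hamilton, 1991)

low = CLEARLY_PERPETUATING / SAMPLE_SIZE * TOTAL_CITING_ARTICLES
high = (CLEARLY_PERPETUATING + POSSIBLY_PERPETUATING) / SAMPLE_SIZE * TOTAL_CITING_ARTICLES

print(f"Estimated WoS articles perpetuating the claim: {low:.0f} to {high:.0f}")
# roughly 180 to 290
```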
The two counterclaims by Pendlebury and Garfield were cited far less frequently in the WoS, 72 and 55 times respectively. Again, I decided to sample the last ten citing articles for each to establish the citation context, and I did in fact do so for Pendlebury (1991). However, when the last ten citing articles for Garfield’s 1998 article showed major data linking problems in the Web of Science, I went on to verify all 55. The detailed investigation of these counterclaims is reported below, but in short, I conclude that only around 40 of the 127 citing articles represented the content of these two critical pieces correctly and cited Pendlebury’s and Garfield’s counterclaims in ways that contributed to debunking the original myth.
Hamilton (1990)
To date (22 Sept 2025) Hamilton’s 1990 article has acquired 200 citations. As indicated above, I reviewed the ten most recent of these that I could access in full text. Out of these ten, two used Hamilton’s article to evidence very generic aspects of citation analysis and/or the academic publishing process, often embedded in a string of other references. Arguably, a short news item about uncitedness is not the best reference for this, but that’s not our primary concern here. Another included Hamilton in the reference list but didn’t actually cite the publication. Bad practice, but again not our primary concern here.
However, there are also seven articles that repeat the journalist’s claim of high uncitedness. Fortunately, two of those (#2 and #5) subsequently refute this claim through their own empirical research. Even so, someone intent on proving their point that our academic work is largely uncited would have plenty of secondary references to choose from, especially if they did not read the entire article carefully. The relevant sentences are replicated below in reverse chronological order.
- Donovan (2025:2): Perhaps even more distressing for authors has been the finding that the almost desperate effort to increase one’s citations plays out against research demonstrating that approximately 90% of published papers in academic journals are never cited by others, ever (Hamilton, 1990, 1991; Meho, 2007: 32).
- Dias et al. (2023:1) A relatively old and highly debated article in Science (Hamilton, 1990) estimated that a surprisingly high number of articles have no citations. [Goes on to cite more recent evidence that challenged this view and reports on an empirical study that shows uncitedness in Operations Research/Management Science ranges from 1.05 to 3.67%]
- Betancourt et al. (2023:4) While some articles quickly accrue many citations, it is also the case that most published research is forgotten (Hamilton, 1990).
- Singh (2022:6) While academic publishing is important to showcase credibility within academia, published articles themselves can have very little influence outside of, and even within, the narrow domains of a given academic field–estimates of the proportion of uncited peer-review papers vary from 10 to 65% (Hamilton, 1990; Van Noorden, 2017).
- Baruch et al. (2022:1135) It was found that 55% of papers published between 1981 and 1985 had not been cited at all in the first five years after publication (Campanario 1993; Hamilton 1990), but these works did not clarify what they considered a ‘paper’ (for instance, book reviews and editorials were included). This would have helped substantiate the claim that many published papers remain uncited. [Goes on to report on an empirical study that shows that uncitedness reaches 6.5% at most]
- Arnesen (2020:188) We began our exploration of these uncited and low-cited articles by trying to understand exactly how common the phenomenon actually [is]. For example, in an article in Science magazine, Hamilton (1990) suggested that 55% of the articles published in journals indexed by the Institute for Scientific Information (ISI) from 1981 and 1985 were uncited in the five years after their publication.
- Thyer et al. (2019:698) A substantial proportion of scholarly articles are never cited (Hamilton, 1990), further highlighting the significant impact the authors of very highly cited papers exert on the field.
Hamilton (1991)
To date (22 Sept 2025) Hamilton’s 1991 article has acquired 165 citations. Of the ten most recent articles accessible to me in full text, all repeated the journalist’s claim of high uncitedness in some way. Fortunately, two of these [#2 and #3] also actively refute the myth, whereas another two [#6 and #10] leave at least some ambiguity by citing additional sources. Even so, someone intent on proving their point that our academic work is largely uncited would have plenty of secondary references to choose from, especially if they did not read the entire article carefully.
- Örtenblad & Koris (2025:3): On top of an overproduction of research outputs in a global perspective, too many papers contain similar arguments and over half of academic papers (three-quarters in the area of management studies) remain uncited (Hamilton, 1991).
- Lin et al. (2024:522) Hamilton (1991) reported that 55% of published papers between 1981 and 1985 covered in the Institute for Scientific Information (ISI) analysis received no citation in the first five years following the publication. Hamilton’s essay was later criticized for being misguiding but influential (Garfield, 1991).
- Frachtenberg (2023:9) Some early studies claimed that, generally, most scientific papers are not cited at all (Hamilton, 1991; Jacques & Sebire, 2010; Meho, 2007). More recent research found that the rate of uncited papers keeps decreasing and estimates it to be less than 10% (Wu, Luesukprasert & Lee, 2009)
- Stec (2023:30) High numbers of uncited papers have been identified in previous studies on publication practices in the humanities and social sciences. (Hamilton, 1991)
- Das & Dutta (2022:14) Hamilton reported that on average of 4%, 74.7%, and 98% uncitedness in the disciplines of science, social science and arts & humanities respectively in 1991, while the picture has still remained unchanged even after 30 years.
- Hubbard & Vaaler (2021:2) The level of citedness has been reported in various ways with the percentage of articles cited in the major disciplines being approximately 2–20% for the humanities, 25–70% for the social sciences, and 53–78% for the sciences (Hamilton, 1991; Larivière et al., 2008; Schwartz, 1997).
- Doleys (2021:545) There is already evidence to suggest that a great deal of published scholarship in the field is little used (Hamilton 1991; King 1995).
- Stoller (2020:82) Slightly less than half of all natural science articles receive citation (Hamilton).
- Hancock & Price (2020:217) Other unique attributes of research in these disciplines [AWH: Arts & Humanities] include the production of many studies that remain uncited for years (Hancock, 2015; Hancock & Price, 2016), if ever (Hamilton, 1991).
- Hernández-Torrano (2019: 879) This should be seen as a symptom of the relative good health of the field, considering its emerging status and that the level of uncitedness in Social Sciences, for instance, can vary between 21% (Glänzel et al. 2003) and 75% (Hamilton 1991), depending on the sub-discipline and method of measurement.
Pendlebury (1991)
To date (22 Sept 2025) Pendlebury’s article has acquired 72 citations in the Web of Science. I sampled the most recent ten citing articles that I could access in full text. Out of these ten, two used the article to evidence very generic aspects of citation analysis and/or the academic publishing process, combined with other references. Arguably, a short letter about the myth of uncitedness is not the best reference for this, but that’s not our primary concern here.
Out of the remaining eight articles, four (#2, #5, #7 and #8) referenced Pendlebury’s cautionary note correctly, i.e. in a way that debunked high levels of uncitedness, mostly referring to the importance of considering varying citation levels for different document types. Two other citing articles (#1 and #4) referred to the citation statistics corrected for document type as mentioned in Pendlebury’s article. Though correct, they didn’t represent the core message of his article and – most likely – would still lead readers to assume levels of uncitedness are very high. Also, given that the citing articles were published 28-33 years after the Pendlebury letter, this hardly provides us with concrete evidence of current-day levels of uncitedness.
Two articles cite Pendlebury’s article for statements that are demonstrably incorrect (#3 and #6). The statement that a lack of funding has consequences for research was not mentioned in Pendlebury’s letter at all (though it was mentioned in one of the other letters). Likewise, the statement that authors only cite a small fraction of the material they read or that is influential is not found in Pendlebury. As far as I can assess, both statements are true, but they cannot be attributed to Pendlebury.
That said, four to six of the ten statements do in some way critically comment on high levels of uncitedness, the key message of Pendlebury’s article. If this ratio holds for the full set of 72 citing articles, between 29 and 43 of them place caveats on high levels of uncitedness.
- Hagiopol & Leru (2024:20) However, fraud represents an extreme case. At the other end, the contributions contain at least one "cognitive extent". In between lie numerous instances of mediocrity. For example,4% of scientific articles published in 1984 remained incited [sic] by the end of 1988, with even higher percent observed in social sciences (48%) and arts and humanities (93%) articles (Garfield, 1998; Pendlebury, 1991). [AWH This is in fact Garfield quoting Pendlebury].
- Lin et al. (2024:521) Pendlebury (1991) posited that the uncited rates reported in Hamilton’s essay were correct, but the interpretations of these rates were not. Pendlebury argued that knowing what is in the numbers is necessary before interpreting them and drawing a conclusion. Pendlebury suggested several factors which could contribute to the uncitedness rate, including field, document type, country, and time. Pendlebury concluded that the uncitedness phenomenon was not as alarming as Hamilton previously suggested.
- Dorta-Gonzalez & Dorta-Gonzalez (2023:1361) However, some papers discussed the lack of funding and the consequences on research, especially on topics that are not perceived as having a high impact (Pendlebury, 1991).
- Katchanov et al. (2023:2) A famous Science article pointed out that 22.4% of science articles remained uncited, as did 48.0% of social sciences articles and 93.1% of those in arts and humanities (Pendlebury, 1991).
- Das & Dutta (2022:14) However, Pendlebury explained the high percentage of uncitedness of SCIjournals. The SCI journals contained not only articles but also other forms of documents like reviews, notes, meeting abstracts, editorials, obituaries, letters, etc., which were, by and large, remained uncited. Pendelbury [sic] defined uncitedness from the viewpoint of ISI's journal coverage.
- Stewart (2019:17) Some analysts claim that authors in many branches of science cite only a small fraction (one-third, on average) of the material that they read or that is influential, and thus citation counts may not reflect an author's honest “usage” of the literature (e.g., MacRoberts and MacRoberts, 1986, 2010; Pendlebury, 1991).
- Nicolaisen & Frandsen (2019: 1227) However, as pointed out by Pendlebury (1991: 1410), the numbers are misguided as they cover all document types including letters, meeting abstracts, editorials, obituaries, and “other marginalia, which one might expect to be largely uncited”.
- de Araujo Lima Barbosa et al. (2019:9) In fact, the number of citations potentially reflects the process of creation and dissemination, which differs between the areas of knowledge (Pendlebury, 1991), the periods covered and the data sources, among other factors.
Rejoinder by Garfield (1998)
To date (22 Sept 2025) Garfield’s article has acquired 55 citations linked to it in the Web of Science. I initially sampled the last ten citing articles only. However, when reviewing the full text of these articles I found that eight of these ten didn’t actually cite the Dream article. Instead, they cited another 1998 article by Garfield (Long-term vs. short-term journal impact: Does it matter? The Scientist, 12(3), February 2, 1998) and in one case even a 2006 article (The history and meaning of the journal impact factor. JAMA, 295: 90–93.)
As I thought this might be just a fluke occurring for recent records, I subsequently verified all 55 articles listed in the Web of Science as citing the Dream article. I was unable to source the full text for eight of them, but for five of these I could deduce from the context (i.e. the only topic of the article being the journal impact factor) that they must have cited the journal impact article rather than the Dream article. Out of the remaining 47, no fewer than 36 cited the journal impact article. Hence, 41 out of the 52 citing articles that I could verify didn’t cite the Dream article at all. The Web of Science seems to have conflated two of Garfield’s articles, thus again illustrating how data inaccuracies can impact on the level of (un)citedness.
Below I report all remaining eleven articles that did in fact cite the Dream article. Out of these, only #4 and #7-10 unambiguously supported Garfield’s critical message. Two others (#1 and #6) reported content of the article correctly, but the conclusions drawn did not represent the key message of Garfield’s article, i.e. that academics keep perpetuating myths of high levels of uncitedness. Three more (#2, #3, and #5) referenced the article as a fairly generic reference on citation analysis, whereas a final one (#11) in fact attributes a statement to Garfield that is the complete opposite of what he claimed. So, in all, only five out of the 52 articles listed in the Web of Science as citing Garfield’s dream of eradicating citation errors cited it in a way that reflected his key message. Garfield would turn in his grave if he knew…
- Hagiopol & Leru (2024:20) However, fraud represents an extreme case. At the other end, the contributions contain at least one "cognitive extent". In between lie numerous instances of mediocrity. For example,4% of scientific articles published in 1984 remained incited [sic] by the end of 1988, with even higher percent observed in social sciences (48%) and arts and humanities (93%) articles (Garfield, 1998; Pendlebury, 1991). [AWH This is in fact Garfield quoting Pendlebury, so to report this as two separate studies is inappropriate].
- Whitescarver et al. (2023:8) Some journal’s percentages of uncited articles percentages have been shown to vary as much as 80 percent depending on which types of articles are included in the calculation (Garfield, 1998, Tainer, 1991).
- Ranasinghe et al. (2015: 176) We focus on the cardiovascular literature because citation patterns can be variable between subspecialties with autonomous journal networks (Macroberts & Macroberts 1989, 2010, Garfield, 1998).
- Jones (2015:805) The point was also made by Garfield (1998) that a small group of journals account for the vast majority of significant research publications, and the overwhelming majority of articles published in the 200 journals with the highest cumulative impact are cited within a few years of publication, and after five years, uncitedness is almost nonexistent.
- Hu & Wu (2014:137) Some relevant classical reviews (Garfield, 1972, 1991, 1998; Hamilton, 1990, 1991) bring us up to date.
- Law et al. (2013:665) For instance, Garfield (1998) stated that 4% of science journal articles published in 1984 remained uncited within the next few years, while the corresponding percentage for social science was 48.0%. [AWH This is in fact Garfield quoting Pendlebury, so to report this as two separate studies is inappropriate].
- Sen & Patel (2012:107) This was reiterated by Garfield (1998) who pointed out that uncitedness was “almost nonexistent” 5 years after publication in the 200 journals with the highest impact.
- Samuels (2011:783) When the ISI actually calculates impact scores for its Journal Citation Reports, it only counts citations to research articles and review essays (Garfield 1998).
- Yang et al. (2010:756) Garfield (1998) claimed it is necessary to know the content of these uncited papers and define clearly the variables before interpreting them.
- Larivière et al. (2009:858) Though several empirical studies have challenged this belief [AWH: of high levels of uncitedness] (Abt, 1991; Garfield, 1998; Pendlebury, 1991; Schwartz, 1997, Stern, 1990, Van Dalen & Henkens, 2004), no study has as yet measured the changes in the proportion of cited/uncited articles over a long period of time.
- Herrera-Viedma (2006:538) Some studies show that many articles published in printed books/journals are never cited in any subsequent article; this means that printed articles have a low impact on the development of subsequent new ideas (Garfield, 1998; Hamilton, 1991a; Hamilton, 1991b; Pendlebury, 1991).
Exhibit #2: A journalist reporting on an unknown presentation…
The second main source of the claim that most of our academic articles aren’t cited comes from a contribution by Lokman Meho, a prominent information science researcher. His overview article about the challenges of citation analysis in Physics World, a member magazine of the Institute of Physics, features the following abstract (see image).
“A sobering fact that some 90% of papers […] are never cited”. This is a damning verdict and, coming from someone who is an expert in citation analysis – Lokman Meho – it obviously carries a lot of weight. But is it true? In a blogpost at the LSE Impact of the Social Sciences blog, Dahlia Remler (2014) asked exactly that, given that there was no reference to substantiate the provocative statement. She contacted Meho by email, who related to her that:
“The first paragraph of the article was written by the editor of the magazine and not me. If I recall correctly, he got the figures from/during a lecture he attended in the UK in which the presenter claimed that 90% of the published papers goes uncited and 50% remains unread other than by authors / referees / editors.”
So, we are none the wiser, as the (unknown) presenter didn’t provide evidence either (or at least we are not told about this). It might well have been a throwaway comment said in jest after a formal presentation. The practice of a journalist contributing the title and introductory teaser for a paper written by an academic, but appearing in a non-academic outlet, is very common, although the academic author of the piece is often uneasy about it. However, in this case it went beyond academic unease and spawned a very unfortunate two-part myth.
The first myth – that 90% of published academic articles go uncited – is one that everyone who has any common sense, or even just the willingness to run a few simple citation searches, can verify as untrue in mere seconds. The second – that 50% of articles remain unread – is harder to investigate systematically and is not the main topic of this white paper. However, given that most journals now have views / downloads figures, a quick check for any decent journal demonstrates that most articles accrue double or even triple-digit views within a week of being published online first. I describe a small-scale experiment here: Improve your Research Profile (8): Tips for time poor academics.
… and the academic who traced the origin of this myth (Remler 2014)
As reported in the previous section, Dahlia Remler didn’t just scratch her head when she read the overblown claim that 90% of articles in academic journals are never cited. She traced the source of this statement and found it to result from “embellished journalism” (see image above and: https://blogs.lse.ac.uk/impactofsocialsciences/2014/04/23/academic-papers-citation-rates-remler/).
She also goes on to cite another study (Larivière & Gingras, 2007, later published as Larivière, Gingras & Archambault, 2009) that reports that only 12% of the Medical Science articles are not cited, whereas this is the case for 27% of the Natural Sciences & Engineering articles, 32% of the Social Science articles, and 82% of the Humanities articles. These authors looked at 2-year and 5-year citation rates for articles published between 1900 and 2002 and included not only articles and reviews but also notes in their analysis. The data reported by Remler are the 5-year figures. The original authors indicate that levels of uncitedness after 10 years are lower, but no exact data are reported. They also show that levels of uncitedness have dropped dramatically over time and especially since the 1980s (i.e. the era from which the original Hamilton claims date).
How far have Meho’s claim and Remler’s rebuttal travelled?
Meho 2007
To date Meho’s 2007 article has acquired 217 citations in the Web of Science, with – on 22 Sept 2025 – fifty citations since 2021 alone. Fortunately, the majority of these citations appears to refer to the substantive part of his article. They are used to evidence various aspects of the science and practice of citation analysis. In that sense, this source is very different from the two Hamilton news items that had uncitedness as their only topic.
Even so, there are still quite a few recent articles repeating the journalist’s unsubstantiated teaser claim, which unfortunately morphed into the article’s abstract. This makes it quite likely that the claim of 90% uncitedness was picked up as the article’s main contribution by casual readers. As above, I reviewed the most recent ten articles citing Meho’s work that were available to me in full text. Of these, six referenced various general statements about citation analysis, but the four below referenced the unsubstantiated citation claims.
- Donovan (2025:2): Perhaps even more distressing for authors has been the finding that the almost desperate effort to increase one’s citations plays out against research demonstrating that approximately 90% of published papers in academic journals are never cited by others, ever (Hamilton, 1990, 1991; Meho, 2007:32).
- Orbay et al. (2025:2): As the volume of daily published articles grows exponentially, citation numbers take on heightened significance as they provide a means to evaluate article quality. As a result of the citation analysis, it has been observed that 90% of the articles are not cited at all and 50% of them are not read except by the authors, editors, and reviewers (Meho, 2007).
- Antonovskiy (2023:249) The institution of network expert observers, which has emerged today and generates "citations", "reviews", "likes", "downloads", "readings", "recommendations" and "inclusions in follower groups", the corresponding " scientific indicators" of this or that scientist, do not notice or ignore most of the scientific product produced by the scientific organization. Thus, up to 80 percent of scientific texts do not receive any citations, and half are not honored and read. (Meho, 2007).
- Wassénius (2023:2769) Academics today have an ever-growing stream of peer-reviewed papers coming their way (Larsen and Ins 2010), and it is argued that many are never read (Meho 2007).
Moreover, as the abstract of the article only shows the erroneous claim, it is likely that this claim has lodged itself in many readers’ minds even if they didn’t actually cite the article, and has thus become part of academic folklore, regularly regurgitated on social media and bandied about at conferences.
Remler 2014
Remler’s blogpost conclusively refuted the claim that 90% of academic articles are not cited. As it is a blogpost, her work is not listed in the Web of Science. However, citations from Web of Science listed publications to non-listed publications can be searched in the Web of Science Cited Reference function. As I found only 15 references to Remler’s post here, I verified them all. The result was quite shocking. Only four of the fifteen citing references (#2, #8, #10 and #15) correctly referenced her work and reproduced her finding that there is no evidence for the 90% uncitedness claim. One article used her blogpost to evidence a very generic discussion of the use of journal-level metrics for tenure and promotion decisions (#9).
More worryingly, six articles (#4, #6, #7, #11, #12 and #13) reference Remler in a way that is at least dubious. They mostly repeat the data provided by Larivière & Gingras (2007/2009) that Remler reports in her post but present them as if they are Remler’s own findings. They also still leave the reader under the impression that levels of uncitedness are very high. A further four articles (#1, #3, #5 and #14) go even further and actively promulgate the myth that Remler discards in her article, i.e. that most articles are not read or cited. A dire result indeed!
- Baldwin (2023:195) Already now, with vast and growing research output, complaints are heard that the average article is uncited, with the implication that it is also unread. (Remler, 2014)
- Baruch et al. (2022:1135) Aksnes and Sivertsen (2004, 214) suggest ‘that the majority of articles are seldom or never cited at all’. [AWH: there is no reference supporting this]. Are such claims based on valid evidence or have they become stylised facts – i.e. the so-called academic urban legends (Vandenberg 2006)? Alternative views suggest that the phenomenon is not prevalent and that there is scant evidence for such claims in different fields (for a summary of the debate, see Remler 2014).
- Spinks et al. (2020:4) While estimates vary widely on the proportion of unread or uncited studies, a great deal of quality, peer-reviewed research does not make it far beyond its original source of publication. (Biswas & Kircher, 2025, Remler, 2014)
- Green (2019:18) These numbers depend on the definition of ‘success’, but it is worth considering that 15% of recent papers remain uncited by anyone other than their authors (van Noorden, 2018), and non-citation rates range from 12 to 32% for the sciences and social sciences (Remler, 2014).
- Bin-Obaidellah & Al-Faghi (2019:4) According to Remler [2014] there are many academic articles published and never cited which leads them to be valueless.
- Marinetto (2018:1016) Dahlia Remler (2014) in her LSE impact blog questions the supporting evidence for these headline statistics, but she does not question the plain fact that many academic papers go unnoticed. Remler concedes that around a third of articles in the social sciences are left uncited.
- Al Gharbi (2018:500) Studies suggest that a majority of journal articles are likely never read by anyone other than their editors and authors (Eveleth 2014) [Note: this is a magazine article that refers back to the Meho article]. 82% of articles in the humanities – and roughly a third of all articles published in the social sciences – are never cited, even once (Remler 2014).
- Heinisch (2018:77) Even if claims that 90 percent of social science articles remain uncited and most are never even looked at are exaggerated (Remler, 2014), we all know that many publications are read—if at all—by only a handful of specialists.
- Servaes (2018:3) In Servaes (2014), we endorsed the so-called San Francisco Declaration on Research Assessment (DORA http://am.ascb.org/dora/ ), which recommends not to use “journal-based metrics, such as Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”. The discussion remains controversial (see, for instance, Lattier, 2016; Remler, 2014, or Verma, 2015).
- Amath et al. (2017:939) This finding that most of the articles examined in this study were cited runs counter to the perception amongst some academics that most papers are never cited (Cabrera, 2015). [AWH: this is a blogpost which states this without a reference or further details]. In fact, the ‘most papers are never cited’ notion has been mistakenly attributed, particularly on social media, to findings from a research study that does not exist, as discussed elsewhere. (Remler, 2014).
- Abulof (2017:508) Nearly all academic works eventually fall into oblivion, some instantaneously. One third of social science articles go utterly uncited (Remler 2014).
- Reisman (2017:97) Dahlia Remler, professor at the Baruch College School of Public Affairs, tried to find the facts behind the claim that only “90 percent of academic papers [are] really never cited.” She found that this statistic really applies to humanities publications. However, the nonclinical sciences’ uncited rate—nearly 27 percent—is still nothing for us to be proud of. My own favorite perspective on this topic is from the Academia Stack Exchange in a discussion on whether people are even reading journal articles. According to one of the posts there, “Most journal papers are read at most twice: … once by the author, and … once by the three referees” [Note: this is a variant on Meho].
- Mizrahi (2017: 358) As far as research impact is concerned, while 12% of medicine articles, 27% of natural science articles, and 32% of social science articles are never cited, 82% of humanities articles are never cited (Remler 2014).
- Jackson (2016:256) The average article, within the myriad of articles and journal, is only read by a small handful of people (Remler, 2014), so we may find a better dissemination tool in the Internet.
- Powell (2016:650) However, Remler (2014) points out that no evidence is produced for this assertion, and she found another study that suggested that non-citation rates vary enormously by field, with 32 per cent of social sciences articles uncited.
The case for the defence (of Science): most articles are cited, but even if they are not, they have impact
We have seen that our witnesses for the prosecution weren’t exactly expert witnesses; they were in fact journalists writing largely for effect. So, let’s call the witnesses for the defence of Science and see whether they can do better. Their defence will take three avenues: situational (uncitedness depends on a wide range of factors), empirical (recent studies show uncitedness is very low), and fundamental (uncitedness doesn’t equate to a lack of worth).
Exhibit #3: Uncitedness depends on a wide range of factors
Our first exhibit for the defence of Science looks at the many situational factors that impact on citations and thus are essential for interpreting the level of uncitedness. Here we briefly discuss eight factors that impact on the level of uncitedness: document type, citation speed, timing of data collection, data source, number of co-authors, publication language, citation accuracy, and discipline.
Document type: comparing apples and oranges
The data Hamilton used covered not just academic articles, but also many other document types, including not only editorials, notes and letters, but also obituaries, meeting abstracts, book or resource reviews, retractions, and other “marginalia”, which are not generally cited. Hence, if we want to study uncitedness we should focus on the type of documents that are intended to be built upon by future researchers, i.e. full-length journal articles. Pendlebury’s (1991) response letter to Hamilton explained that whilst in the Sciences these marginalia made up only 27% of the items indexed, in the Social Sciences and Humanities they made up 48% and 69% respectively.
Possibly even more importantly, different publication types dominate in different disciplines. Whereas journal articles are at the top of the publication hierarchy in the Sciences, conference papers are very important in some of the Engineering disciplines, and books still rule in the Humanities and some of the Social Sciences (see also: Own your place in the world by writing a book). As the Web of Science has limited coverage of non-journal publications and had even lower coverage of them four decades ago, Pendlebury’s data thus didn’t include the most important publications in some of the disciplines.
Hence, both overall levels of uncitedness and disciplinary differences in uncitedness depend on correctly identifying document types that can be expected to be cited and thus excluding “marginalia”.
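To make this concrete, here is a minimal sketch in Python of how excluding marginalia changes the apparent level of uncitedness; the records and document types below are invented for illustration and are not Pendlebury’s actual data.

```python
# Illustrative only: invented records showing how including "marginalia"
# (editorials, letters, book reviews, etc.) inflates apparent uncitedness.

records = [
    {"type": "article",     "citations": 12},
    {"type": "article",     "citations": 3},
    {"type": "article",     "citations": 0},
    {"type": "editorial",   "citations": 0},
    {"type": "letter",      "citations": 0},
    {"type": "book review", "citations": 0},
]

MARGINALIA = {"editorial", "letter", "book review", "meeting abstract", "obituary"}

def uncited_share(items):
    """Share of items that received zero citations."""
    return sum(1 for r in items if r["citations"] == 0) / len(items)

articles_only = [r for r in records if r["type"] not in MARGINALIA]

print(f"Uncited, all document types: {uncited_share(records):.0%}")        # 67%
print(f"Uncited, articles only:      {uncited_share(articles_only):.0%}")  # 33%
```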
Time: citation speed differs by discipline
Time is a huge factor in citation frequency. Given enough time most articles will eventually be cited. However, most citation analyses have a relatively short “citation window” that is tailored towards fast-moving disciplines such as microbiology or biophysics and incorporates only citations in the 4-5 years (or even just 2 years) following publication. Short citation windows are appropriate for disciplines characterised by a pattern of frequent publication of relatively short data-oriented articles, combined with fast review cycles. However, articles in the Social Sciences and Humanities typically are much longer, less formulaic, and more oriented towards theory and/or narrative, and thus take much more time to write. They also spend much more time in the review process. A blogpost on the Scholarly Kitchen reports on a comparative study which showed that Business & Management had one of the longest reviewing cycles (see Figure 1 below).
Figure 1: Comparison of review times (in days) between disciplines [source Scholarly Kitchen blog]
Remember that this study only looked at the review cycle for a single journal submission. In Business & Management it is not uncommon for academics to see their articles rejected by multiple journals before finally being published. Hence, the total journey from first version to publication might easily be 4-5 years or even longer (see also: CYGNA: Our 4th Christmas meeting - failure & fun for a story of a 10-year journey). Some authors might be diligently updating their references during this long journey, but others might not. In any case, it is likely that the authors’ coverage of the literature published at the start of their publication journey is more comprehensive than their coverage at the end of their journey.
Hence, both overall levels of uncitedness and disciplinary differences in uncitedness depend on the length of the citation window that is used, with citation windows of 5-10 years resulting in much lower levels of uncitedness than citation windows of 1-4 years.
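The effect of the citation window can be illustrated in the same spirit; the publication and citation years below are invented for illustration only.

```python
# Illustrative only: invented citation years showing how a longer citation
# window lowers the apparent level of uncitedness for the same set of papers.

PUB_YEAR = 2015
papers = {
    # paper id -> years in which the paper received citations
    "A": [2016, 2017, 2020],
    "B": [2021],          # first cited six years after publication
    "C": [2019, 2022],
    "D": [],              # never cited (so far)
}

def uncited_share(window_years):
    """Share of papers with no citations within `window_years` of publication."""
    cutoff = PUB_YEAR + window_years
    uncited = sum(1 for cites in papers.values()
                  if not any(year <= cutoff for year in cites))
    return uncited / len(papers)

for window in (2, 5, 10):
    print(f"{window}-year window: {uncited_share(window):.0%} uncited")
# 2-year window: 75% uncited; 5-year window: 50%; 10-year window: 25%
```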
Time: uncitedness has declined over time
The second element of time refers to the level of uncitedness in different eras. More recent studies have generally found much lower levels of uncitedness. A large part of this decline simply reflects the use of better methods, such as excluding marginalia. However, another factor is the easier availability of literature over time.
Early in my career – in the early/mid 1990s – keeping up to date with the new literature required browsing physical copies of journal tables of contents circulated across the department in pale yellow manila folders. They took months to reach academics at the bottom of the hierarchy, i.e. me :-). Tracing back the literature to find whether a specific article had been cited by others involved going through the paper version of the (Social) Science Citation Index. Even though this publication was printed on paper so thin you could almost see through it, it was incredibly heavy, so you risked a hernia from carrying around stacks of its volumes.
Getting access to the actual publications required a time-consuming trip to the library, unearthing the correct bound volume in dark cellars and subjecting yourself to the chemical smell of the thermal photo-copying paper (see also: How to keep up-to-date with the literature, but avoid information overload?). That is, if you were lucky enough to find a photocopier that wasn’t left unusable by its previous user. I have lost count of the number of times I had to carefully remove stuck paper from the machine’s insides.
And that assumed that your library even had access to the journal in question. If not, you had to rely on (very slow) interlibrary loans or travel to other libraries for publications that weren’t covered by the service. No wonder academics stuck with citing the few articles they happened to have on their desks anyway. In contrast, these days most articles are only a few clicks away in online journal databases. Moreover, increasing open access availability means we can even get access to articles in journals that our library doesn’t subscribe to. Hence, longer and more comprehensive reference lists have become the norm.
Data from a study by Wallace et al. (2009) – visualized by van Noorden (2017) (see Figure 2) – shows this was already happening between 1980 and 2005. However, rapidly increasing levels of open access in the past two decades may have accelerated this trend, especially in Europe where many funders require open access availability of outputs. Above, we already cited recent publications that demonstrated very low levels of uncitedness – ranging from 1.05% to 6.5% – for research in Business & Management for articles published between 2005 and 2013 cited up to 2016 (Baruch et al. 2022), and articles published between 2001-2017 cited up to 2022 (Dias et al. 2023). Later in this white paper, I will present my own empirical study looking at citations in 2021-2025 to articles published in Management in 2020 as Exhibit #4.
Figure 2: Declining levels of uncitedness between 1980 and 2005
Thus, overall levels of uncitedness depend significantly on the timing of data collection. Hence, using studies based on data that are 35-45 years old to evidence current-day levels of uncitedness is inappropriate.
Data sources: some are more “inclusive” than others
Hamilton’s data were based on the Web of Science. This is perfectly understandable. In 1990 the Web of Science – then usually referred to as ISI, i.e. the Institute for Scientific Information, later known as Thomson ISI under ownership of Thomson Reuters, and re-established in 2018 by Clarivate – was “the only game in town”. However, the subsequent decades saw the launch of many additional data sources, most importantly Scopus (2004) and Google Scholar (2004). The latter two databases typically have a more comprehensive coverage of publications in the Social Sciences and Humanities. Two of my own studies illustrate this.
- Harzing, A.W. (2013) A preliminary test of Google Scholar as a source for citation data: A longitudinal study of Nobel Prize winners, Scientometrics, 93(3): 1057-1075. Available online... - Publisher's version (read for free)
- Harzing, A.W.; Alakangas, S. (2016) Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison, Scientometrics, 106(2): 787-804. Available online... - Publisher's version (read for free)
In Harzing (2013), I investigated the citation records of 20 Nobel Prize winners, five each in Chemistry, Economics, Medicine, and Physics. Whilst Nobel Prize winners in Chemistry, Medicine and Physics did not show a significant difference in citation levels between data sources, Nobel Prize winners in Economics did (see Figure 3).
Figure 3: Average total number of citations in Google Scholar and the Web of Science for Nobel prize winners in four disciplines*
*Note: the lower level of GS citations for Chemistry was due to a lack of GS access for some publishers important in Chemistry; GS resolved this soon after my study.
The vast difference between Web of Science and Google Scholar citations for Nobelists in Economics is largely due to their book publications. Books are a very significant part of research outputs in the Social Sciences and Humanities, and Google Scholar captures book citations much more comprehensively than Scopus and the Web of Science. The top-10 publications for Nobel Prize winners in the Sciences and Medicine were nearly always journal articles, often published in prestigious journals such as Nature, Science, Cell, The Lancet, and Physical Review. Four out of the five Economics Nobel Prize winners, however, had books amongst their top-10 publications; all four had a book in their top-3 most cited publications.
In Harzing & Alakangas (2016) – a study based on a carefully matched group of 146 Professors and Associate Professors at the University of Melbourne – we showed that although the Life Sciences and Sciences had higher levels of citations than the other disciplines in each of the three databases we studied, the differences were much smaller in Google Scholar. In the Web of Science, Humanities and Social Science scholars only had 2% and 19% of the citations of Life Science scholars, whereas for Google Scholar this was 19% and 55% (see Figure 4).
Figure 4: Average number of citations per academic for five different disciplines in three different databases
Hence, both overall levels of uncitedness and disciplinary differences in uncitedness depend significantly on the data sources that are used to gather citation data.
Number of co-authors: the more authors the more citations
The number of co-authors of an article is generally positively correlated with its number of citations. This makes perfect sense: the more authors an article has, the higher the chance that one of them cites the article in their future work. Moreover, as every author has their own collegial network, the number of academics who become aware of the article through personal recommendations increases significantly with the number of co-authors. It is almost impossible for an article with more than 10 authors to remain uncited; in my 2016 study some articles – e.g. in particle physics – had over one hundred co-authors.
The number of co-authors varies significantly by discipline. Single-authorship is the norm in most of the Humanities and some of the Social Sciences, but is very rare in the lab-based Sciences. [Note: this is the most likely explanation for the “surprisingly” low level of uncitedness in Social Psychology reported in Hamilton 1991, as Psychology is one of the few lab-based Social Sciences].
To correct for the impact of co-authors on citations (and for different career lengths), I created the hI,annual, an annualised individual h-index that corrects citations for the number of co-authors. Based on Scopus data, Figure 5 below shows that whilst differences in h-index between disciplines are large, these differences all but disappear when using the hIa index. Only the Humanities still differ significantly from the other disciplines.
- Harzing, A.W.; Alakangas, S.; Adams, D. (2014) hIa: An individual annual h-index to accommodate disciplinary and career length differences, Scientometrics, 99(3): 811-821. Available online... - Publisher’s version
Figure 5: h-index compared with hIa index for different disciplines
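For readers who want to experiment with this metric themselves, below is a minimal computational sketch in Python, assuming the hIa definition from Harzing, Alakangas & Adams (2014): divide each paper’s citation count by its number of co-authors, take the h-index of these normalised counts, and divide the result by the number of years since the academic’s first publication. The figures in the example are made up purely for illustration.

```python
def h_index(citation_counts):
    """Standard h-index: the largest h such that h papers have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def hIa(papers, career_years):
    """Annualised individual h-index (hIa), assuming the 2014 definition:
    1. divide each paper's citations by its number of co-authors,
    2. take the h-index of these normalised counts (hI,norm),
    3. divide by career length in years.
    `papers` is a list of (citations, n_authors) tuples."""
    normalised = [cites / authors for cites, authors in papers]
    return h_index(normalised) / career_years

# Hypothetical academic: four papers, ten-year career
papers = [(120, 4), (45, 2), (30, 1), (8, 3)]
print(h_index([cites for cites, _ in papers]))      # conventional h-index: 4
print(round(hIa(papers, career_years=10), 2))       # hIa: 0.3
```

In this toy example the conventional h-index of 4 shrinks to an hIa of 0.3 once co-authorship and career length are taken into account, which is exactly the kind of correction that flattens the disciplinary differences shown in Figure 5.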
The combined effect of different data sources and the number of co-authors was demonstrated in my 2016 study (see also my white paper: Citation analysis across disciplines: The impact of different data sources and citation metrics). More specifically, we found that when using the h-index as a metric and the Web of Science as a data source, (Life) Science academics significantly outperformed their counterparts in Engineering, the Social Sciences and the Humanities. However, when using the hIa and Google Scholar or Scopus as a data source, Life Science, Science, Engineering and Social Science academics all showed a very similar level of research performance, and even the average Humanities academic had an hIa that was half to two thirds that of the other disciplines (see Figure 6 below).
Figure 6: Average hIa per academic for five different disciplines in three different databases
Hence, both overall levels of uncitedness and disciplinary differences in uncitedness depend significantly on the average number of authors for the publications included in the dataset.
Publication language: English rules
Over the years, English has become the dominant language of Science. This makes sense in disciplines that largely deal with universal principles such as the Life Sciences, Sciences, and Engineering. However, research in the Social Sciences and Humanities is often intimately entwined with local contexts. As such, many academics prefer to publish in their local languages in order to reach their intended audiences.
However, the Web of Science traditionally included very few non-English language journals or even journals published outside North America and the UK. This has changed in recent years (see also: An Australian "productivity boom"? ... or maybe just a database expansion?), but even so coverage in languages other than English (LOTE) is still relatively low. This means that many of the publications of academics in the Social Sciences and Humanities are not covered in the Web of Science. However, even more importantly for our discussion, many of the citations to their work – which are more likely to occur in LOTE journals – are not included in the Web of Science.
My earlier white paper (Do Google Scholar, Scopus and the Web of Science speak your language?) compared top scholars at the University of Melbourne with six research-active French, German, and Brazilian scholars. It showed that these non-Anglophone scholars – who had very respectable publication records in Google Scholar – were almost completely invisible in Scopus and the Web of Science (see Figure 7). Admittedly, it was based on a very small sample, but the wider principle is still valid.
Hence, both overall levels of uncitedness and disciplinary differences in uncitedness depend significantly on the customary language of publication of the disciplines involved.
Figure 7: Comparing the average h-index of Anglophone and non-Anglophone scholars across three data-bases
Citation accuracy: errors translate into lost citations
Although the Web of Science is a high-quality database with high levels of quality control, this doesn’t mean that all citations are necessarily captured accurately. I have written up quite a few blogposts on major bloopers in this respect. Here are two:
- Bank error in your favour? How to gain 3,000 citations in a week
- Web of Science: How to be robbed of 10 years of citations in one week!
Granted, most of these errors occur for non-traditional publications – i.e. the type of non-journal publications that are more common in the Social Sciences and Humanities. These publications might not have a DOI and thus do not always have a standard way of referencing them. Thus, academics might reference them in a wide range of ways, and the data source in question might not always be able to aggregate citations accurately.
Moreover, with journal articles now appearing online first before print publication, early references to journal articles are not always merged properly with the later master record, especially when the online-first publication year differs from the print publication year (see Figure 8 below). These are often called stray citations.
Figure 8: “Stray” citations in the Web of Science
I used to send correction reports to Thomson Reuters/Clarivate nearly every week. Now that I am retired, I only do so monthly. But not a month passes without the need for corrections. Again, most of these corrections are for non-traditional publications such as the Publish or Perish software and my Journal Quality List, but there are nearly always some necessary corrections for journal articles too.
Hence, both overall levels of uncitedness and disciplinary differences in uncitedness depend significantly on the accuracy of the data sources that are used to gather citation data.
Discipline: accumulation of all prior factors leads to large variance
All of the above characteristics vary by discipline, with the Natural and Life Sciences at one end of the spectrum – a focus on journal articles, short citation windows, faster publication, good Web of Science coverage, more co-authors, English-language publishing, fewer citation errors – and the Social Sciences and Humanities at the other end – a broader range of publication outlets, longer citation windows, slower publication, poorer Web of Science coverage, fewer co-authors, LOTE publishing, and more citation errors. In addition, citation practices and the length of reference lists often differ by discipline. Humanities scholars in particular do not use citations in the same way, as their research is not cumulative; they also typically use far fewer references. However, even within the Sciences there are significant differences, as we will see below in our example of biogeography.
Exhibit #4: Empirical verification: Management articles in the 2020s are cited
Wallace et al. (2009: 858) state that the claim ‘that most articles are never cited [is] a common lore that comes back periodically in the literature’. Their own study shows that most scientific articles are in fact cited, and that rates of ‘citedness’ are steadily increasing (see the section “Time: uncitedness has declined over time” above). This was confirmed for Business & Management by Baruch et al. (2022) and Dias et al. (2023), whose estimates show that – once marginalia were excluded – uncitedness remained well below 5% and approached 1% under stricter criteria. To establish the most recent state of affairs, I set out to conduct my own empirical verification with the most recent Web of Science data.
Exhibit #4, my second exhibit for the defence of Science, is therefore a dedicated empirical study assessing the level of uncitedness in the field of Management, my own discipline. I focused on articles listed in the WoS as published in 2020, looking at citations to these articles in 2021-2025, i.e. replicating the analyses conducted in Hamilton (1990) for a specific discipline. My initial plan was to study two general management journals in each of the four journal ranks of the Chartered Association of Business Schools’ ranking, a journal ranking that is used in most UK and many European Business Schools, because I expected lower-ranked journals to have higher levels of uncitedness.
However, when this turned out not to be the case and journals of all ranks showed (virtually) no uncitedness, I expanded my analysis to all journal publications that the WoS had classified in the category Management in 2020. As this still included many Engineering journals, such as the IEEE Systems Journal and Expert Systems with Applications, and other non-Management journals, I applied a secondary filter on the Citation Topic “Management”. Even this still left many publications outside Management. Hence, I applied a tertiary filter, removing all articles that were double-badged with other disciplines. Finally, I removed books and book chapters, editorials without substantive content, corrigenda, retracted articles, calls for papers, and book reviews.
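For transparency, the sketch below illustrates this filtering sequence in Python/pandas. It is not the actual script I used; the file name and column names (wos_category, pub_year, citation_topic, n_categories, doc_type, pages, citations_2021_2025) are hypothetical placeholders standing in for whatever a Web of Science export provides.

```python
import pandas as pd

# Hypothetical Web of Science export; column names are placeholders only.
df = pd.read_csv("wos_management_2020.csv")

# Primary filter: WoS category "Management", publication year 2020
df = df[(df["wos_category"] == "Management") & (df["pub_year"] == 2020)]

# Secondary filter: Citation Topic "Management" (removes e.g. Engineering journals)
df = df[df["citation_topic"] == "Management"]

# Tertiary filter: drop articles double-badged with other disciplines
df = df[df["n_categories"] == 1]

# Remove non-substantive document types and very short publications
exclude = {"Book", "Book Chapter", "Editorial", "Correction", "Retraction",
           "Call for Papers", "Book Review"}
df = df[~df["doc_type"].isin(exclude)]
df = df[df["pages"] >= 5]

# Uncitedness ratio: share of remaining articles with zero citations in 2021-2025
uncited = (df["citations_2021_2025"] == 0).sum()
print(f"Uncitedness: {uncited / len(df):.2%}")
```

The final two lines show the calculation behind all the uncitedness ratios reported below: the number of uncited articles divided by the number of articles remaining after filtering.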
Table 1: Level of uncitedness for publications published in eight specific Management journals in 2020 as well as all Management articles published in 2020
Table 1 shows the results of this analysis: all but one of the 370 articles published in the eight specific Management journals were cited, for an overall level of uncitedness of 0.27%. As expected, articles in higher-ranked journals were on average more highly cited, although the European Management Journal and the Australian Journal of Management – both ranked 2* – show an average number of citations per article that exceeds that of the two 3* journals and – in the case of AJM – even that of the two 4* journals. That said, the latter was caused by two extremely highly cited articles, the two most highly cited of all 370 articles, dealing with generic topics such as systematic literature reviews and common method bias. This illustrates both the skewness of citation data and the fact that highly cited articles can be published in lower-ranked journals.
Just over 4,000 articles were listed in the Web of Science as published in Management in 2020. Of these, 134 had not been cited by 22 Sept 2025, constituting an overall uncitedness level of 3.3%. However, despite being classified as articles, some of these publications were very short and thus atypical of Management publications. Hence, I excluded all publications with fewer than 5 pages, leaving me with 115 uncited articles for an uncitedness level of 2.88%.
Most of these 115 uncited articles appeared in journals published in non-English speaking countries, either by non-mainstream publishers or by university presses; 36 uncited articles were published in Betriebswirtschaftliche Forschung und Praxis or Scientific Papers of the University of Pardubice, Series D: Faculty of Economics and Administration alone. Overall, only 41 articles published in journals in English-speaking countries remained uncited by 22 Sept 2025, leaving us with an uncitedness ratio of 1.14%.
Of these 41 articles, a quarter were published in Organizational Dynamics, which is not a traditional academic journal; it publishes articles that report research findings in a language and style engaging to practitioners. Its articles do not include formal referencing in the text, though they do include a selected bibliography for further reading. The remaining 31 articles were published in 23 different journals. However, all but six of the 41 articles published in English-speaking countries were in fact cited in Google Scholar, with half having 3+ citations. Thus, if we consider only articles of 5+ pages published in English-speaking countries and use Google Scholar as a data source, we are left with an uncitedness ratio of 0.17%.
Hence, regardless of whether we use the most stringent (0.17%) or the most generous (3.30%) uncitedness ratio, my study shows that the level of uncitedness in the discipline of Management is very low indeed.
Exhibit #5: Uncitedness doesn’t mean lack of impact (or worth)
Exhibit #5, my third exhibit for the defence of Science, questions the wisdom of equating citations with impact and, by implication, with the worth of academic research. When commenting on Hamilton’s article, both Garfield, the grandfather of citation analysis and founder of the ISI, and Pendlebury, the analyst who provided the data for Hamilton’s paper, concluded:
But the fact that a paper remains uncited is not necessarily a true indication of its worth. While everyone likes to be cited and citation analysis remains a critical tool for observing the progress of science—not to be cited is no great shame. (Garfield, 1991:391)
“Lack of citation cannot be interpreted as meaning articles are useless or valueless,” says David Pendlebury, a senior citations analyst at Clarivate. (Van Noorden, 2017:162)
Hence, our third exhibit for the defence of Science is of a more fundamental nature. The implicit or explicit assumption of those “prosecuting” Science for its high level of uncitedness is that a lack of citations always means that the research in question has (had) no impact. Although this might be true in some cases, there are many reasons why it might be false.
First, some studies were never meant to be cited (van Noorden, 2017). Engineering reports, for instance, are often written to solve a specific problem, not to form the basis for future research. Lab protocols in the Life Sciences serve a similar function, with other groups downloading, viewing, and following the protocols, but not citing them. Moreover, MacRoberts & MacRoberts (2010) show that the accepted practice in the discipline of biogeography is for authors to provide substantial information about the databases they use, but not to cite them.
My own Publish or Perish software is a case in point. Although to date more than 10,000 articles mention the software, only just over 2,000 include a formal reference to it in their list of references. And most likely even this modest level of citations only occurred because I explicitly requested users to reference it if they used it in their research and showed them exactly how to do so on the download page. Other tools might not be so lucky and might not acquire any citations even though they were essential for the research in question.
Other studies are published in what – in Business & Management – we call “bridge journals”: journals that aim to bring academic research to a practitioner audience and do not in fact contain references themselves. Although academics may appreciate these lay summaries of academic research, they are less likely to cite them. A case in point is my article with Shea Fan – The double-edged sword of ethnic similarity for expatriates – published in Organizational Dynamics, the journal we discussed in Exhibit #4. I believe it is the most interesting article coming out of her PhD, but it is the least cited. However, it did strike a chord with the practitioner reviewer.
“After reading this article, when I find myself in these situations, I can confidently say what I am experiencing is either identity ambiguity or not conforming to people's mental expectations. I can clearly articulate whether these experiences are manifesting as double standards, social comparison, or expectations of in-group favoritism, and this article clearly provides tools (strategies) for dealing with the issue.”
Second, some articles are not cited because, soon after publication, they were incorporated into encyclopaedias, handbooks, or other reference works such as atlases or online databases, which constitute a more prestigious or definitive reference. Van Noorden (2017) mentions the case of the discovery of nodding club-moss, which was recorded in plant atlases and online databases but not cited until its author used it as an example of the shortcomings of citation analysis.
Third, other articles were never meant to be cited because they close down research streams, showing that particular experiments are not worth doing or that particular avenues of research are no longer useful. Thus, their impact is significant even if citations are absent. On a related note, some articles based on an author’s PhD simply constitute evidence of successfully completing an apprenticeship rather than a citable contribution. A special case of this is mentioned by Garfield (1991), who refers to short articles published by PhDs who take jobs in industry.
Hence, we end this section with two extended quotes from two researchers who have conducted a range of studies on uncitedness (all referenced in MacRoberts & MacRoberts, 2010) and found that authors cite only a fraction of their influences. In their 2010 article they focus on biogeography and conclude:
“To determine influences on the production of a scientific article, the content of the article must be studied. We examined articles in biogeography and found that most of the influence is not cited, specific types of articles that are influential are cited while other types that also are influential are not cited, and work that is “uncited” and “seldom cited” is used extensively.” (MacRoberts and MacRoberts, 2010: 1)
“Citations are just a thin . . . band, sandwiched between the rock of eons. And it is this highly limited, highly unrepresentative, yet alluringly available band of rock that the ISI has fetishized and turned into a highly desirable and marketable commodity (Hicks & Potter, 1991, p. 483)”. (MacRoberts and MacRoberts, 2010: 7)
The verdict: click-bait was transformed into “facts” through poor academic referencing practices
So, what’s the verdict – the judge’s or the jury’s, depending on the legal system of the country in which the case is tried? How do Exhibits #1 & #2 weigh up against Exhibits #3-5? The case for the prosecution relied on two short news items (Hamilton, 1990/1991) and a few sentences in Meho (2007), which were either based on inappropriate data (Hamilton) or on pure gossip (Meho), and were written up as journalistic click-bait. In contrast, the case for the defence showed – with consistent, abundant, and rigorous evidence – that there is absolutely no ground for concluding that uncitedness is a major problem in academic research. Hence, just as I demonstrated in 1995 for expatriate failure rates (see also: What's the story behind your first paper?), we can label the existence of high levels of uncitedness a myth.
Development of myth creation over time
So why then did this myth travel so readily and so widely? It is not as if others haven’t tried to debunk it before. Pendlebury tried to set the record straight immediately after publication of the second Hamilton article. He explained that, although strictly speaking the data were correct, the inclusion of marginalia in the sample prevented any sound conclusions about uncitedness. His letter concluded with:
We hope this information clarifies the record and will end further misunderstanding or politicalization of these statistics.
Likewise, Eugene Garfield (1998), the founder of the Science Citation Index, groaned about citation errors being perpetuated year after year in an article evocatively entitled “I had a dream … about uncitedness”. His article contains a long quotation that is worth citing in full, as it contains lessons of crucial importance for academics even 70 years on.
My first paper proposing the creation of the Science Citation Index® (Science, 122(3159): 108-111, 1955) began with a quotation from P. Thomasson and J.C. Stanley: "The uncritical citation of disputed data by a writer, whether it be deliberate or not, is a serious matter. Of course, knowingly propagandizing unsubstantiated claims is particularly abhorrent, but just as many naïve students may be swayed by unfounded assertions presented by a writer who is unaware of the criticisms. Buried in scholarly journals, critical notes are increasingly likely to be overlooked with the passage of time, while the studies to which they pertain, having been reported more widely, are apt to be rediscovered."
Dahlia Remler (2014) deserves a prize for investigative blogging; if you didn’t laugh, you’d cry for Science at her discovery that the “90% not cited, 50% not read” statement was plucked purely out of thin air.
“The first paragraph of the article was written by the editor of the magazine and not me. If I recall correctly, he got the figures from/during a lecture he attended in the UK in which the presenter claimed that 90% of the published papers goes uncited and 50% remains unread other than by authors/referees/editors.”
One would assume that the evidence of the chief data analyst and the founder of the – back then – only source for citation data, the Web of Science, and of an investigative professor at the City University of New York would outweigh the “evidence” of two journalists who were simply writing for effect. But unfortunately, this was not to be. Figure 9 compares the number of citations to the three articles by Hamilton and Meho – i.e. those promulgating the myth of high uncitedness – with those to the three articles by Pendlebury, Garfield and Remler – i.e. those debunking the myth of high uncitedness.
Figure 9: Comparing citations to articles promulgating and debunking the myth of high uncitedness
Granted, we discovered – based on our “most recent 10 citations” sampling – that not all of the references to the Hamilton articles replicated the erroneous claim, and that 6 out of the 10 references to the Meho article didn’t refer to the 90% uncited claim, instead referencing the part of the article that was actually written by Meho. But then again, many of the references to Pendlebury, Garfield and Remler didn’t correctly replicate their cautionary notes either. For Pendlebury’s article, only four out of the ten most recent citing articles replicated the article’s message correctly. The results for the latter two articles – for which I traced all citing articles – were particularly bad: only 9.6% (5 out of 52) and 27% (4 out of 15) of the referring articles replicated Garfield’s and Remler’s critical reflections accurately.
Thus overall, the click-bait claims of high uncitedness by two journalists acquired far more exposure than the cautionary notes of three experts. As Thomasson & Stanley already alluded to 70 years ago, spectacular findings – even those that are urban myths – are clearly more citable than caveats. Overall, citations to the three myth-busting articles amounted to only a quarter of those to the three myth-promulgating articles, and after gradually declining between 1991 and 2005, net myth creation has been sustained at a high level in the last 20 years.
Twelve guidelines for good academic referencing
Of course, journalists cannot be expected to take the accuracy of information as seriously as academics. Yes, we can question whether Hamilton did justice to the data that Pendlebury delivered. Yes, we can scold the journalist who reported hearsay at a talk as a fact, leading to two decades of claims that 90% of our articles in academic journals are uncited. However, I think the bigger problem in fact lies with us as academics. By citing these factoids in our own academic publications, we not only perpetuated the myth of high uncitedness but gave it our “stamp of approval”. In 2002, I published an article that suggested twelve guidelines for good academic referencing (see Table 2 below). Below I analyse how the myth of high levels of uncitedness was created by repeated violations of at least seven of these guidelines (see also: How to publish an unusual paper? Referencing errors, scholarship & credibility).
Table 2: Twelve guidelines for good academic referencing
I have not been able to systematically investigate violations against guidelines #1 and #2, guidelines that caution academics to reproduce the correct reference in their reference list and be careful in attributing the right statement to the right publication, even if the wrongly referenced publication was published by the same academic. However, these violations are relatively minor and do not typically have a major impact on myth creation.
Guideline #3 cautions us not to use “empty” references, i.e. references to publications that refer to another publication for their empirical evidence as if they were providing this evidence themselves. To properly investigate this, I would need to go through the entire citation network, as I did for expatriate failure in Harzing (1995, 2002). However, there are some 550 references to the three source articles of the high-uncitedness myth in the Web of Science alone and well over 1,000 in Google Scholar. These 550 references were in turn cited a total of more than 22,000 times themselves.
Hence, reviewing this entire network would take weeks if not months and is unfeasible for a white paper. However, given that in my earlier case study on expatriate failure nearly two thirds of the references were empty references, I consider it likely that empty referencing played a role in perpetuating the myth of high uncitedness too. As I explained in Harzing (2002), empty referencing can also “update” a claim by referencing a later article that references the original piece. Again, I cannot claim with certainty that this has happened in the promulgation of the uncitedness myth, but I consider it likely.
What I can be certain about is that everyone citing the Meho article as evidence of extremely high levels of uncitedness violated guideline #4, which asks us to use reliable sources. There wasn’t a shred of evidence for this claim in the article. Hamilton’s articles did contain evidence, but as they were short news pieces, there was no detail on the methods used and the analytical choices made. Academics should know better than to cite a 1-2 page news article as evidence.
Guideline #5 – to use generalisable sources for generalised statements – was again violated by everyone citing the Meho article; there was no evidence, so it cannot be generalisable either. Arguably, the Hamilton articles are generalisable as they were based on a very large and cross-disciplinary data set. Unfortunately, in this case disaggregation of the large data set would have presented a more reliable result. However, given that the ISI analyst – David Pendlebury – didn’t provide disaggregated data, we cannot really blame Hamilton or the citing authors for this.
Guideline #6 calls on us not to misrepresent the content of the reference, something that usually requires reading the article in full. As my small-scale analysis of 30 articles citing the sources of the myth of high uncitedness demonstrated above, most citing authors appear to represent the content of the referred work correctly. That said, in quite a few cases the “not cited within 5 years of publication” morphed into “never cited” or “not cited, ever”. Some referring articles also use rather vague terms such as “has little influence”, “is forgotten”, or “is little used” instead of referring to the phenomenon of uncitedness, which has a much more specific and narrow definition. Thus, violation of guideline #6 may have played a role in creating the myth of high uncitedness, by leaving out Hamilton’s only caveat (within 5 years of publication) and by jumping from reporting uncitedness to claiming a lack of value of the publications involved, though admittedly this is something Hamilton himself does too.
However, more serious is the considerable violation of guideline #6 in the articles citing the three articles debunking the myth of high uncitedness. As we discussed above, most of the articles citing them reproduced their key message incorrectly; 60% of the citations to Pendlebury were at least open to misinterpretation and often plain wrong, and this was true for 90% of the citations to Garfield and 73% of those to Remler. In some cases, the articles debunking the myth of high uncitedness were even cited as evidence of high uncitedness. This was particularly problematic for Remler’s blogpost, where no fewer than a quarter of the citing references did so.
Some articles used strings of references to substantiate a number of different statements, and thus nominally violated guideline #7, but I have no reason to believe this played a role in myth creation. Assessing violation of guideline #8 – copying someone else’s references – is difficult as – unlike in the expatriate failure citation network where mistakes in cited figures were repeated by later publications – we cannot establish with certainty whether authors have or have not checked the original sources. However, given that I encountered the exact same (part) sentence in quite a few articles, I do consider it likely that in at least some cases the authors simply copied someone else’s references.
Given that uncitedness has changed over time, violation of guideline #9, not to cite out-of-date references, is particularly important in this citation network. Surely, I am not the only one realising that getting access to literature is easier now than it was 35-45 years ago, i.e. before the internet and online journals existed? Hence, referencing articles that were published in 1990/1991 or even in 2007 – dealing with citations to articles published 40-45 years ago – to substantiate high levels of uncitedness in the 2020s isn’t good academic referencing practice.
Violation of guideline #10 most likely played a very significant role. Hamilton’s articles were published in Science, a journal that is widely considered to be one of the very top academic outlets, even by academics outside the Life Sciences and Natural Sciences. Hence, many citing authors are likely to have overlooked the fact that these were not actually academic articles that had passed Science’s rigorous peer review process, but in fact two short news articles (especially if they also violated guideline #8 and thus didn’t look up the article itself).
Physics World – the outlet of Meho’s article – might not have the same ring as Science, but given that Physics is seen by many as one of the most precise, mathematically rigorous, and repeatable of disciplines (see image above), some of the prestige of the outlet’s name will have “rubbed off” on the perceived veracity of the claim. I am willing to wager that the same claim published in a Business & Management journal would have been discarded as nonsense.
Guideline #11 asks academics to report conflicting evidence accurately rather than to try to reason it away. This has generally not been a big issue in the uncitedness citation network. However, some authors did try to reconcile the extremely high levels of uncitedness reported by Hamilton and Meho with more recent lower levels by assuming that levels of uncitedness have dropped considerably. This is obviously true, for reasons discussed under guideline #9, but as we have seen even in 1990/1991/2007 they were nowhere near as high as the referencing authors claimed.
Guideline #12 suggests actively searching for counterevidence, which seems to have happened very rarely in relation to the myth of high levels of uncitedness. I find it remarkable how many academics simply reported these high levels of uncitedness without hearing loud alarm bells ringing in their heads. Sure, for many authors these statements simply provided background information in their article, but even then… How can anyone who has ever looked up their own or someone else’s citations believe that 90% of articles do not get cited?
Maybe academics outside the discipline of Library and Information Studies weren’t yet that bothered with citations in the 1990s. But surely, after the introduction of Google Scholar in 2004, and especially after the popularization of the h-index (see also From h-index to hIa: The ins and outs of research metrics) and the introduction of Google Scholar Citation Profiles in 2012 (see also Who creates Google Scholar Profiles?), academics must be at least somewhat familiar with their own and others’ citation levels?
Moreover, paraphrasing Hofstede (1998) in his discussion of my 1995 article on expatriate failure rates (reported in Harzing, 2002): … academics aren’t imbeciles. Does anybody really think that academics would have continued to publish articles if only one in ten articles ever got picked up by other academics? Well actually…, maybe they would if their performance targets were publication quantity rather than research impact, as they have traditionally been in many universities (see Sathish & Harzing, 2025). Even so, it would paint a rather sorry picture of our profession if we were all prepared to publish work that we knew nobody would ever cite.
In sum
There is no evidence that levels of uncitedness were ever a major problem in academic publishing, and they certainly are not now. However, the violation of guidelines for good academic referencing – especially guidelines #3, #4, #5, #6, #9, #10 and #12 – transformed an amusing, click-baity factoid into a “fact” that appears to still be firmly lodged in many academics’ minds. So why is this myth so stubborn, and why do some of us keep mindlessly citing something that even the most basic common sense would immediately disprove?
Why do we as academics keep believing the myth?
Above, we suggested that the main problem in creating the urban myth of uncitedness was the collective irresponsibility of an academic community that seemed more than happy to cite and/or believe this myth. So why did we do so? Why did we cite and/or believe a claim that was so clearly abject nonsense that it wouldn’t survive even the merest empirical verification? We wouldn’t do this when seeing journalists produce click-bait on climate change, vaccination, or migration, would we? So why did we do so on a topic that we should know so much more about? I suggest there may be three reasons for this: it makes us feel good, it motivates our research, and we are only human too.
It makes us feel good
There are at least two distinct reasons why believing and reproducing the statistic of high uncitedness might make us feel good. First, it allows us to complain about our profession, and if there is one thing that academics love it is complaining about our profession! It appears to be one of our favourite pastimes, and we seem to get a sense of satisfaction from sharing and re-sharing horror stories. Part of this may simply be an “embattled hero narrative” in the sense that we can feel good about being survivors in such a system. But it is also worth repeating Rekdal’s quote that I used at the start of this white paper. Urban legends are part of our “social communicative repertoire” and build bridges that make us feel part of our community. They are an important part of our academic identities.
“Urban legends are not just fascinating, entertaining, and colorful stories; they are also a part of our social communicative repertoire. When we lose them, we have to find other things to talk about, which are most likely not as funny, engaging, or bridge building as urban legends” (Rekdal, 2014a: 649).
However, there is also a second, more self-centred, reason for why this statistic makes us feel good. Virtually every academic has published many articles that are cited. Most academics also have a reasonable number of articles that have drawn a double-digit number of citations. Finally, there are quite a few academics who have some articles that are (very) highly cited. So, if the benchmark is that most articles are not cited at all, and our own work seems to be unique in that it is cited, surely that must mean we are geniuses? And in an academic world that is characterised by constant critique and rejection, who wouldn’t want to feel like a genius for once, even if this can only be sustained by not questioning uncitedness data?
It motivates our research
In my article on the myth of high expatriate failure rates (Harzing, 2002:141), I argued:
Another influence inhibiting attempts to question the myth and thus reinforcing it is that high EFRs are convenient for most academics and practitioners in the field. Many authors in the EFR citation network present models and research recommending careful expatriate selection and training. Reporting high EFRs conveniently establishes the relevance of the authors’ research and recommendations, since high rates of expatriate failure are usually assumed to be due to poor selection and/or training and therefore subsequent lack of adjustment. [Note: I am not suggesting here that academics or practitioners are deliberately perpetuating a myth they know to be wrong. The fact that the myth is convenient for both academics and practitioners, however, means that they will not be likely to question it.]
The same is true for high levels of uncitedness, which serve a definite purpose in motivating not only research on the topic of uncitedness, but also research on citedness. What better way to justify research on the factors leading to high levels of citations than placing it in the context that most research doesn’t get cited? The Hamilton claim has thus spawned a whole cottage industry of research investigating the determinants not only of uncitedness, but also of citedness, of publications. This includes the impact of document types, discipline, data sources, and time elapsed since publication. However, it has also descended into trivialities, such as whether a colon or a question mark in the title, or the length of the title, affects the level of citations.
And the wonderful thing about this kind of research is that it can be done by almost any researcher in any discipline. Although much of the large-scale rigorous research in this area is conducted by academics in Library and Information Studies, the field that traditionally studies these questions, many other academics have taken to doing bibliometric research in their own discipline. I should know, as I have done so myself, advocating for more inclusive data sources and metrics that acknowledge the Social Sciences and Humanities traditions; many of these articles are even among my most-cited work. Most journals have also at some time published articles about the most-cited articles in their discipline. These articles were typically good news for authors – as they were easy and quick to write – and for editors/journals – as they were typically highly cited. So, everyone was happy. Obviously apart from the research funders and the general public…
High levels of uncitedness were also convenient for librarians and other information-literacy advisors who tried to get academics interested in learning more about bibliometrics. What better way to fill courses than to inform academics that most research isn’t cited? Finally, high levels of uncitedness were good news for a growing group of consultant academics selling “how to get published” services. There are now many academics claiming to help others “beat the crowd” by ensuring their work gets noticed and cited. What better advertising technique than to tell your customers that they need to make sure their work is in the 10% of articles that do get cited?
We are only human too
Finally, as academics we are only human too. On topics that are close to our heart or our academic identity, we are prone to the same reasoning flaws that we attribute to the (wo)man in the street on topics such as migration, vaccination, and climate change. We reason by anecdote and readily believe statements that serve our own academic agenda. Anything surrounding citation analysis seems to be particularly prone to anecdotal reasoning, as it is often enmeshed in the wider critical academic agenda of proudly resisting neoliberalism and metrification.
In this context, believing in high levels of uncitedness – and thus the irrelevance of metrics – is a natural extension of the anecdotal argumentation against metrics that I often hear bandied about such as: “My best article isn’t my most highly-cited article, so all metrics are flawed” or “So-and-so published a really bad study, but it is on a sexy topic, so it is highly cited. This proves that citations can never be trusted”. To me this is the equivalent of “My grandma smoked a packet of cigarettes a day and lived to be 90”. Yes, the anecdote might well be true, but that doesn’t mean that it can be generalized to every individual article or every individual academic.
So, as academics, can we please do what we do best and actually read the relevant literature or conduct some actual research before proclaiming to know what we are talking about? Large-scale empirical studies show that most articles are in fact cited. Yes, there are always exceptions, but we would not question one of our core theories based on one outlier, would we? Ever heard of the saying: the exception proves the rule?
Conclusion
Academic publications should have different standards and requirements regarding truth and accuracy from other mediums, and should not be a playground for rumors and urban legends. Accurate, complete, and relevant references to reliable sources are the best tools in order to avoid such a scenario. (Rekdal, 2014a: 649)
In this white paper, I investigated a commonly circulated “truth” about citations, namely that most of our academic articles never get cited and thus have negligible impact. The three source articles for this myth provide estimates of uncitedness that run from about half of all publications (Hamilton, 1990), via three quarters of publications in Business & Management (Hamilton, 1991), to a whopping 90% (Meho, 2007). Over the past 35 years, these figures have been reiterated both in academic publications and – more recently – on social media, often accompanied by an explicit or implicit assessment that most of what we do as academics is worthless.
I launched a case for the defence of Science that took three avenues: situational (uncitedness depends on a wide range of factors), empirical (recent studies show uncitedness is very low), and fundamental (uncitedness doesn’t equate to a lack of worth). I showed – with consistent, abundant, and rigorous evidence – that there is absolutely no ground for concluding that uncitedness is a major problem in academic research. Hence, just as I demonstrated in 1995 for expatriate failure rates (see also: What's the story behind your first paper?), we can label the existence of high levels of uncitedness a myth. However, by continuing to cite – in our own academic work – the dubious articles that formed the basis of the myth of high uncitedness, and by largely ignoring – or even misrepresenting – the articles that debunked this myth, we have transformed click-bait into “fact”.
Those intent on criticising Science might well say: “ok, I acknowledge most articles are cited, but these are low-quality citations”. Or “yes, but aren’t some of these citations self-citations by the author?” Or even “yes, I acknowledge most articles are cited, but that doesn’t mean they are good articles”. These statements might all be true (or not), but they are different arguments that need separate investigations, with questions such as “What are high- and low-quality citations?”, “What are typical levels of self-citation?”, or “How do we define good articles?”.
If we want to criticise Science, there are many avenues that we can take. But the one avenue that should be blocked with a big red “wrong way” sign is the avenue that leads us to transform click-bait and doubtful evidence into academic facts through poor academic referencing practices. So, before citing any publication as evidence, we need to carefully evaluate whether it really supports the claim we are making. Following my twelve guidelines for good academic referencing (Harzing, 2002) may help here. As academics, let’s at least try to be critical when evaluating evidence, assess it without prejudice, and report it accurately, precisely, and comprehensively. If we give up on these fundamental academic values because a piece of evidence suits our academic agenda, we are no better than conspiracy theorists. And if we do so, we are actually confirming their prejudice that experts are not to be believed. Do we really want to contribute to our own demise as academics?
References
Abulof, U. (2017). Introduction: Why we need Maslow in the twenty-first century. Society, 54: 508-509.
Amath, A., Ambacher, K., Leddy, J.J., Wood, T.J., & Ramnanam, C.J. (2017). Comparing alternative and traditional dissemination metrics in medical education. Medical Education in Review, 51(9): 935-941.
Antonovskiy, A. “Give money and don’t interfere”, or How Science Treats Its Public, Tomsk State University Journal of Philosophy, Sociology and Political Science, 74: 49-56.
Arnesen, K., Walters, S., Borup, J., & Barbour, M.K. (2020). Irrelevant, Overlooked, or Lost? Trends in 20 Years of Uncited and Low-cited K-12 Online Learning Articles. Online Learning, 24(2):187-206
Baldwin, P. (2023). The Digital Disseminators, Chapter 6 in: Athena Unbound, MIT Press
Barbosa, M.M.A.L., Cuenca, A.M.B., Oliveira, K. de, Junior, I.F., Alvarez, M.C.A., & Omae, L.Y. (2019). Most-cited public health articles of scientific journals from Brazil. Revista De Saude Publica, 53: 1-12
Baruch, Y., Homberg, F., & Alshaikhmubarak, A. (2022). Are half of the published papers in top-management-journals never cited? Refuting the myth and examining the reason for its creation. Studies in Higher Education, 47(6): 1134–1149.
Betancourt, N., Jochem, T. & Otner, S.M.G. (2023). Standing on the shoulders of giants: How star scientists influence their coauthors. Research Policy, 52(1): 104624.
Bin-Obaidellah, O, & Al-Fagih, AE (2019). Scientometric Indicators and Machine Learning-Based Models for Predicting Rising Stars in Academia,2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 2019, pp. 1-7, doi: 10.1109/ICSCC.2019.8843686.
Das, A.K., & Dutta, B. (2022). Scrutinising uncitedness and few h-type indicators of selected Indian physics and astronomy journals. Annals of Library and Information Studies, 69(3): 13-27
Dias, L.C., Lev, B. & Anderson, J.B. (2023). Low cited articles in Operations research/Management science. Omega-International Journal of Management Science, 115(Feb): 102792
Doleys, T. (2021). “Is Anyone Listening?” Measuring Faculty Engagement With Published SoTL Scholarship in Political Science. Journal of Political Science Education, 17(4): 541-559.
Donovan, J.M. (2025). Disciplinary variation in scholarly impact from two article title elements. Journal of Librarianship and Information Science, 0(0). https://doi.org/10.1177/09610006241311576
Dorta-González, P, & Dorta-González, MI (2023). The funding effect on citation and social attention: the UN Sustainable Development Goals (SDGs) as a case study. Online Information Review, 47 (7):1358–1376
Frachtenberg, E. (2023). Citation analysis of computer systems papers. Peerj Computer Science, 9:e1389 https://doi.org/10.7717/peerj-cs.1389
Garfield, E. (1991). To be an uncited scientist is no cause for shame. The Scientist, 5(6): 12.
Garfield, E. (1998). I had a dream ... about uncitedness. The Scientist, 12(14): 10.
Gharbi, M Al (2018). Race and the race for the White House: On social research in the age of Trump. The American Sociologist, 49:496–519.
Green, T. (2019). Is open access affordable? Why current models do not work and why we need internet-era transformation of scholarly communications. Learned Publishing, 32(1): 13-25.
Hagiopol, C, & Leru, PM (2024). Scientific truth in a post-truth era: a review. Science & Education, https://doi.org/10.1007/s11191-024-00527-x
Hamilton, D.P. (1990). Publishing by – and for? – the numbers. Science, 250(4986): 1331-1332.
Hamilton, D.P. (1991). Research papers – who’s uncited now. Science, 251(4989): 25.
Hancock, C.B., & Price, H.E. (2020). Sources Cited in the Journal of Research in Music Education: 1953 to 2015. Journal of Research in Music Education, 68(2): 216-240.
Harzing, A.W. (1995). The persistent myth of high expatriate failure rates, International Journal of Human Resource Management, 6(May): 457-475.
Harzing, A.W. (2002). Are our referencing errors undermining our scholarship and credibility? The case of expatriate failure rates. Journal of Organizational Behavior, 23(1): 127-148.
Harzing, A.W. (2013) A preliminary test of Google Scholar as a source for citation data: A longitudinal study of Nobel Prize winners, Scientometrics, 93(3):1057-1075.
Harzing, A.W.; Alakangas, S.; Adams, D. (2014) hIa: An individual annual h-index to accommodate disciplinary and career length differences, Scientometrics, 99(3): 811-821.
Harzing, A.W.; Alakangas, S. (2016) Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison, Scientometrics, 106(2): 787-804.
Heinisch, R. (2018). Struggling to Address the 'Big and Burning' Questions: The Opportunities and Perils of (Austrian) Political Science Going Mainstream. Österreichische Zeitschrift für Politikwissenschaft, 47(3):71-79.
Hernández-Torrano, D (2020). Mapping global research on child well-being in school contexts: A bibliometric and network analysis (1978–2018). Child Indicators Research, 13: 864-883.
Herrera-Viedma, E., Pasi, G., Lopez-Herrera, A.G., & Porcel, C. (2006). Evaluating the information quality of Web sites: A methodology based on fuzzy computing with words. Journal of the American Society for Information Science and Technology, 57(4), 538-549
Hu, Z. & Wu, Y. (2014). Regularity in the time-dependent distribution of the percentage of never-cited papers: An empirical pilot study based on the six journals. Journal of Informetrics, 8(1): 136-146.
Hubbard, D.E., & Vaaler, A. (2021). An exploratory study of library science journal articles in syllabi. The Journal of Academic Librarianship, 47(1): 102261
Jackson, T. (2016). Balancing academic elitism and unmediated knowledge in cross-cultural management studies. International Journal of Cross Cultural Management, 16(3): 255-258.
Jones, R. T. (2015). Presidential Address: Truth and error in scientific publishing. Journal of the Southern African Institute of Mining and Metallurgy, 115(9): 799-816.
Katchanov, Y.L., Yulia, V.M. & Shmatko, N.A. (2023). Uncited papers in the structure of scientific communication. Journal of Informetrics, 17(2): 101391.
Lariviere, V., Gingras, Y., & Archambault, E. (2009). The Decline in the Concentration of Citations, 1900-2007. Journal of the American Society for Information Science and Technology, 60(4): 858-862.
Law, R., Lee, H.A., & Au, N. (2013). Which Journal Articles are Uncited? The Case of the Asia Pacific Journal of Tourism Research and the Journal of Travel and Tourism Marketing. Asia Pacific Journal of Tourism Research, 18(6): 661-684.
Lin, A., Cui, J. & Hu, Z. (2024). Why aren't published works cited? Exploring the influences of bibliographic characteristics on uncitedness phenomenon in social sciences and arts and humanities. Social Science Information Sur Les Sciences Sociales, 63(4): 520-537.
MacRoberts, M.H., & MacRoberts, B.R. (2010). Problems of Citation Analysis: A Study of Uncited and Seldom-Cited Influences. Journal of the American Society for Information Science and Technology, 61(1): 1-12.
Marinetto, M. (2018). Fast Food Research in the Era of Unplanned Obsolescence. Journal of Management Studies, 55(6): 1014-1020.
Meho, L.I. (2007). The rise and rise of citation analysis. Physics World, 20(1): 32.
Mizrahi, M. (2017). What's so bad about scientism?. Social Epistemology, 31(4): 351-367.
Nicolaisen, J. & Frandsen, T.F. (2019). Zero impact: a large-scale study of uncitedness. Scientometrics, 119(2): 1227-1254.
Orbay, K., Fernando, A.M., & Orbay, M. (2025). Short, But How Short? Analysis of Educational Research Titles. SAGE Open, 15(1). https://doi.org/10.1177/21582440251320538
Örtenblad, A., & Koris, R. (2025). Toward a sustainable academic publishing system at university business schools. Management in Education, https://doi.org/10.1177/08920206251366081
Pendlebury, D.A. (1991). Science, Citation, and Funding [Letter to the editor]. Science, 251: 1410–1411.
Powell, M. (2016). Citation Classics in Social Policy Journals. Social Policy & Administration, 50(6): 648-672.
Ranasinghe, I. et al. (2015). Poorly Cited Articles in Peer-Reviewed Cardiovascular Journals from 1997 to 2007: Analysis of 5-Year Citation Rates. Circulation, 131(20): 1755-1762.
Reisman, S. (2017). Teaching vs. Research–Optimizing Your Contribution for Society's Well-Being. Computer, 50(7): 96-98.
Rekdal, O.B. (2014a). Academic urban legends. Social Studies of Science, 44(4): 638-654.
Rekdal, O.B. (2014b). Academic Citation Practice: A Sinking Sheep? Portal: Libraries and the Academy, 14(4): 567-585.
Remler, D. (2014). Are 90% of academic papers really never cited? Reviewing the literature on academic citations. LSE Impact of the Social Sciences Blog, 24 April 2014.
Samuels, D.J. (2011). The Modal Number of Citations to Political Science Articles Is Greater than Zero: Accounting for Citations in Articles and Books. PS: Political Science & Politics, 44(4): 783-792.
Sathish, C., & Harzing, A.W. (2025). Let's form a Positive Academia Collective Transformation: Re-imagining our academic values and interactions. Management Learning, in press.
Schwartz, C.A. (1997). The rise and fall of uncitedness. College & Research Libraries, 58(1): 19-29.
Sen, R., & Patel, P. (2012). Citation Rates of Award-Winning ASCE Papers. Journal of Professional Issues in Engineering Education And Practice, 138(2): 107-113.
Servaes, J. (2018). The impact of rankings. At the start of Volume 35. Telematics And Informatics, 35(1): 1-5.
Singh, G.G. (2022). We Have Sent Ourselves to Iceland (With Apologies to Iceland): Changing the Academy From Internally-Driven to Externally Partnered. Frontiers In Sustainable Cities, 4: 832506.
Spinks, N., Dursun, S., Sareen, J., Cranston, L., & Battams, N. (2020). Expediting research to practice for maximum impact. Journal of Military Veteran and Family Health, 6(1): 3-8.
Stec, P. (2023). Bibliometric Analysis of Top-Ranked European Law Schools' Research Outputs: an East-West Comparison. Krytyka Prawa-Niezalezne Studia Nad Prawem, 15(1): 15-33.
Stewart, I.D. (2019). Why should urban heat island researchers study history?. Urban Climate, 30 (Dec): 100484
Stoller, A. (2020). Dewey’s Naturalized Epistemology and the Possibility of Sustainable Knowledge. The Pluralist, 15(3): 82–96.
Thyer, B.A., Smith, T.E., Osteen, P., & Carter, T. (2019). The 100 Most Influential Contemporary Social Work Faculty as Assessed by the H-Index. Journal of Social Service Research, 45(5): 696-700.
Van Noorden, R. (2017). The science that's never been cited. Nature, 552(7684): 162-164.
Wallace, M.L., Lariviere, V., & Gingras, Y. (2009). Modeling a century of citation distributions. Journal of Informetrics, 3(4): 296-303.
Wassénius, E., Bunge, A.C., Scheuermann, M.K., et al. (2023). Creative destruction in academia: a time to reimagine practices in alignment with sustainability values. Sustainability Science, 18: 2769–2775.
Wetterer J.K. (2006). Quotation error, citation copying, and ant extinctions in Madeira. Scientometrics 67(3): 351–372.
Whitescarver, T.D., et al. (2023). A bibliometric analysis of ophthalmology, medicine, and surgery journals: comparing citation metrics and international collaboration. The Open Ophthalmology Journal, 17: e187436412303200
Yang, S., Ma, F., Song, Y., & Qiu, J. (2010). A longitudinal analysis of citation distribution breadth for Chinese scholars. Scientometrics, 85(3): 755-765.
Copyright © 2025 Anne-Wil Harzing. All rights reserved.
Anne-Wil Harzing is Emerita Professor of International Management at Middlesex University, London. She is a Fellow of the Academy of International Business, a select group of distinguished AIB members who are recognized for their outstanding contributions to the scholarly development of the field of international business. In addition to her academic duties, she also maintains the Journal Quality List and is the driving force behind the popular Publish or Perish software program.