For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macro-level bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high (R² ≈ .99). There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database.
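The country-level comparison described above boils down to a squared Pearson correlation between two vectors of counts. As a minimal illustration, the sketch below computes R² for invented country paper counts (the numbers are placeholders, not the paper's data):

```python
# Illustrative sketch only: the country counts below are invented and
# merely show how an R^2 between WoS and Scopus country-level paper
# counts could be computed.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

wos    = [320_000, 85_000, 60_000, 45_000, 30_000]  # papers per country (WoS)
scopus = [340_000, 90_000, 64_000, 47_000, 31_000]  # same countries (Scopus)

r_squared = pearson_r(wos, scopus) ** 2
print(round(r_squared, 3))
```

The same computation applies to citation counts or to country ranks (using rank positions instead of raw counts gives a Spearman-style comparison).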
The goal of this paper is to examine the impact of the linguistic coverage of the databases used by bibliometricians on the capacity to effectively benchmark the work of researchers in the social sciences and humanities. We examine the strong link between bibliometrics and Thomson Scientific's databases and review the differences in the production and diffusion of knowledge in the social sciences and humanities (SSH) and the natural sciences and engineering (NSE). This leads to a re-examination of the debate on the coverage of these databases, more specifically in the SSH. The methods section explains how we compared the coverage of Thomson Scientific's databases in the NSE and SSH to Ulrich's extensive database of journals. Our results show a 20 to 25% overrepresentation of English-language journals in Thomson Scientific's databases compared to the list of journals in Ulrich's. This paper concludes that, because of this bias, Thomson Scientific's databases cannot be used in isolation to benchmark the output of countries in the SSH.
This paper examines the genesis of journal impact measures and how their evolution culminated in the journal impact factor (JIF) produced by the Institute for Scientific Information. The paper shows how the various building blocks of the dominant JIF (published in the Journal Citation Reports, JCR) came into being. The paper argues that these building blocks were all constructed fairly arbitrarily, or for purposes other than those that govern the contemporary use of the JIF. The result is a faulty method, widely open to manipulation by journal editors and to misuse by uncritical parties. The discussion examines some solutions offered to the bibliometric and scientific communities, considering the wide use of this indicator at present.
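For context on the indicator being critiqued, the standard two-year JIF as published in the JCR can be sketched as follows (the numbers are invented for illustration):

```python
def journal_impact_factor(citations, citable_items):
    """Two-year JIF: citations received in year Y to anything the journal
    published in years Y-1 and Y-2, divided by the number of 'citable
    items' (articles and reviews) published in those two years. The
    asymmetry between numerator (citations to all item types) and
    denominator (citable items only) is one of the arbitrary design
    choices the paper discusses."""
    return citations / citable_items

# Invented example: 450 citations in year Y to the previous two years'
# content, 300 citable items published in those two years.
print(journal_impact_factor(450, 300))  # 1.5
```

Because editors can influence both terms (e.g., by soliciting citations to recent issues, or by publishing more material that cites but does not count as citable), the formula is open to the kinds of manipulation the paper describes.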
sciences, but that its role in the humanities is stagnant and has even tended to diminish slightly in the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be taken when using bibliometric indicators that rely only on journal literature.
Introduction
Bibliometrics and other quantitative methods are being used increasingly in research evaluation because of the growing concern about accountability of public spending in science (King, 1987; Treasury Board of Canada Secretariat, 2001). While the validity and appropriateness of bibliometric methods are largely accepted in the natural sciences, the situation is more complex in the case of the social sciences and humanities. Bibliometricians who evaluate research output in the natural sciences can rely on a well-defined set of core journals that contains the most-cited research and is covered comprehensively by both disciplinary and interdisciplinary databases. The same cannot be said about the social sciences and humanities. Hicks (1999, 2004)
Despite a very large number of studies on the aging and obsolescence of scientific literature, no study has yet measured, over a very long time period, the changes in the rates at which scientific literature becomes obsolete. This article studies the evolution of the aging phenomenon and, in particular, how the age of cited literature has changed over more than 100 years of scientific activity. It shows that the average and median ages of cited literature have undergone several changes over the period. Specifically, both World War I and World War II had the effect of significantly increasing the age of the cited literature. The major finding of this article is that contrary to a widely held belief, the age of cited material has risen continuously since the mid-1960s. In other words, during that period, researchers were relying on an increasingly old body of literature. Our data suggest that this phenomenon is a direct response to the steady-state dynamics of modern science that followed its exponential growth; however, we also have observed that online preprint archives such as arXiv have had the opposite effect in some subfields.
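The aging measures discussed above reduce to simple statistics over the ages of cited works. A minimal sketch, with an invented reference list standing in for real citation data:

```python
from statistics import mean, median

# Invented (citing_year, cited_year) pairs standing in for the
# references extracted from a corpus of papers.
refs = [(2000, 1995), (2000, 1990), (2000, 1998), (2000, 1970), (2000, 1999)]

# Age of each cited work at the moment it was cited.
ages = [citing - cited for citing, cited in refs]

print(mean(ages), median(ages))  # 9.6 5
```

Tracking these two summary statistics year by year over the full citation record is what lets the article characterize how the age of cited literature has evolved over more than a century.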
While several authors have argued that conference proceedings are an important source of scientific knowledge, the extent of their importance has not been measured in a systematic manner. This article examines the scientific impact and aging of conference proceedings compared to those of scientific literature in general. It shows that the relative importance of proceedings is diminishing over time and currently represents only 1.7% of references made in the natural sciences and engineering, and 2.5% in the social sciences and humanities. Although the scientific impact of proceedings is losing ground to other types of scientific literature in nearly all fields, in engineering it has grown from 8% of references in the early 1980s to its current 10%. Proceedings play a particularly important role in computer science, where they account for close to 20% of references. This article also shows that, not unexpectedly, proceedings age faster than cited scientific literature in general. The evidence thus shows that proceedings have a relatively limited scientific impact, on average representing only about 2% of total citations, that their relative importance is shrinking, and that they become obsolete faster than the scientific literature in general.
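The percentages reported above are shares of references by cited document type. A small sketch of that computation, using an invented reference list tagged by type:

```python
from collections import Counter

# Invented reference list: each entry is the document type of one
# cited work, standing in for a large-scale citation dataset.
cited_types = ["journal"] * 90 + ["proceedings"] * 2 + ["book"] * 8

counts = Counter(cited_types)
proceedings_share = counts["proceedings"] / len(cited_types)

print(f"{proceedings_share:.1%}")  # 2.0%
```

Computing this share per field and per publication year is what reveals both the field differences (e.g., computer science vs. the rest) and the trend over time.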