News & Press

Read the latest stories and announcements from scite. Writing a story about scite? Contact us at hi@scite.ai

Press Releases

On Joining Research Solutions

We’re excited to enter a new phase of our journey at scite by coming together with [Research Solutions](https://www.researchsolutions.com/). The motivation behind this decision was our desire to further our mission of introducing the next generation of citations and better serve our users, publishers, and the scholarly ecosystem for the long term.

Colabra and scite Partner on the Development of Citations Block

Brooklyn, NY — February 23, 2023 — scite, an award-winning tool that helps researchers discover and understand research findings more efficiently through Smart Citations, has partnered with Colabra, provider of a modern GxP-compliant Electronic Laboratory Notebook (ELN) and project management tool, to enhance the citation experience on the Colabra platform.

SPIE and scite Partner on the Development of Smart Citations

scite has partnered with SPIE, the international society for optics and photonics, to enhance the research experience on the SPIE Digital Library.

A Global Database of Citation Context and Coverage: Our Coverage in Turkey

Learn how scite makes research more inclusive and comprehensive by including Turkish content in its database.

Ask a Question, Get Answers Directly from Research Articles

Today, we're excited to introduce the capability for scite users to ask research questions in plain language and get answers directly from the full text of research articles.

scite Recruits Rogier van Erkel as Chief Revenue Officer

Rogier brings significant experience leading sales in academic and corporate markets.


Awards

ISMTE People's Choice Award
ALPSP Award for Innovation in Publishing
NSF Phase 1 SBIR Grant
NIH/NIDA SBIR Fast-track grant
Vesalius Prize Runner Up

Company News

Coverage in Future, Nature, Axios, Science, STAT+, and Nature Index.

Organizations We Serve


Request a demo or configure a trial for your organization.

Integrations & Partnerships


Let us know if you want to work together at hi@scite.ai

Research using scite

The social sciences, as a collection of scholarly fields, have quite different characteristics compared to STEM (science, technology, engineering, and medicine) fields. Still, researchers in the social sciences have been evaluated similarly to researchers in STEM for many years. However, many studies have been published regarding the need to evaluate the social sciences differently. This chapter examines research evaluation studies published in the social sciences and presents the advantages and disadvantages of current research evaluation topics, including altmetrics, content-based citation analysis, peer review, and scientific visualization.

Despite continued attention, finding adequate criteria for distinguishing “good” from “bad” scholarly journals remains an elusive goal. In this essay, I propose a solution informed by the work of Imre Lakatos and his methodology of scientific research programmes (MSRP). I begin by reviewing several notable attempts at appraising journal quality – focusing primarily on the impact factor and the development of journal blacklists and whitelists. In doing so, I note their limitations and link their overarching goals to those found within the philosophy of science. I argue that Lakatos’s MSRP, and specifically his classifications of “progressive” and “degenerative” research programmes, can be analogized and repurposed for the evaluation of scholarly journals. I argue that this alternative framework resolves some of the limitations discussed above and offers a more considered evaluation of journal quality – one that helps account for the historical evolution of journal-level publication practices and attendant contributions to the growth (or stunting) of scholarly knowledge. By doing so, the seeming problem of journal demarcation is diminished. In the process I utilize two novel tools (the mistake index and scite index) to further operationalize aspects of the MSRP.

This thesis presents a look into citation counts as a measure of scientific impact, which in turn is used to determine the replication value (RV): first, by comparing citation sources (WoS, Crossref, Scopus, and Scite) from which citation counts can be retrieved; second, by removing contradicting citations from the citation count and comparing this new count with the original total citation count. In both cases, rank order lists are formed from the citation counts and compared using two tests. First, Kendall’s tau is calculated to see how well the compared pairs of lists correlate. Second, the rank-biased overlap (RBO) is calculated to see how well pairs of lists overlap. The RBO differs from Kendall’s tau in that it can give more weight to citation counts at the top of the list, emphasizing the importance of highly ranked articles over lower-ranked ones. Both measures indicate a significant correlation and overlap between ranked lists originating from Scopus, Crossref, and WoS, and a lower correlation and overlap between Scite and all other sources. Based on this difference, Scite is not yet the best choice as a citation source for determining scientific impact. Both measures also indicate a strong correlation and overlap between the ranked list formed from the total citation counts and the ranked list formed from the total citation counts minus the contradicting citations. Based on this high correlation and overlap, removing contradicting citations is not needed when determining scientific impact.
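
The two list-comparison measures used in this thesis can be illustrated with a short, self-contained sketch. The article identifiers and citation counts below are invented for illustration, and the rank-biased overlap function shown is the simple truncated form rather than the extrapolated version from the RBO literature:

```python
# A minimal sketch of comparing citation rankings from two sources with
# Kendall's tau and a truncated rank-biased overlap (RBO).
from scipy.stats import kendalltau

def rank_biased_overlap(list_a, list_b, p=0.9):
    """Truncated RBO: top-weighted agreement between two ranked lists."""
    depth = min(len(list_a), len(list_b))
    seen_a, seen_b, score = set(), set(), 0.0
    for d in range(1, depth + 1):
        seen_a.add(list_a[d - 1])
        seen_b.add(list_b[d - 1])
        score += (p ** (d - 1)) * len(seen_a & seen_b) / d
    return (1 - p) * score

# Hypothetical citation counts for the same five articles from two sources.
counts_source_1 = {"a1": 120, "a2": 95, "a3": 40, "a4": 12, "a5": 3}
counts_source_2 = {"a1": 110, "a2": 101, "a3": 35, "a4": 20, "a5": 1}

articles = sorted(counts_source_1)
tau, p_value = kendalltau([counts_source_1[a] for a in articles],
                          [counts_source_2[a] for a in articles])

ranking_1 = sorted(counts_source_1, key=counts_source_1.get, reverse=True)
ranking_2 = sorted(counts_source_2, key=counts_source_2.get, reverse=True)
rbo = rank_biased_overlap(ranking_1, ranking_2)

print(f"Kendall's tau: {tau:.2f} (p={p_value:.3f}), RBO: {rbo:.2f}")
```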

With scite, researchers can use "smart citations" to see the context, the location, and the classifications of citation statements. Currently, over 800 million citation statements from scientific articles have been included (Citation Coverage). This tool is a work in progress, and its greatest coverage of the literature is in the field of biomedical research (scite 2021). Browser extensions and limited scite reports are free; however, increased functionality is limited to subscribers. What role do citations play in scholarly communication? "scite combines deep learning with a network of experts to evaluate the veracity of scientific work" (scite 2020). Citation statistics, e.g. the h-index and impact factor, are often viewed as directly proportional to a manuscript's value and integrity. Background literature and the literature review provide valuable information for both the authors and readers of scientific papers. With proper background research, authors can save time and money by learning from past successes and failures as evidenced in previous research. Citations can help readers trust that authors have reached scientifically sound conclusions based on previously established research and evidence. Even if a scholarly article is retracted, it can still be cited. The reasons for citation statements in a retracted article may vary. scite's goal is to further clarify citation statements and the role they play in all scholarly articles.

Abstract Citation indices are tools used by the academic community for research and research evaluation which aggregate scientific literature output and measure impact by collating citation counts. Citation indices help measure the interconnections between scientific papers but fall short because they fail to communicate contextual information about a citation. The usage of citations in research evaluation without consideration of context can be problematic, because a citation that presents contrasting evidence to a paper is treated the same as a citation that presents supporting evidence. To solve this problem, we have used machine learning, traditional document ingestion methods, and a network of researchers to develop a “smart citation index” called scite, which categorizes citations based on context. Scite shows how a citation was used by displaying the surrounding textual context from the citing paper and a classification from our deep learning model that indicates whether the statement provides supporting or contrasting evidence for a referenced work, or simply mentions it. Scite has been developed by analyzing over 25 million full-text scientific articles and currently has a database of more than 880 million classified citation statements. Here we describe how scite works and how it can be used to further research and research evaluation. Peer Review https://publons.com/publon/10.1162/qss_a_00146
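
As a rough illustration of the three-way classification task this abstract describes, here is a toy text classifier on invented citation statements. It is not scite's deep learning model; it only shows what it means to label a citation statement as supporting, contrasting, or mentioning:

```python
# A toy classifier for citation statements. This is NOT scite's model; the
# training sentences and labels below are invented for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "Our results confirm the effect reported by Smith et al. (2018).",
    "These findings are consistent with previous work [12].",
    "In contrast to [7], we observed no significant difference.",
    "Our data contradict the conclusions of Lee and Park (2015).",
    "Sample preparation followed the protocol described in [3].",
    "Several studies have examined this question [4-6].",
]
labels = ["supporting", "supporting", "contrasting",
          "contrasting", "mentioning", "mentioning"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(statements, labels)

print(model.predict(["Unlike earlier reports, we found the opposite trend."]))
```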

Abstract Between its origin in the 1950s and its endorsement by a consensus conference in 1984, the diet–heart hypothesis was the object of intense controversy. Paul et al. (1963) is a highly cited prospective cohort study that reported findings inconvenient for this hypothesis, including no association between diet and heart disease; however, many other findings were also reported. By citation context and network analysis of 343 citing papers, I show how Paul et al. was cited in the twenty years after its publication. Generally, different findings were cited by different communities focussing on different risk factors; communities established by either research foci title terms or via cluster membership as established via modularity maximisation. The most frequently cited findings were the significant associations between heart disease and serum cholesterol (n=85), blood pressure (n=57), and coffee consumption (n=54). The lack of association between diet and heart disease was cited in just 41 papers. Yet, no single empirical finding was referred to in more than 25% of the citing papers. This raises questions about the value of inferring impact from citation counts alone, and raises problems for studies using such counts to measure citation bias. Peer Review https://publons.com/publon/10.1162/qss_a_00154

Disagreement is essential to scientific progress, but the extent of disagreement in science, its evolution over time, and the fields in which it happens remain poorly understood. Here we report the development of an approach based on cue phrases that can identify instances of disagreement in scientific articles. These instances are sentences in an article that cite other articles. Applying this approach to a collection of more than four million English-language articles published between 2000 and 2015, we determine the level of disagreement in five broad fields within the scientific literature (biomedical and health sciences; life and earth sciences; mathematics and computer science; physical sciences and engineering; and social sciences and humanities) and 817 meso-level fields. Overall, the level of disagreement is highest in the social sciences and humanities, and lowest in mathematics and computer science. However, there is considerable heterogeneity across the meso-level fields, revealing the importance of local disciplinary cultures and the epistemic characteristics of disagreement. Analysis at the level of individual articles reveals notable episodes of disagreement in science, and illustrates how methodological artifacts can confound analyses of scientific texts.
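
The cue-phrase idea can be sketched in a few lines. The phrases below are illustrative guesses, not the signal terms actually used in the study:

```python
# A minimal sketch of flagging possible disagreement in citing sentences
# using cue phrases. The cue-phrase list here is illustrative only.
import re

CUE_PHRASES = [
    "in contrast to", "contrary to", "in disagreement with",
    "inconsistent with", "fail to replicate", "challenge the findings of",
]
pattern = re.compile("|".join(re.escape(p) for p in CUE_PHRASES), re.IGNORECASE)

def signals_disagreement(citing_sentence: str) -> bool:
    """Return True if the citing sentence contains a disagreement cue phrase."""
    return bool(pattern.search(citing_sentence))

print(signals_disagreement("In contrast to [5], our model predicts a smaller effect."))
print(signals_disagreement("We follow the approach of [5]."))
```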

“What Do You Mean?” was an undeniable bop of its era in which Justin Bieber explores the ambiguities of romantic communication. (I pinky promise this will soon make sense for scholarly communication librarians interested in artificial intelligence [AI].) When the single hit airwaves in 2015, there was a meta-debate over what Bieber meant to add to public discourse with lyrics like “What do you mean? Oh, oh, when you nod your head yes, but you wanna say no.” It is unlikely Bieber had consent culture in mind, but the failure of his songwriting team to take into account that some audiences might interpret it that way was ironic, considering the song is all about interpreting signals.

As part of the global mobilization to combat the present pandemic, almost 100 000 COVID-19-related papers have been published and nearly a thousand models of macromolecules encoded by SARS-CoV-2 have been deposited in the Protein Data Bank within less than a year. The avalanche of new structural data has given rise to multiple resources dedicated to assessing the correctness and quality of structural data and models. Here, an approach to evaluate the massive amounts of such data using the resource https://covid19.bioreproducibility.org is described, which offers a template that could be used in large-scale initiatives undertaken in response to future biomedical crises. Broader use of the described methodology could considerably curtail information noise and significantly improve the reproducibility of biomedical research.

Key points: While the importance of citation context has long been recognized, simple citation counts remain a crude measure of importance. Providing citation context should support the publication of careful science instead of headline-grabbing and salami-sliced non-replicable studies. Machine learning has enabled the extraction of citation context for the first time, and made the classification of citation types at scale possible.

This study investigates whether negative citations in articles and comments posted on post-publication peer review platforms both contribute equally to the correction of science. These two types of written evidence of disputes are compared by analyzing their occurrence in relation to articles that have already been retracted or corrected. We identified retracted or corrected articles in a corpus of 72,069 articles coming from the Engineering field, from 3 journals (Science, Tumor Biology, Cancer Research), and from 3 authors with many retractions to their credit (Sarkar, Schön, Voinnet). We used Scite to retrieve contradicting citations and PubPeer to retrieve the number of comments for each article, and then we considered them as traces left by scientists to contest published results. Our study shows that contradicting citations are very uncommon and that retracted or corrected articles are not more contradicted in scholarly articles than those that are neither retracted nor corrected, but they do generate more comments on PubPeer, presumably because of the possibility for contributors to remain anonymous. Moreover, post-publication peer review platforms, although external to the scientific publication process, contribute more to the correction of science than negative citations. Consequently, post-publication peer review venues, and more specifically the comments found on them, although not contributing to the scientific literature, are a mechanism for correcting science. Lastly, we introduced the idea of strengthening the role of contradicting citations to rehabilitate the clear expression of judgment in scientific papers. Article highlights: Negative citations are very uncommon. Retracted or corrected papers are not more contradicted than others in scholarly articles. Post-publication peer review platforms contribute more to the correction of science than in-text negative citations to papers.

A decade of in-text citation analysis based on natural language processing and machine learning techniques: an overview of empirical studies. Scientometrics.

Wikipedia is a widely used online reference work which cites hundreds of thousands of scientific articles across its entries. The quality of these citations has not been previously measured, and such measurements have a bearing on the reliability and quality of the scientific portions of this reference work. Using a novel technique, a massive database of qualitatively described citations, and machine learning algorithms, we analyzed 1,923,575 Wikipedia articles which cited a total of 824,298 scientific articles in our database, and found that most scientific articles cited by Wikipedia articles are uncited or untested by subsequent studies, and the remainder show a wide variability in contradicting or supporting evidence. Additionally, we analyzed 51,804,643 scientific articles from journals indexed in the Web of Science and found that, similarly, most were uncited or untested by subsequent studies.

The current security context illustrated by the COVID-19 pandemic shows us that we have vulnerabilities, that there are threats, and that there will be risks, including biological ones. In the field of BIO defense it is almost impossible to experiment at the general level; this can only be done in the laboratory, in vitro, in vivo, and possibly in silico. The calculation methodology for the effects of a possible attack with contagious biological warfare agents has certain assumptions and limitations. Because the population is assumed to be homogeneous, isolated groups to which the infection does not spread will lead to an overestimation. Possible individual variations, particular diseases, and asymptomatic cases are not taken into account, so either an underestimation or an overestimation occurs. The mathematical modeling of epidemic diseases induced by a biological attack with contagious agents can use the SEIRP model: Susceptible, Exposed and Infected, Infectious, Removed, and Prophylaxis Efficacious. The study is important for medical operational planning.
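
For readers unfamiliar with compartment models, here is a generic SEIR sketch in code. It does not reproduce the SEIRP formulation (with its prophylaxis-efficacy component) or the parameters used in the study; all values below are illustrative assumptions:

```python
# A standard SEIR compartment model, integrated numerically. This is only a
# generic sketch; the study's SEIRP model and its parameters are not shown.
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, n):
    s, e, i, r = y
    ds = -beta * s * i / n              # new exposures
    de = beta * s * i / n - sigma * e   # exposed become infectious
    di = sigma * e - gamma * i          # infectious recover or are removed
    dr = gamma * i
    return [ds, de, di, dr]

n = 1_000_000                      # population size (illustrative)
y0 = [n - 10, 0, 10, 0]            # initial S, E, I, R
params = (0.4, 1 / 5, 1 / 10, n)   # beta, incubation rate, recovery rate (illustrative)

solution = solve_ivp(seir, (0, 180), y0, args=params, dense_output=True)
days = np.linspace(0, 180, 7)
s, e, i, r = solution.sol(days)
print("Infectious by day:", dict(zip(days.astype(int), i.astype(int))))
```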