[Survey chart: Is there a reproducibility crisis? 52% yes, a significant crisis; 38% yes, a slight crisis; 3% no, there is no crisis; 7% don't know. 1,576 researchers surveyed.]

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature. Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology[1] and cancer biology[2], found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence. The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. "At the current time there is no consensus on what reproducibility is or should be." But just recognizing that is a step forward, he says. "The next step may be identifying what is the problem and to get a consensus."
Author guidelines for journals could help to promote transparency, openness, and reproducibility
Open access, open data, open source and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices. DOI: http://dx.doi.org/10.7554/eLife.16800.001
Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary – the replication was either a success or a failure – and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.
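The scoring rule described above (a replication counts as a success if three or more of the five binary methods agree) can be sketched as follows. This is a minimal illustration, not the project's actual analysis code; the function name and the sample outcomes are hypothetical, and only the reported aggregate counts (39/97, 12/15, 51/112) come from the text.

```python
def replication_successful(method_outcomes, threshold=3):
    """Return True if at least `threshold` of the binary
    success/failure criteria (True = success) are met."""
    return sum(method_outcomes) >= threshold

# Hypothetical verdicts from the five binary methods for one effect:
outcomes = [True, True, False, True, False]
replication_successful(outcomes)  # 3 of 5 criteria met -> success

# Reproducing the combined success rate reported in the abstract:
combined_rate = round(100 * (39 + 12) / (97 + 15))  # 51/112 -> 46%
```

Note that the threshold is a judgment call baked into the headline numbers: a stricter rule (e.g. four of five) would yield lower success rates from the same underlying data.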
Psychological distance and abstraction both represent key variables of considerable interest to researchers across cognitive, social, and developmental psychology. Moreover, largely inspired by construal level theory, numerous experiments across multiple fields have now connected these 2 constructs, examining how psychological distance affects the level of abstraction at which people mentally represent the world around them. The time is clearly ripe for a quantitative synthesis to shed light on the relation between these constructs and investigate potential moderators. To this end, we conducted 2 meta-analyses of research examining the effects of psychological distance on abstraction and its downstream consequences. Across 106 papers containing a total of 267 experiments, our results showed a reliable and medium-sized effect of psychological distance on both level of abstraction in mental representation and the downstream consequences of abstraction. Importantly, these effects replicate across time, researchers, and settings. Our analyses also identified several key moderators, including the size of the difference in distance between 2 levels of a temporal distance manipulation and the dependent variable's capacity to tap processing of both abstract and concrete features (rather than only one or the other). We discuss theoretical and methodological implications, and highlight promising avenues for future research.
Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of “researcher degrees of freedom” aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called “OSF Preregistration,” http://osf.io/prereg/). The Prereg Challenge format was a “structured” workflow with detailed instructions and an independent review to confirm completeness; the “Standard” format was “unstructured” with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the “structured” format restricted the opportunistic use of researcher degrees of freedom better (Cliff’s Delta = 0.49) than the “unstructured” format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
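The Cliff's Delta of 0.49 reported above is a nonparametric effect size: the probability that a value from one group exceeds a value from the other, minus the reverse probability. A minimal sketch of the computation follows; the function name and sample data are illustrative, not taken from the study.

```python
from itertools import product

def cliffs_delta(xs, ys):
    """Cliff's Delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 to 1; 0 means the two groups overlap completely,
    and |delta| around 0.47 or more is conventionally "large".
    """
    greater = sum(1 for x, y in product(xs, ys) if x > y)
    less = sum(1 for x, y in product(xs, ys) if x < y)
    return (greater - less) / (len(xs) * len(ys))
```

Because it compares ranks rather than means, Cliff's Delta is well suited to the ordinal coder ratings used in this comparison of preregistration formats.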
research question and the quality of the methodology, not whether the findings are positive, novel, and clean. More than 250 journals have adopted RRs since 2013 on the theorized promise of improving rigor and credibility. Initial evidence suggests that RRs are (1) effective at mitigating publication bias, with a sharp increase in publishing negative results compared to the standard model[26,27], and (2) cited as often as or even more than other articles in the same journals[28]. However, there is no evidence about whether scholars perceive RRs to have higher, lower, or similar research quality compared with papers published in the standard model. The RR format could also have costs, such as authors pursuing less interesting questions or conducting less novel or creative research[29,30]. We conducted an observational investigation of perceptions of the quality and importance of RRs compared to the standard model across a variety of outcome criteria. We recruited 353 researchers to each peer review a pair of papers: one from 29 RRs from psychology and neuroscience and one from 57 matched non-RR comparison papers. Comparison papers addressed similar topics; about half were by the same first or corresponding authors, and about half were published in the same journal. RRs are a popular format for replication studies[3,31], but replications are rare in the standard model, so we excluded replication RRs. Researchers were assigned to papers according to their self-reported expertise based on the papers' keywords. Researchers self-reported that they were qualified to review the papers on average (N=353; RR M=3.74, SD=1.02; comparison paper M=3.59, SD=1.07; range 1 [not at all qualified] to 5 [substantially qualified]). Reviewers evaluated 19 outcome criteria including quality, rigor, novelty, creativity, and importance of the methodology and outcomes of the papers.
In some RRs, authors submitted preliminary studies as initial evidence supporting the approach of the proposed last study that was peer reviewed before the findings were known.

Supplementary Table 10: Article keywords included in the survey sample.