Doing research inevitably involves making numerous decisions that can influence research outcomes in ways that lead to overconfidence in statistical conclusions. One proposed method to increase the interpretability of research findings is preregistration, which involves documenting analytic choices in a public, third-party repository before the data can exert any influence. To investigate whether preregistration in psychology lives up to that potential, we examined all articles published in Psychological Science with a Preregistered badge between February 2015 and November 2017 and assessed their adherence to the corresponding preregistration plans. We observed deviations from the plan in all studies and, more importantly, in all but one study at least one of these deviations was not fully disclosed. We discuss examples and possible explanations, and highlight good practices for preregistering research.
Preregistration is a method to increase research transparency by documenting research decisions in a public, third-party repository before the data can exert any influence. It is becoming increasingly popular in all subfields of psychology and beyond. Adhering to the preregistration plan may not always be feasible and is not even necessarily desirable, but without disclosure of deviations, readers who do not carefully consult the preregistration plan might get the incorrect impression that the study was conducted and reported exactly as planned. In this paper, we investigated adherence and disclosure of deviations for all articles published with the Preregistered badge in Psychological Science between February 2015 and November 2017 and shared our findings with the corresponding authors for feedback. Two out of 27 preregistered studies contained no deviations from the preregistration plan. In one study, all deviations were disclosed. Nine studies disclosed none of their deviations. We mainly observed (un)disclosed deviations from the plan regarding the reported sample size, exclusion criteria, and statistical analyses. This closer look at the first generation of preregistrations reveals possible hurdles for reporting preregistered studies and provides input for future reporting guidelines. We discuss the results and possible explanations, and provide recommendations for preregistered research.
Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not yet common practice. The reasons for not sharing data are myriad: some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. For this explanation, it would be interesting to know whether authors who do not share data genuinely made more errors than authors who do share data. Wicherts, Bakker, and Molenaar (2011) examined errors that can be discovered based on the published manuscript alone, because it is impossible to reanalyze unavailable data. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set on data sharing upon request for reanalysis, originating from Vanpaemel et al. (2015), to replicate the findings of Wicherts et al. (2011). We applied statcheck to detect consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
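The consistency errors mentioned above are mismatches between a reported test statistic and its reported p value. A minimal sketch of the kind of check that statcheck (an R package) automates is shown below; the function names and the use of a normal approximation to the t distribution are illustrative assumptions, not the actual statcheck implementation, which uses exact distributions and rounding-aware comparisons.

```python
# Sketch of a statcheck-style consistency check: recompute the two-tailed
# p value from a reported t statistic and compare it to the reported p.
# The normal approximation used here is only adequate for large df.
from statistics import NormalDist

def p_from_t(t_value: float) -> float:
    """Two-tailed p value via the normal approximation to the t distribution."""
    return 2 * (1 - NormalDist().cdf(abs(t_value)))

def is_consistent(t_value: float, p_reported: float, tol: float = 0.005) -> bool:
    """Flag a reported p that differs from the recomputed p by more than tol."""
    return abs(p_from_t(t_value) - p_reported) <= tol

# A plausible reported result: t(500) = 2.60, p = .009
print(is_consistent(2.60, 0.009))  # recomputed p is about .0093 -> True
# A likely typo in the p value: t(500) = 2.60, p = .09
print(is_consistent(2.60, 0.09))   # off by an order of magnitude -> False
```

Such a check can only catch internal inconsistencies in the reported numbers; it cannot detect errors that require reanalyzing the raw data, which is why data availability matters for the studies discussed here.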
To test scientific hypotheses in the social sciences, the substantive hypothesis is formulated together with auxiliary hypotheses, so that predictions about observable quantities can be logically derived from their conjunction: the complete hypothesis. Auxiliary hypotheses are statistical and theoretical claims about the variables of interest. For instance, beliefs about how the observed data were generated, the cause
Study preregistration has become increasingly popular in psychology, but its effectiveness in restricting potentially biasing researcher degrees of freedom remains unclear. We used an extensive protocol to assess the strictness of preregistrations and the consistency between the preregistrations and publications of 300 preregistered psychology studies. We found that preregistrations often lack methodological details and that undisclosed deviations from preregistered plans are frequent. Combining the strictness and consistency results indicates that biases due to researcher degrees of freedom remain likely in many preregistered studies. More comprehensive registration templates typically yielded stricter, and hence better, preregistrations. We did not find that the effectiveness of preregistrations differed over time or between original and replication studies. Furthermore, we found that operationalizations of variables were generally more effectively preregistered than other study parts. Inconsistencies between preregistrations and published studies were mainly encountered for data collection procedures, statistical models, and exclusion criteria. Our results indicate that, to unlock the full potential of preregistration, researchers in psychology should aim to write stricter preregistrations, adhere to these preregistrations more faithfully, and report any deviations from them more transparently. This could be facilitated by training and education to improve preregistration skills, as well as by the development of more comprehensive templates.
Just as teachers give students exams to assess their mastery of a subject, researchers submit their theories to empirical tests. And just as a high score on a test is not by itself sufficient to believe in a student's mastery of a subject, researchers need severe tests to make reliable inferences from observations to theories. In this paper, we provide an explication of the concept of severity and how it underlies three current methodological crises in psychology: the theory crisis, the measurement crisis, and the generalizability crisis. Our detailed account reinforces the importance of designing tests that can prove you wrong, and should assist empirical researchers in evaluating the severity of their own tests.
Started as a small journal club initiative at the University of Oxford, ReproducibiliTea is now an international network of journal clubs spread across more than 100 institutions in 25 different countries. As founders of the KU Leuven ReproducibiliTea