2020
DOI: 10.1177/2515245920958687
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

Abstract: Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology …

Cited by 46 publications (28 citation statements)
References 71 publications
“…Open Science Collaboration (2015) replicated 100 findings from 2008 issues of three psychology journals and observed that 36% achieved significance in the same direction, with effect sizes 49% as large as those of the original studies. "Multi-site replications" include the series titled "Many Labs" (Ebersole et al., 2016, 2020; Klein et al., 2014, 2018, 2019), registered replication reports primarily from the journal Advances in Methods and Practices in Psychological Science (Alogna et al., 2014; Bouwmeester et al., 2017; Cheung et al., 2016; Colling et al., 2020; Eerland et al., 2016; Hagger et al., 2016; McCarthy, Skowronski, et al., 2018; O'Donnell et al., 2018; Verschuere et al., 2018; Wagenmakers et al., 2016), papers from the Collaborative Replications and Education Project (Ghelfi et al., 2020; Leighton et al., 2018; Wagge et al., 2018), and other similar efforts (Dang et al., 2021; ManyBabies Consortium, 2020; McCarthy et al., 2020; McCarthy, Hartnett, et al., 2018; Moran et al., 2020; Schweinsberg et al., 2016). Collectively (n = 77), 56% of multi-site replications reported statistically significant evidence in the same direction, with effect sizes 53% as large as those of the original studies (Figure 1).…”
Section: The State of Replicability of Psychological Science
Citation type: mentioning
confidence: 99%
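
The aggregate figures quoted in the statement above (the percentage of replications significant in the same direction, and replication effect sizes as a fraction of the originals) can be made concrete with a short sketch. The Python snippet below is purely illustrative: the effect sizes and p values are made up, and the ratio-of-means aggregation is only one of several conventions a report like this might use; it is not the citing paper's analysis code.

# Illustrative only: how summary figures like "56% significant in the
# same direction" and "effect sizes 53% as large as the originals" can
# be computed from a set of replication results. All data are hypothetical.

original_es    = [0.48, 0.31, 0.62, 0.25, 0.40]   # original effect sizes (e.g., r)
replication_es = [0.30, 0.02, 0.41, -0.05, 0.22]  # replication effect sizes
replication_p  = [0.01, 0.60, 0.003, 0.72, 0.04]  # replication p values

# A replication "succeeds" here if it is significant at p < .05 and its
# effect has the same sign as the original.
successes = [
    p < 0.05 and (o > 0) == (r > 0)
    for o, r, p in zip(original_es, replication_es, replication_p)
]
success_rate = sum(successes) / len(successes)

# One common relative-effect-size summary: the ratio of the mean
# replication effect to the mean original effect (conventions vary).
mean_rep = sum(replication_es) / len(replication_es)
mean_orig = sum(original_es) / len(original_es)
relative_es = mean_rep / mean_orig

print(f"Significant in the same direction: {success_rate:.0%}")
print(f"Replication effects {relative_es:.0%} as large as the originals")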
“…Speculative appeals to context sensitivity are common in response to failures to replicate (Cesario, 2014; Crisp et al., 2014; Dijksterhuis, 2018; Ferguson et al., 2014; Gilbert et al., 2016; Schnall, 2014; Schwarz & Strack, 2014; Shih & Pittinsky, 2014), but empirical demonstrations of a failed replication, attributed to unaddressed context sensitivity, then being "restored" in follow-up tests that address the presumed sensitivity have yet to be observed. Further, there are multiple examples of presumed context sensitivity failing to occur, or failing to account for replication failures, when examined directly (Ebersole, Atherton, et al., 2016; Ebersole et al., 2020; Klein et al., 2014, 2018). Heterogeneity is sometimes observed in replication studies, but it is usually modest and insufficient to make a replicable phenomenon appear or disappear based on factors that would not have been anticipated in advance of conducting the studies (Baribault et al., 2018; Klein et al., 2014, 2018; Olsson-Collentine et al., 2020).…”
Section: Theory
Citation type: mentioning
confidence: 99%