Psychologists in many fields face a dilemma. Whereas most researchers are aware that randomized experiments are considered the "gold standard" for causal inference, manipulation of the independent variable of interest will often be unfeasible, unethical, or simply impossible. One can hardly assign couples to stay married or get a divorce; nonetheless, one might be interested in the causal effect of divorce on well-being. One cannot randomly resettle individuals into different strata of society, but one might be concerned about the causal effects of social class on behavior. One cannot randomize children to different levels of adversity, yet one might care about the potential negative consequences of childhood adversity on health in adulthood. This article provides very general guidelines for researchers who are interested in any of the many research questions that require causal inferences to be made on the basis of observational data. Researchers from different areas of psychology have chosen different strategies to cope with the weaknesses of observational data. To circumvent the issue altogether, some researchers have implemented "surrogate interventions": If the real-life cause of interest cannot be manipulated, there might be a proxy that can be randomized in the lab. For example, an influential study on the effects of social class on prosocial behavior included an experimental manipulation of perceived social class. Participants were asked to compare themselves with either the top or the bottom of the "social ladder," so as to temporarily change their subjective social class.
Causal inference is a central goal of research. However, most psychologists refrain from explicitly addressing causal research questions and avoid drawing causal inference on the basis of nonexperimental evidence. We argue that this taboo against causal inference in nonexperimental psychology impairs study design and data analysis, holds back cumulative research, leads to a disconnect between original findings and how they are interpreted in subsequent work, and limits the relevance of nonexperimental psychology for policymaking. At the same time, the taboo does not prevent researchers from interpreting findings as causal effects—the inference is simply made implicitly, and assumptions remain unarticulated. Thus, we recommend that nonexperimental psychologists begin to talk openly about causal assumptions and causal effects. Only then can researchers take advantage of recent methodological advances in causal reasoning and analysis and develop a solid understanding of the underlying causal mechanisms that can inform future research, theory, and policymakers.
Replication, an important, uncommon, and misunderstood practice, is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understanding to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understanding and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.
Secondary data analysis, or the analysis of preexisting data, provides a powerful tool for the resourceful psychological scientist. Never has this been more true than now, when technological advances enable both sharing data across labs and continents and mining large sources of preexisting data. However, secondary data analysis is easily overlooked as a key domain for developing new open-science practices or improving analytic methods for robust data analysis. In this article, we provide researchers with the knowledge necessary to incorporate secondary data analysis into their methodological toolbox. We explain that secondary data analysis can be used for either exploratory or confirmatory work, and can be either correlational or experimental, and we highlight the advantages and disadvantages of this type of research. We describe how transparency-enhancing practices can improve and alter interpretations of results from secondary data analysis and discuss approaches that can be used to improve the robustness of reported results. We close by suggesting ways in which scientific subfields and institutions could address and improve the use of secondary data analysis.
This study examined the long-standing question of whether a person's position among siblings has a lasting impact on that person's life course. Empirical research on the relation between birth order and intelligence has convincingly documented that performance on psychometric intelligence tests declines slightly from firstborns to later-borns. By contrast, the search for birth-order effects on personality has not yet resulted in conclusive findings. We used data from three large national panels from the United States (n = 5,240), Great Britain (n = 4,489), and Germany (n = 10,457) to resolve this open research question. This database allowed us to identify even very small effects of birth order on personality with sufficiently high statistical power and to investigate whether effects emerge across different samples. We furthermore used two different analytical strategies by comparing siblings with different birth-order positions (i) within the same family (within-family design) and (ii) between different families (between-family design). In our analyses, we confirmed the expected birth-order effect on intelligence. We also observed a significant decline of one-tenth of a standard deviation in self-reported intellect with increasing birth-order position, and this effect persisted after controlling for objectively measured intelligence. Most important, however, we consistently found no birth-order effects on extraversion, emotional stability, agreeableness, conscientiousness, or imagination. On the basis of the high statistical power and the consistent results across samples and analytical designs, we must conclude that birth order does not have a lasting effect on broad personality traits outside of the intellectual domain. This question has fascinated both the scientific community and the general public for more than 100 years. In 1874, Francis Galton, the youngest of nine siblings, analyzed a sample of English scientists and found that firstborns were overrepresented (1).
He suspected that eldest sons enjoy special treatment by their parents, allowing them to thrive intellectually. Half a century later, Alfred Adler, the second of six children, extended the psychology of birth order to personality traits (2). From his point of view, firstborns were privileged but also burdened by feelings of excessive responsibility and a fear of dethronement, and were thus prone to score high on neuroticism. Conversely, he expected later-borns, overindulged by their parents, to lack social empathy. Since then, empirical research on the relationship between birth order and intelligence has convincingly documented that performance on psychometric intelligence tests declines slightly from firstborns to later-borns (3), an effect that has been replicated repeatedly (4-6) and whose underlying causes have been investigated in depth (7, 8). By contrast, the search for birth-order effects on personality has resulted in a vast body of inconsistent findings, as documented by reviews in the 1970s and 1980s (9, 10). Nearly 70 years after Adler's observations, Frank Sulloway revitalized the ...
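The contrast between the two analytical strategies described above, the between-family and the within-family design, can be illustrated with a small simulation. Everything below is hypothetical: the variable names, the true effect of -0.1, and the confound linking family size to the shared family environment are illustrative assumptions, not values from the panel studies.

```python
import random
import statistics

random.seed(1)

families = []
for _ in range(2000):
    n_kids = random.choice([2, 3])
    # Hypothetical confound: larger families get a slightly less favorable
    # shared environment, and later birth-order positions exist only there.
    fam_env = random.gauss(0, 1) - 0.5 * (n_kids - 2)
    kids = [
        (order, -0.1 * order + fam_env + random.gauss(0, 1))
        for order in range(1, n_kids + 1)
    ]
    families.append(kids)

# Between-family design: pool all siblings across families and regress
# the trait on birth-order position (family background is uncontrolled).
orders = [o for kids in families for o, _ in kids]
traits = [t for kids in families for _, t in kids]
mo, mt = statistics.mean(orders), statistics.mean(traits)
slope_between = sum((o - mo) * (t - mt) for o, t in zip(orders, traits)) / sum(
    (o - mo) ** 2 for o in orders
)

# Within-family design: compare siblings to one another, which cancels
# everything shared within a family (including the confound above).
pair_slopes = [
    (kids[j][1] - kids[i][1]) / (kids[j][0] - kids[i][0])
    for kids in families
    for i in range(len(kids))
    for j in range(i + 1, len(kids))
]
slope_within = statistics.mean(pair_slopes)

print(f"between-family slope: {slope_between:.3f}")  # biased by the family-size confound
print(f"within-family slope:  {slope_within:.3f}")   # unbiased for the simulated -0.1 effect
```

Because the within-family comparison differences out everything siblings share, it recovers the simulated effect, whereas the pooled between-family estimate absorbs the family-background confound, which is one reason the studies above report both designs.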
The idea that birth-order position has a lasting impact on personality has been discussed for the past 100 years. Recent large-scale studies have indicated that birth-order effects on the Big Five personality traits are negligible. In the current study, we examined a variety of narrower personality traits in a large representative sample (n = 6,500-10,500 in between-family analyses; n = 900-1,200 in within-family analyses). We used specification-curve analysis to assess evidence for birth-order effects across a range of models implementing defensible yet arbitrary analytical decisions (e.g., whether to control for age effects or to exclude participants on the basis of sibling spacing). Although specification-curve analysis clearly confirmed the previously reported birth-order effect on intellect, we found no meaningful effects on life satisfaction, locus of control, interpersonal trust, reciprocity, risk taking, patience, impulsivity, or political orientation. The lack of meaningful birth-order effects on self-reports of personality was not limited to broad traits but also held for more narrowly defined characteristics.
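The logic of specification-curve analysis can be sketched in a few lines: enumerate every combination of defensible analytical decisions, estimate the effect under each, and inspect the resulting distribution of estimates. The sketch below uses simulated data; the two decisions (controlling for age, excluding participants by sibling spacing) are hypothetical stand-ins for the choices mentioned above, and the effect size of -0.1 is invented for illustration.

```python
import itertools
import random
import statistics

random.seed(2)

# Simulated respondents: birth order, age, sibling spacing, and an outcome
# with a true birth-order effect of -0.1 (all values hypothetical).
data = []
for _ in range(3000):
    order = random.choice([1, 2, 3])
    age = random.uniform(20, 60)
    spacing = random.uniform(0, 8)
    outcome = -0.1 * order + 0.01 * age + random.gauss(0, 1)
    data.append({"order": order, "age": age, "spacing": spacing, "y": outcome})

def ols_slope(rows, control_age):
    # Slope of y on birth order; optionally residualize both on age first.
    xs = [r["order"] for r in rows]
    ys = [r["y"] for r in rows]
    if control_age:
        ages = [r["age"] for r in rows]
        def residualize(vals):
            ma, mv = statistics.mean(ages), statistics.mean(vals)
            b = sum((a - ma) * (v - mv) for a, v in zip(ages, vals)) / sum(
                (a - ma) ** 2 for a in ages
            )
            return [v - b * (a - ma) for a, v in zip(ages, vals)]
        xs, ys = residualize(xs), residualize(ys)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )

# The specification curve: one estimate per combination of decisions.
estimates = []
for control_age, max_spacing in itertools.product([False, True], [8.0, 5.0, 3.0]):
    rows = [r for r in data if r["spacing"] <= max_spacing]
    estimates.append(ols_slope(rows, control_age))

estimates.sort()
print([round(e, 3) for e in estimates])  # the "curve" of effect estimates
```

In a real analysis the estimates would be plotted sorted by size, with the arbitrary decisions marked underneath each specification, so readers can judge whether a reported effect depends on any particular analytical choice.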
In psychological science, researchers often pay particular attention to the distinction between within- and between-person relationships in longitudinal data analysis. Here, we aim to clarify the relationship between the within- and between-person distinction and causal inference, and show that the distinction is informative but does not play a decisive role for causal inference. Our main points are threefold. First, within-person data are not necessary for causal inference; for example, between-person experiments can inform us about (average) causal effects. Second, within-person data are not sufficient for causal inference; for example, time-varying confounders can lead to spurious within-person associations. Finally, despite not being sufficient, within-person data can be tremendously helpful for causal inference. We provide pointers to help readers navigate the more technical literature on longitudinal models, and conclude with a call for more conceptual clarity: Instead of letting statistical models dictate which substantive questions we ask, we should start with well-defined theoretical estimands which in turn determine both study design and data analysis.
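The within- versus between-person distinction discussed above can be made concrete with person-mean centering on simulated longitudinal data. All names and effect sizes below are hypothetical: the data are generated so that the between-person association is positive while the within-person association is negative, which is exactly the pattern a pooled analysis would obscure.

```python
import random
import statistics

random.seed(3)

rows = []
for person in range(500):
    trait = random.gauss(0, 1)            # stable between-person difference
    for _ in range(6):                    # six measurement occasions
        x = trait + random.gauss(0, 1)    # predictor varies around trait level
        # Between-person association (via trait) is positive;
        # within-person (occasion-level) association is negative.
        y = 1.0 * trait - 0.5 * (x - trait) + random.gauss(0, 0.5)
        rows.append((person, x, y))

# Person means capture the between-person part ...
by_person = {}
for p, x, y in rows:
    by_person.setdefault(p, []).append((x, y))
x_means = {p: statistics.mean(x for x, _ in v) for p, v in by_person.items()}
y_means = {p: statistics.mean(y for _, y in v) for p, v in by_person.items()}

def slope(pairs):
    mx = statistics.mean(x for x, _ in pairs)
    my = statistics.mean(y for _, y in pairs)
    return sum((x - mx) * (y - my) for x, y in pairs) / sum(
        (x - mx) ** 2 for x, _ in pairs
    )

between = slope(list(zip(x_means.values(), y_means.values())))
# ... and person-mean-centered scores capture the within-person part.
within = slope([(x - x_means[p], y - y_means[p]) for p, x, y in rows])

print(f"between-person slope: {between:.2f}")  # positive
print(f"within-person slope:  {within:.2f}")   # negative
```

The decomposition illustrates the article's point: the within-person slope here is still only an association, and a time-varying confounder added to the simulation would bias it just as readily, which is why within-person data are helpful but not sufficient for causal inference.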