The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline‐specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large‐scale, multi‐laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will help us not only estimate how robust and replicable these phenomena are but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less‐biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best‐practices blueprints for future infancy research.
One of the most intriguing findings on language comprehension is that violations of syntactic predictions can affect event-related potentials as early as 120 ms, in the same time window as early sensory processing. This effect, the so-called early left-anterior negativity (ELAN), has been argued to reflect word category access and initial syntactic structure building (Friederici, 2002). In two experiments, we used magnetoencephalography to investigate (a) whether rapid word category identification relies on overt category-marking closed-class morphemes and (b) whether violations of word category predictions affect modality-specific sensory responses. Participants read sentences containing violations of word category predictions. Unexpected items varied in whether or not their word category was marked by an overt function morpheme. In Experiment 1, the amplitude of the visual evoked M100 component was increased for unexpected items, but only when word category was overtly marked by a function morpheme. Dipole modeling localized the generator of this effect to the occipital cortex. Experiment 2 replicated the main results of Experiment 1 and eliminated two non-morphology-related explanations of the M100 contrast we observed between targets containing overt category-marking morphology and targets that lacked such morphology. Our results show that during reading, syntactically relevant cues in the input can affect activity in occipital regions at around 125 ms, a finding that may shed new light on the remarkable rapidity of language processing.
Syntactic factors can rapidly affect behavioral and neural responses during language processing; however, the mechanisms that allow this rapid extraction of syntactically relevant information remain poorly understood. We address this issue using magnetoencephalography and find that an unexpected word category (as in "The recently princess…") elicits enhanced activity in visual cortex as early as 120 ms, as a function of the compatibility of a word's form with the form properties associated with a predicted word category. Since no sensitivity to linguistic factors has previously been reported for words in isolation at this stage of visual analysis, we propose that predictions about upcoming syntactic categories are translated into form-based estimates, which are made available to sensory cortices. This finding may be a key component in elucidating the mechanisms that allow the extreme rapidity and efficiency of language comprehension.
The experiments reported here investigated the development of a fundamental component of cognition: the ability to recognize and generalize abstract relations. Infants were presented with simple rule-governed patterned sequences of visual shapes (ABB, AAB, and ABA) that could be discriminated from differences in the position of the repeated element (late, early, or nonadjacent, respectively). Eight-month-olds were found to distinguish patterns on the basis of the repetition but appeared insensitive to its position in the sequence; 11-month-olds distinguished patterns over the position of the repetition but appeared insensitive to the nonadjacent repetition. These results suggest that abstract pattern detection may develop incrementally in a process of constructing complex relations from more primitive components.

Detection and generalization of patterns is a fundamental, core component of cognition, central to object and face recognition (Biederman, 1987; Hummel & Biederman, 1992), categorization (Kruschke, 1992), inference (Tenenbaum & Griffiths, 2001), reasoning (Murphy, 2002), word segmentation (Swingley, 2005), language acquisition (Brown, 1973; Pinker, 1994), and other developmental achievements. A central question often posed by developmental researchers, therefore, concerns the ability of infants and children to learn patterns and structure. The environment contains an immeasurable variety of objects and events, and an infinite number of relations between them, most of which are not useful for the developing child. It is essential, then, to understand the types of patterns that children are and are not able to learn.

Correspondence should be addressed to Scott P. Johnson, UCLA Psychology-Developmental, Box 951563, 1285 Franz Hall, Los Angeles, CA 90095-1563. E-mail: scott.johnson@ucla.edu. Infancy. Author manuscript; available in PMC 2009 March 11.
One common approach in investigations of early pattern perception is to examine infants' sensitivity to structured relations among stimulus features in visual or auditory input. Experiments on statistical learning, for example, have explored the extent to which infants detect and use distributional information in auditory or visual sequences to combine individual features into larger units. Typically in these experiments, infants are presented with a stream of input consisting of repeating multielement units with randomized order but fixed internal structure. Saffran, Aslin, and Newport (1996) used this approach to investigate 8-month-old infants' word segmentation in a corpus of artificial speech. Noting that adjacent sounds in natural speech that are likely to co-occur are usually found within words, whereas low-probability sound pairs tend to span word boundaries, Saffran et al. asked whether this difference in probability of co-occurrence provides potential information for word boundaries. Infants' discrimination of high- and low-probability sound pairs was e...
Everyone agrees that infants possess general mechanisms for learning about the world, but the existence and operation of more specialized mechanisms is controversial. One such mechanism, rule learning, has been proposed as potentially specific to speech, based on findings that 7-month-olds can learn abstract repetition rules from spoken syllables (e.g., ABB patterns: wo-fe-fe, ga-tu-tu…) but not from closely matched stimuli, such as tones. Subsequent work has shown that learning of abstract patterns is not simply specific to speech. However, we still lack a parsimonious explanation to tie together the diverse, messy, and occasionally contradictory findings in that literature. We took two routes to creating a new profile of rule learning: meta-analysis of 20 prior reports on infants' learning of abstract repetition rules (including 1,318 infants in 63 experiments total), and an experiment on learning of such rules from a natural, non-speech communicative signal. These complementary approaches revealed that infants were most likely to learn abstract patterns from meaningful stimuli. We argue that the ability to detect and generalize simple patterns supports learning across domains in infancy, but chiefly when the signal is meaningfully relevant to infants' experience with sounds, objects, language, and people.
Psychological scientists have become increasingly concerned with issues related to methodology and replicability, and infancy researchers in particular face specific challenges related to replicability: For example, high-powered studies are difficult to conduct, testing conditions vary across labs, and different labs have access to different infant populations. Addressing these concerns, we report on a large-scale, multisite study aimed at (a) assessing the overall replicability of a single theoretically important phenomenon and (b) examining methodological, cultural, and developmental moderators. We focus on infants’ preference for infant-directed speech (IDS) over adult-directed speech (ADS). Stimuli of mothers speaking to their infants and to an adult in North American English were created using seminaturalistic laboratory-based audio recordings. Infants’ relative preference for IDS and ADS was assessed across 67 laboratories in North America, Europe, Australia, and Asia using the three common methods for measuring infants’ discrimination (head-turn preference, central fixation, and eye tracking). The overall meta-analytic effect size (Cohen’s d) was 0.35, 95% confidence interval = [0.29, 0.42], which was reliably above zero but smaller than the meta-analytic mean computed from previous literature (0.67). The IDS preference was significantly stronger in older children, in those children for whom the stimuli matched their native language and dialect, and in data from labs using the head-turn preference procedure. Together, these findings replicate the IDS preference but suggest that its magnitude is modulated by development, native-language experience, and testing procedure.
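The pooled estimate reported above (Cohen's d = 0.35, 95% CI [0.29, 0.42]) is the kind of quantity produced by inverse-variance meta-analysis. As a minimal sketch only, here is a fixed-effect version of that computation; the per-lab numbers are hypothetical, and the actual ManyBabies analysis may have used a different (e.g., random-effects) model:

```python
import math

def pooled_effect(ds, ses):
    """Inverse-variance-weighted (fixed-effect) pooled effect size.

    ds  -- per-lab Cohen's d estimates
    ses -- their standard errors
    Returns (pooled d, 95% CI lower bound, 95% CI upper bound).
    """
    ws = [1.0 / se ** 2 for se in ses]               # weight = 1 / variance
    d = sum(w * x for w, x in zip(ws, ds)) / sum(ws)  # weighted mean
    se = math.sqrt(1.0 / sum(ws))                    # SE of the pooled estimate
    return d, d - 1.96 * se, d + 1.96 * se

# Hypothetical per-lab estimates, for illustration only
d, lo, hi = pooled_effect([0.20, 0.40, 0.50], [0.10, 0.15, 0.12])
```

Labs with smaller standard errors (typically larger samples) receive proportionally more weight, which is why a multisite pooled estimate can be much more precise than any single lab's result.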
Please cite this article in press as: Srinivasan, M., Rabagliati, H., How concepts and conventions structure the lexicon: Cross-linguistic evidence from polysemy. Lingua (2015)

Abstract

Words often have multiple distinct but related senses, a phenomenon called polysemy. For instance, in English, words like chicken and lamb can label animals and their meats, while words like glass and tin can label materials and artifacts derived from those materials. In this paper, we ask why words have some senses but not others, and thus what constrains the structure of polysemy. Previous work has pointed to two different sources of constraints. First, polysemy could reflect conceptual structure: word senses could be derived based on how ideas are associated in the mind. Second, polysemy could reflect a set of arbitrary, language-specific conventions: word senses could be difficult to derive and might have to be memorized and stored. We used a large-scale cross-linguistic survey to elucidate the relative contributions of concepts and conventions to the structure of polysemy. We explored whether 27 distinct patterns of polysemy found in English are also present in 14 other languages. Consistent with the idea that polysemy is constrained by conceptual structure, we found that almost all surveyed patterns of polysemy (e.g., animal for meat, material for artifact) were present across languages. However, consistent with the idea that polysemy reflects language-specific conventions, we also found variation across languages in how patterns are instantiated in specific senses (e.g., the word for glass material is used to label different glass artifacts across languages).
We argue that these results are best explained by a "conventions-constrained-by-concepts" model, in which the different senses of words are learned conventions, but conceptual structure makes some types of relations between senses easier to grasp than others, such that the same patterns of polysemy evolve across languages. This opens a new view of lexical structure, in which polysemy is a linguistic adaptation that makes it easier for children to learn word meanings and build a lexicon.
Lay Abstract

The Weak Central Coherence hypothesis is one of the most important cognitive theories of ASD. It argues that individuals with ASD have a detail-focused cognitive style that makes it hard for them to integrate information into its broader context. In this study, we examined whether this prediction correctly explains how young children with ASD understand words in sentences. Many words have multiple meanings (e.g., the homophones 'bat' or 'bank'). The Weak Central Coherence hypothesis predicts a difficulty using context to guess which meaning is correct. In our study, we used eye tracking to see if there are differences in how 7-year-old ASD and TD children understand ambiguous words. Children heard sentences containing ambiguous words while they looked at pictures. The context provided by the sentence meant that the pictures either were or were not related to the appropriate meaning of the ambiguous word. We found that, in both groups, children gazed at the pictures much more when context meant that they were related. This suggests that both groups similarly use context to determine the meanings of ambiguous words, which goes against the predictions of Weak Central Coherence, and suggests that refinement of the theory is necessary.

…explained by co-occurring language impairments. Here we provide a strong test of these claims, using the visual world eye tracking paradigm to examine the online mechanisms by which children with autism resolve linguistic ambiguity. To address concerns about both language impairments and compensatory strategies, we used a sample whose verbal skills were strong and whose average age (7;6) was lower than previous work on lexical ambiguity resolution in ASD. Participants (40 with autism and 40 controls) heard sentences with ambiguous words in contexts that either strongly supported one reading or were consistent with both (John fed/saw the bat).
We measured activation of the unintended meaning through implicit semantic priming of an associate (looks to a depicted baseball glove). Contrary to the predictions of weak central coherence, children with ASD, like controls, quickly used context to resolve ambiguity, selecting appropriate meanings within a second. We discuss how these results constrain the generality of weak central coherence.