Infants (from Latin infans, speechless) are human beings who cannot speak. It took most of us the whole first year of our lives to overcome this infancy and to produce our first few meaningful words, but we were not idle as infants. We worked, rather independently, on two basic ingredients of word production. On the one hand, we established our primary notions of agency, interactancy, the temporal and causal structures of events, object permanence, and location. This provided us with a matrix for the creation of our first lexical concepts, concepts flagged by way of a verbal label. Initially, these word labels were exclusively auditory patterns, picked up from the environment. On the other hand, we created a repertoire of babbles, a set of syllabic articulatory gestures. These motor patterns normally spring up around the seventh month. The child carefully attends to their acoustic manifestations, leading to elaborate exercises in the repetition and concatenation of these syllabic patterns. In addition, these audiomotor patterns start resonating with real speech input, becoming more and more tuned to the mother tongue (De Boysson-Bardies & Vihman 1991; Elbers 1982). These exercises provided us with a protosyllabary, a core repository of speech motor patterns, which were, however, completely meaningless.

Real word production begins when the child starts connecting some particular babble (or a modification thereof) to some particular lexical concept. The privileged babble auditorily resembles the word label that the child has acquired perceptually. Hence word production emerges from a coupling of two initially independent systems, a conceptual system and an articulatory motor system. This duality is never lost in the further maturation of our word production system. Between the ages of 1;6 and 2;6 the explosive growth of the lexicon soon overtaxes the protosyllabary. It is increasingly hard to keep all the relevant whole-word gestures apart.
The child conquers this strain on the system by dismantling the word gestures through a process of phonemization; words become generatively represented as concatenations of phonological segments (Elbers & Wijnen 1992; C. Levelt 1994). As a consequence, phonetic encoding of words becomes supported by a system of phono-…

BEHAVIORAL AND BRAIN SCIENCES (1999). Abstract: Preparing words in speech production is normally a fast and accurate process. We generate them at a rate of two or three per second in fluent conversation, and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control by monitoring self-produced internal and overt speech. The core of the theory, ...
According to certain theories of language production, lexical access to a content word consists of two independent and serially ordered stages. In the first, semantically driven stage, so-called lemmas are retrieved, i.e., lexical items that are specified with respect to syntactic and semantic properties, but not with respect to phonological characteristics. In the second stage, the corresponding word forms, the so-called lexemes, are retrieved. This implies that access to a content word involves an early stage of exclusively semantic activation and a later stage of exclusively phonological activation. This seriality assumption was tested experimentally, using a picture-word interference paradigm in which the interfering words were presented auditorily. The results show an interference effect of semantically related words on picture-naming latencies at an early SOA (-150 ms), and a facilitatory effect of phonologically related words at later SOAs (0 ms, +150 ms). On the basis of these results it can be concluded that there is indeed a stage of lexical access to a content word where only its meaning is activated, followed by a stage where only its form is activated. These findings can be seen as empirical support for a two-stage model of lexical access or, alternatively, as putting constraints on the parameters in a network model of lexical access, such as the model proposed by Dell and Reich. © 1990 Academic Press, Inc.

One of the most influential models of language production has been proposed by Garrett (e.g., 1976, 1980, 1988). According to this model, the formulation of a sentence involves a sequence of processes generating different levels of representation. On the basis of a preverbal representation of what the speaker wants to express, the functional level representation is generated. It encodes the meanings of the lexical items and the grammatical relationships between them.
Based on the functional level representation, the positional level representation is constructed, which encodes the phonological forms of the words and their order in the surface structure of the sentence.

We thank Ger Desserjer and Hans Franssen, who ran the experiments with admirable patience and competence, and Kay Bock, Gary Dell, and two anonymous reviewers for helpful comments on an earlier version of the manuscript. Send requests for reprints to Herbert Schriefers, Max-Planck-Institute for Psycholinguistics,
This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n = 82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.
Lexical access in object naming involves the activation of a set of lexical candidates, the selection of the appropriate (or target) item, and the phonological encoding of that item. Two views of lexical access in naming are compared. From one view, the 2-stage theory, phonological activation follows selection of the target item and is restricted to that item. From the other view, which is most explicit in activation-spreading theories, all activated lexical candidates are phonologically activated to some extent. A series of experiments is reported in which subjects performed acoustic lexical decision during object naming at different stimulus-onset asynchronies. The experiments show semantic activation of lexical candidates and phonological activation of the target item, but no phonological activation of other semantically activated items. This supports the 2-stage view. Moreover, a mathematical model embodying the 2-stage view is fully compatible with the lexical decision data obtained at different stimulus-onset asynchronies.

One of a speaker's core skills is to lexicalize the concepts intended for expression. Lexicalization proceeds at a rate of two to three words per second in normal spontaneous speech, but doubling this rate is possible and not exceptional. The skill of lexicalizing a content word involves two components. The first is to select the appropriate lexical item from among some tens of thousands of alternatives in the mental lexicon. The second is to phonologically encode the selected item, that is, to retrieve its sound form, to create a phonological representation for the item in its context, and to prepare its articulatory program. An extensive review of the literature on lexicalization can be found in Levelt (1989). This article addresses only one aspect of lexicalization, namely its time course.
In particular, we examine whether the selection of an item and its phonological encoding can be considered to occur in two successive, nonoverlapping stages.

We acknowledge the invaluable contributions of John Nagengast and Johan Weustink, who programmed the computer-based experiments; of Ger Desserjer and Hans Fransen, who ran the experiments and assisted in data analysis; and of Inge Tarim, who provided graphical assistance. We also acknowledge Gary Dell's and Pienie Zwitserlood's detailed comments on an earlier version of this article, as well as the thorough comments of an anonymous reviewer.
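The two-stage, nonoverlapping view described above can be caricatured in a few lines of code: a probe that arrives during the selection stage should interact only with semantic information, while a probe arriving during the later encoding stage should interact only with the target's word form. The stage durations below are arbitrary illustrative values, not estimates from these studies.

```python
# Toy sketch of the two-stage (serial, nonoverlapping) view of lexical access.
# Stage durations are arbitrary placeholder values, not fitted parameters.

SELECTION_MS = 200   # lemma selection: only semantic information is active
ENCODING_MS = 200    # phonological encoding: only the target's form is active

def active_codes(t_ms):
    """Which kind of lexical information is active t_ms after naming starts?"""
    if t_ms < SELECTION_MS:
        return "semantic"          # lemmas of the target and its competitors
    elif t_ms < SELECTION_MS + ENCODING_MS:
        return "phonological"      # word form of the selected target only
    else:
        return "articulation"

# A distractor or probe interacts only with the stage it falls into:
assert active_codes(100) == "semantic"       # early probe: semantic effects only
assert active_codes(300) == "phonological"   # late probe: form effects only
```

On this sketch, an activation-spreading account would differ precisely in allowing "semantic" and "phonological" codes to be active simultaneously during an overlapping window.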
Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
A series of experiments was carried out investigating the time course of phonological encoding in language production, i.e., the question of whether all parts of the phonological form of a word are created in parallel, or whether they are created in a specific order. A speech production task was used in which the subjects in each test trial had to say one out of three or five response words as quickly as possible. In one condition, information was provided about part of the forms of the words to be uttered; in another condition this was not the case. The production of disyllabic words was speeded by information about their first syllable, but not by information about their second syllable. Experiments using trisyllabic words showed that a facilitatory effect could be obtained from information about the second syllable of the words, provided that the first syllable was also known. These findings suggest that the syllables of a word must be encoded strictly sequentially, according to their order in the word. © Academic Press, Inc.

In most theories of language production the meanings and sound forms of content words are represented as separate lexical units (see, for instance,
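The strictly sequential encoding pattern reported above can be sketched as a simple model in which advance information saves time only for an unbroken known prefix of the word. The per-syllable timing constant is an arbitrary illustrative value, not one estimated from the experiments.

```python
# Minimal sketch of strictly left-to-right phonological encoding.
# Each syllable takes a fixed (arbitrary) time to encode; a syllable can be
# prepared in advance only if every syllable before it is also known.

SYLLABLE_MS = 80  # illustrative per-syllable encoding time, not a fitted value

def encoding_time(n_syllables, known):
    """Time to finish encoding a word, given pre-known syllable positions.

    known: set of 0-based syllable indices provided in advance.
    Only an unbroken known prefix can be prepared ahead of speech onset.
    """
    prepared = 0
    while prepared in known:
        prepared += 1                      # advance through the known prefix
    return (n_syllables - prepared) * SYLLABLE_MS

# Disyllabic words: first-syllable information helps; second-syllable alone does not.
assert encoding_time(2, {0}) < encoding_time(2, set())
assert encoding_time(2, {1}) == encoding_time(2, set())
# Trisyllabic words: a second-syllable cue helps only when syllable 1 is also known.
assert encoding_time(3, {0, 1}) < encoding_time(3, {0})
```

A parallel-encoding account would instead predict a saving for any known syllable, regardless of its position, which is exactly what the second-syllable conditions fail to show.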
We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
Four experiments investigated the span of advance planning for phrases and short sentences. Dutch subjects were presented with pairs of objects, which they named using noun-phrase conjunctions (e.g., the translation equivalent of "the arrow and the bag") or sentences ("the arrow is next to the bag"). Each display was accompanied by an auditory distractor, which was related in form or meaning to the first or second noun of the utterance, or unrelated to both. For sentences and phrases, the mean speech onset time was longer when the distractor was semantically related to the first or second noun, and shorter when it was phonologically related to the first noun, than when it was unrelated. No phonological facilitation was found for the second noun. This suggests that before utterance onset both target lemmas and the first target form were selected.