Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The present availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly "embodied" in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within- versus between-category similarity metric showed much greater separation between categories than representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions. We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to large-scale brain networks and biologically plausible accounts of concept acquisition.
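The within- versus between-category comparison described above can be made concrete with a small sketch. The Python snippet below uses made-up attribute vectors and category labels (not the published norms) to compute mean cosine similarity within and between a priori categories; higher within- than between-category similarity is the pattern reported for the experiential representation.

```python
# Sketch: within- vs. between-category similarity for attribute vectors.
# Vectors and category labels are illustrative placeholders, not the
# published experiential norms.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Each concept is a vector of salience ratings over experiential attributes
# (here only four dimensions, e.g., Vision, Motion, Manipulation, Sound).
concepts = {
    "dog":    np.array([6.0, 5.5, 1.0, 4.0]),
    "horse":  np.array([6.2, 5.8, 0.8, 4.5]),
    "hammer": np.array([5.0, 1.5, 6.0, 3.0]),
    "knife":  np.array([5.2, 1.0, 6.3, 1.5]),
}
category = {"dog": "animal", "horse": "animal", "hammer": "tool", "knife": "tool"}

within, between = [], []
names = list(concepts)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sim = cosine(concepts[a], concepts[b])
        (within if category[a] == category[b] else between).append(sim)

print("mean within-category similarity: ", np.mean(within))
print("mean between-category similarity:", np.mean(between))
```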
Recent research indicates that sensory and motor cortical areas play a significant role in the neural representation of concepts. However, little is known about the overall architecture of this representational system, including the role played by higher level areas that integrate different types of sensory and motor information. The present study addressed this issue by investigating the simultaneous contributions of multiple sensory-motor modalities to semantic word processing. With a multivariate fMRI design, we examined activation associated with 5 sensory-motor attributes (color, shape, visual motion, sound, and manipulation) for 900 words. Regions responsive to each attribute were identified using independent ratings of the attributes' relevance to the meaning of each word. The results indicate that these aspects of conceptual knowledge are encoded in multimodal and higher level unimodal areas involved in processing the corresponding types of information during perception and action, in agreement with embodied theories of semantics. They also reveal a hierarchical system of abstracted sensory-motor representations incorporating a major division between object interaction and object perception processes.
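One way to picture the analysis: each word's independently rated relevance on the five attributes serves as a regressor for that word's evoked response at every voxel. The sketch below simulates this voxelwise multiple regression; the actual pipeline additionally involves hemodynamic modeling, nuisance regressors, and group-level statistics, none of which are shown here.

```python
# Sketch: relating per-word attribute ratings to per-word voxel responses
# via multiple regression. Data are simulated; a real fMRI analysis would
# include HRF modeling, nuisance regressors, and group-level inference.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_voxels = 900, 200
attributes = ["color", "shape", "visual_motion", "sound", "manipulation"]

ratings = rng.uniform(0, 6, size=(n_words, len(attributes)))   # words x attributes
true_w = rng.normal(size=(len(attributes), n_voxels))          # attribute -> voxel weights
responses = ratings @ true_w + rng.normal(scale=2.0, size=(n_words, n_voxels))

# Least-squares fit: which voxels carry variance explained by each attribute?
X = np.column_stack([np.ones(n_words), ratings])               # intercept + ratings
betas, *_ = np.linalg.lstsq(X, responses, rcond=None)          # (1 + attrs) x voxels

for i, name in enumerate(attributes, start=1):
    print(f"{name:14s} strongest voxel beta: {np.abs(betas[i]).max():.2f}")
```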
The problem of how word meaning is processed in the brain has been a topic of intense investigation in cognitive neuroscience. While considerable correlational evidence exists for the involvement of sensory-motor systems in conceptual processing, it is still unclear whether they play a causal role. We investigated this issue by comparing the performance of patients with Parkinson’s disease (PD) with that of age-matched controls when processing action and abstract verbs. To examine the effects of task demands, we used tasks in which semantic demands were either implicit (lexical decision and priming) or explicit (semantic similarity judgment). In both tasks, PD patients’ performance was selectively impaired for action verbs (relative to controls), indicating that the motor system plays a more central role in the processing of action verbs than in the processing of abstract verbs. These results argue for a causal role of sensory-motor systems in semantic processing.
We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences.
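Under simplified assumptions, the pipeline can be sketched as a linear encoding model: attribute weights map each word's semantic vector to a voxel pattern, and predicted word patterns are averaged (superposed) to yield a predicted sentence pattern. The snippet below illustrates the idea with simulated data and ordinary least squares; it is not the authors' code, and a real analysis would use far more words than attributes.

```python
# Sketch of the encoding/superposition idea: regress word-level voxel
# patterns onto attribute vectors, then predict a new sentence by
# averaging the predicted patterns of its component words.
# Simulated toy data only (with so few words the fit is underdetermined).
import numpy as np

rng = np.random.default_rng(1)
n_attr, n_voxels = 65, 500

vocab = {w: rng.uniform(0, 6, n_attr) for w in
         ["man", "reads", "book", "child", "throws", "ball"]}
true_w = rng.normal(size=(n_attr, n_voxels))

# Word-level activation patterns (generated here from the model + noise;
# in the study these are decomposed from sentence-level fMRI data).
word_patterns = {w: v @ true_w + rng.normal(scale=1.0, size=n_voxels)
                 for w, v in vocab.items()}

# Estimate per-attribute activation maps by multiple regression.
X = np.stack([vocab[w] for w in vocab])                 # words x attributes
Y = np.stack([word_patterns[w] for w in vocab])         # words x voxels
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)           # attributes x voxels

def predict_sentence(words):
    """Average the predicted voxel patterns of the component words."""
    return np.mean([vocab[w] @ W_hat for w in words], axis=0)

pred = predict_sentence(["child", "reads", "book"])
print("predicted sentence pattern shape:", pred.shape)
```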
The ability to draw analogies requires 2 key cognitive processes: relational integration and resolution of interference. The present study aimed to identify the neural correlates of both component processes of analogical reasoning within a single, nonverbal analogy task using event-related functional magnetic resonance imaging. Participants verified whether a visual analogy was true by considering either 1 or 3 relational dimensions. On half of the trials, there was an additional need to resolve interference in order to make a correct judgment. An increase in the number of dimensions to integrate was associated with increased activation in the lateral prefrontal cortex as well as the lateral frontal pole in both hemispheres. When there was a need to resolve interference during reasoning, activation increased in the lateral prefrontal cortex but not in the frontal pole. We identified regions in the middle and inferior frontal gyri that were exclusively sensitive to demands on each component process, in addition to a partial overlap between these neural correlates of each component process. These results indicate that analogical reasoning is mediated by the coordination of multiple regions of the prefrontal cortex, of which some are sensitive to demands on only one of these 2 component processes, whereas others are sensitive to both.
According to an influential view of conceptual representation, action concepts are understood through motoric simulations, involving motor networks of the brain. A stronger version of this embodied account suggests that even figurative uses of action words (e.g., grasping the concept) are understood through motoric simulations. We investigated these claims by assessing whether Parkinson's disease (PD), a disorder affecting the motor system, is associated with selective deficits in comprehending action-related sentences. Twenty PD patients and 21 age-matched controls performed a sentence comprehension task, where sentences belonged to one of four conditions: literal action, non-idiomatic metaphoric action, idiomatic action, and abstract. The same verbs (referring to hand/arm actions) were used in the three action-related conditions. Patients, but not controls, were slower to respond to literal and idiomatic action than to abstract sentences. These results indicate that sensory-motor systems play a functional role in semantic processing, including processing of figurative action language.
The nature of the representational code underlying conceptual knowledge remains a major unsolved problem in cognitive neuroscience. We assessed the extent to which different representational systems contribute to the instantiation of lexical concepts in high-level, heteromodal cortical areas previously associated with semantic cognition. We found that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in the frontal, parietal, and temporal cortex. In most of these areas, we found a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective, and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organization. These results were found independently for object and event concepts. Our findings indicate that concept representations in the heteromodal cortex are based, at least in part, on experiential information. They also reveal that, in most heteromodal areas, event concepts have more heterogeneous representations (i.e., they are more easily decodable) than object concepts and that other areas beyond the traditional “semantic hubs” contribute to semantic cognition, particularly the posterior cingulate gyrus and the precuneus.
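The comparison of representational structures can be sketched in the style of representational similarity analysis: build a dissimilarity matrix from each candidate feature space (experiential, taxonomic, distributional) and correlate it with the neural dissimilarity structure. The example below uses simulated matrices purely for illustration; the study's actual decoding procedure differs in detail.

```python
# Sketch: comparing candidate representational structures to a neural
# similarity structure, in the spirit of representational similarity
# analysis. All data below are simulated placeholders.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
n_concepts = 40

experiential   = rng.uniform(0, 6, size=(n_concepts, 65))   # experiential features
taxonomic      = rng.uniform(0, 1, size=(n_concepts, 20))    # taxonomic features
distributional = rng.normal(size=(n_concepts, 300))          # word-embedding features

# Simulated voxel patterns driven (by construction) by experiential structure.
neural_patterns = (experiential @ rng.normal(size=(65, 400))
                   + rng.normal(scale=5.0, size=(n_concepts, 400)))

neural_rdm = pdist(neural_patterns, metric="correlation")
for name, feats in [("experiential", experiential),
                    ("taxonomic", taxonomic),
                    ("distributional", distributional)]:
    model_rdm = pdist(feats, metric="correlation")
    rho, _ = spearmanr(model_rdm, neural_rdm)
    print(f"{name:15s} RDM correlation with neural RDM: {rho:.2f}")
```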
The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this "general semantic network" (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence-divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features.
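The above-chance identification test can be illustrated with a leave-two-out scheme: fit the attribute-based encoding model on all but two concepts, predict both held-out activation patterns, and score whether the matched predicted-observed pairing correlates more strongly than the mismatched pairing. The sketch below uses simulated data and a five-attribute model; it stands in for, rather than reproduces, the published analysis.

```python
# Sketch: leave-two-out identification test for an attribute-based
# encoding model. Simulated data; five attributes stand in for
# sound, color, shape, manipulability, and visual motion.
import numpy as np

rng = np.random.default_rng(3)
n_concepts, n_attr, n_voxels = 60, 5, 300

attr = rng.uniform(0, 6, size=(n_concepts, n_attr))
true_w = rng.normal(size=(n_attr, n_voxels))
observed = attr @ true_w + rng.normal(scale=3.0, size=(n_concepts, n_voxels))

def corr(u, v):
    return float(np.corrcoef(u, v)[0, 1])

correct, total = 0, 0
for i in range(n_concepts):
    for j in range(i + 1, n_concepts):
        train = [k for k in range(n_concepts) if k not in (i, j)]
        W, *_ = np.linalg.lstsq(attr[train], observed[train], rcond=None)
        pred_i, pred_j = attr[i] @ W, attr[j] @ W
        # Correct if the matched predicted/observed pairing correlates more
        # strongly than the mismatched pairing.
        match = corr(pred_i, observed[i]) + corr(pred_j, observed[j])
        mismatch = corr(pred_i, observed[j]) + corr(pred_j, observed[i])
        correct += match > mismatch
        total += 1

print(f"pairwise identification accuracy: {correct / total:.2f} (chance = 0.50)")
```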