The nucleus accumbens (Nacb) receives inputs from the hippocampus and amygdala, but it remains unclear how these inputs are functionally organized and how they may interact. The interplay between these input pathways was examined with electrophysiological tools in the rat, in vivo, under halothane anesthesia. After fornix/fimbria stimulation (Fo/Fi; subicular projection fibers to the Nacb), mono- and polysynaptically driven single units were recorded in the medial shell/core regions of the Nacb and in the ventromedial caudate putamen. Neurons driven monosynaptically by basolateral amygdala (BLA) stimulation were found in the medial shell/core and in the ventrolateral shell/core regions. In the areas of convergence (medial shell/core), paired activation of the BLA followed by that of the Fo/Fi resulted in an enhancement of the Fo/Fi response, whereas stimulation in the reverse order, Fo/Fi followed by BLA, led to a depression of the BLA response. In addition to these patterns of interaction, tetanization of the Fo/Fi to Nacb pathway caused a homosynaptic decremental (long-term) potentiation in the Nacb, accompanied by a heterosynaptic (long-term) depression of the nontetanized BLA to Nacb pathway. We postulate that hippocampal inputs may close a "gate" for the amygdala inputs, whereas the gate is opened for the hippocampal inputs by previous amygdalar activity. These opposite effects on Nacb neuronal populations should be taken into account when interpreting behavioral phenomena, particularly with respect to the contrasting effects of the amygdala and the hippocampus on locomotion and place learning.
The orbitofrontal cortex (OBFc) has been suggested to code the motivational value of environmental stimuli and to use this information for the flexible guidance of goal-directed behavior. To examine whether information regarding reward prediction is quantitatively represented in the rat OBFc, neural activity was recorded during an olfactory discrimination "go"/"no-go" task in which five different odor stimuli were predictive of various amounts of reward or an aversive reinforcer. Neural correlates related to both actual and expected reward magnitude were observed. Responses related to reward expectation occurred during the execution of the behavioral response toward the reward site and within a waiting period prior to reinforcement delivery. About one-half of these neurons demonstrated differential firing toward the different reward sizes. These data provide new and strong evidence that reward expectancy, regardless of reward magnitude, is coded by neurons of the rat OBFc, and they are indicative of a representation of quantitative information concerning expected reward. Moreover, neural correlates of reward expectancy appear to be distributed across both motor and nonmotor phases of the task.

It has long been noted that the magnitude of a primary reinforcer exerts a profound effect on the selection and speed of behavioral responses (Black 1968; Campbell and Seiden 1974; Brown and Bowman 1995; Boysen et al. 2001; Bohn et al. 2003). Likewise, in computational neuroscience, different algorithms for reinforcement learning (RL) consider reward magnitude an important parameter to be gauged and predicted during sensorimotor processing (Sutton and Barto 1981; Schultz et al. 1997).
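As a minimal, purely illustrative sketch (not taken from the cited studies, and with arbitrary parameter values), the idea that reward magnitude is gauged and predicted can be expressed as a temporal-difference update in which a cue's predicted value converges to the magnitude of the reward it predicts:

```python
# Illustrative TD(0)-style update: the predicted value v of a cue is
# driven toward the magnitude r of the reward that follows it.
def td_update(v, r, alpha=0.1):
    """One prediction-error update for a cue followed by reward r."""
    delta = r - v            # reward-prediction error
    return v + alpha * delta

# Two cues paired with different reward magnitudes converge to
# different predicted values, so magnitude is encoded in the prediction.
v_small, v_large = 0.0, 0.0
for _ in range(200):
    v_small = td_update(v_small, 1.0)   # cue predicting a small reward
    v_large = td_update(v_large, 5.0)   # cue predicting a large reward
```

After training, the two predictions differ in proportion to reward magnitude, analogous to neurons whose expectancy-related firing discriminates between reward sizes.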
In one of these models, in which glutamate serves as a reinforcing signal guiding the synaptic modifications necessary for adapting operant behavior, reward-related information is primarily processed by glutamatergic projection neurons of the orbitofrontal cortex (OBFc), basolateral amygdala, and related limbic areas.
Drug safety alerts were generated in one-third of orders and were frequently overridden. Duplicate-order alerts more often resulted in order cancellation (20%) than did alerts for overdose (11%) or drug-drug interactions (DDIs; 2%). DDI alerts were overridden most frequently, and only a small number of DDIs caused these overrides. Studies on the improvement of alert handling should focus on these frequently overridden DDIs.
The pathways from the hippocampal formation to the nucleus accumbens and the prefrontal cortex are likely to play a role in several aspects of learning and memory. In the present study we addressed the question of how plastic changes in these structures may occur simultaneously. This question can be studied in an appropriate way in the hippocampal/fornix-fimbria to prefrontal cortex/nucleus accumbens system, since electrical stimulation of the fornix-fimbria fibre bundle evokes characteristic field potentials in the two target areas simultaneously. First, we examined the termination field in the nucleus accumbens (medial shell and core region, with an extension into the ventro-medial caudate-putamen) and the prefrontal cortex (deeper layers of the ventral prelimbic and ventral infralimbic areas) by recording single unit activity evoked by stimulation of fornix-fimbria fibres in halothane anaesthetized rats. Second, we studied short-term plasticity, namely paired pulse facilitation, in these two areas upon stimulation of the fornix-fimbria fibres. In the nucleus accumbens, paired pulse facilitation was encountered for double pulse intervals between 25 and 500 ms, peaking around 100 ms. In the medial prefrontal cortex it was confined to intervals between 25 and 200 ms, with a peak around 75 ms. Third, we investigated whether long-term potentiation (LTP) could be elicited simultaneously in the two target structures by a single tetanic stimulation (50 Hz, 2 s) of the fornix-fimbria fibres. In the medial prefrontal cortex, LTP was sustained for more than 90 min and reached levels of 130% of control values. In the nucleus accumbens, however, only a transient form of potentiation was found, which lasted no more than 60 min. These data show that synaptic weights can be changed simultaneously, in a distributed way, in several target structures of the hippocampal formation.
To investigate the involvement of the hippocampal-accumbens system in goal-oriented displacement behaviors, hippocampal neuronal activity was recorded in rats learning and recalling new distributions of different volumes of liquid reward among the arms of a plus maze. Each arm had a reward box containing a water trough and identical visual cues that could be illuminated independently. As the water-restricted rat successively visited the respective boxes, it received 7, 5, and 3 drops of water, and then 1 drop, provided at 1-s intervals. (Reward distributions were reassigned daily and mid-session.) In the training phase, reward boxes were lit individually. In the recall phase, the lamps on all arms were lit and then turned off as the rat visited the boxes in order of descending value. Neuronal firing rates were analyzed for changes related to reward value or to shifts between learning and recall phases. The principal finding is that place responses remained unchanged after these manipulations and that these neurons showed no evidence of explicit coding of reward value. In addition, two other types of responses appeared while the rat was stationary at the reward boxes awaiting multiple rewards. These were observed primarily in neurons within the dentate gyrus, but also in CA1. Position-selective reward site responses were regular at 20-60 impulses per second, while position-independent discharges occurred in irregular bursts at about 5 impulses per second. Such responses could explain controversial reports of reward dependence in hippocampal neurons. The higher incidence of the latter responses in the temporal ("ventral") hippocampus is consistent with the distinctive anatomical and functional properties of this subregion.
It has been proposed that the striatum plays a crucial role in learning to select appropriate actions, optimizing rewards according to the principles of 'Actor-Critic' models of trial-and-error learning. The ventral striatum (VS), as Critic, would employ a temporal difference (TD) learning algorithm to predict rewards and drive dopaminergic neurons. This study examined this model's adequacy for VS responses to multiple rewards in rats. The respective arms of a plus-maze provided rewards of varying magnitudes; multiple rewards were provided at 1-s intervals while the rat stood still. Neurons discharged phasically prior to each reward, during both initial approach and immobile waiting, demonstrating that this signal is predictive and not simply motor-related. In different neurons, responses could be greater for early, middle or late droplets in the sequence. Strikingly, this activity often reappeared after the final reward, as if in anticipation of yet another. In contrast, previous TD learning models show decremental reward-prediction profiles during reward consumption due to a temporal-order signal introduced to reproduce accurate timing in dopaminergic reward-prediction error signals. To resolve this inconsistency in a biologically plausible manner, we adapted the TD learning model such that input information is nonhomogeneously distributed among different neurons. By suppressing reward temporal-order signals and varying richness of spatial and visual input information, the model reproduced the experimental data. This validates the feasibility of a TD-learning architecture where different groups of neurons participate in solving the task based on varied input information.
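The TD-learning architecture discussed above can be sketched in miniature. The following is an illustrative tabular TD(0) model of a trial discretized into time steps, with droplets delivered at fixed steps; all parameter values are arbitrary assumptions, not those of the study:

```python
import numpy as np

# Illustrative sketch (hypothetical parameters): tabular TD(0) over a
# trial of n_steps time steps, with unit rewards (droplets) delivered
# at the steps listed in reward_steps. Each state's value bootstraps
# from the next state's value, so learned predictions stay elevated
# across the droplet sequence.
def run_td(n_steps=10, reward_steps=(4, 6, 8), alpha=0.1, gamma=0.9,
           n_trials=500):
    v = np.zeros(n_steps + 1)          # value per time step (terminal = 0)
    for _ in range(n_trials):
        for t in range(n_steps):
            r = 1.0 if t in reward_steps else 0.0
            delta = r + gamma * v[t + 1] - v[t]   # TD error
            v[t] += alpha * delta
    return v
```

In this simple tabular version the prediction remains high throughout the reward sequence; the decremental profiles of earlier models arise from the explicit temporal-order signal that the adapted model suppresses.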