Increasing numbers of studies have explored human observers' ability to rapidly extract statistical descriptions from collections of similar items (e.g., the average size and orientation of a group of tilted Gabor patches). Determining whether these descriptions are generated by mechanisms that are independent from object-based sampling procedures requires that we investigate how internal noise, external noise, and sampling affect subjects' performance. Here we systematically manipulated the external variability of ensembles and used variance summation modeling to estimate both the internal noise and the number of samples that affected the representation of ensemble average size. The results suggest that humans sample many more than one or two items from an array when forming an estimate of the average size, and that the internal noise that affects ensemble processing is lower than the noise that affects the processing of single objects. These results are discussed in light of other recent modeling efforts and suggest that ensemble processing of average size relies on a mechanism that is distinct from segmenting individual items. This ensemble process may be more similar to texture processing.
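The variance summation logic behind this approach can be illustrated with a small simulation (a sketch under assumed parameter values, not the authors' actual model code): the variance of an observed ensemble-average estimate should equal the internal noise variance plus the external variance divided by the number of items sampled.

```python
import random
import statistics

def predicted_variance(sigma_int, sigma_ext, n_samples):
    """Variance-summation prediction: internal variance plus
    external variance reduced by the number of items averaged."""
    return sigma_int ** 2 + sigma_ext ** 2 / n_samples

def simulate_average_estimates(mean_size, sigma_int, sigma_ext,
                               n_samples, n_trials, seed=0):
    """Simulate noisy ensemble-average estimates across trials."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        # Draw n_samples item sizes from the external distribution.
        items = [rng.gauss(mean_size, sigma_ext) for _ in range(n_samples)]
        # Average them, then add internal (late) noise.
        estimates.append(statistics.mean(items) + rng.gauss(0.0, sigma_int))
    return estimates

# Illustrative parameters (assumptions, not values from the study).
est = simulate_average_estimates(mean_size=50.0, sigma_int=2.0,
                                 sigma_ext=8.0, n_samples=4,
                                 n_trials=200_000)
empirical = statistics.variance(est)
predicted = predicted_variance(2.0, 8.0, 4)  # 2^2 + 8^2 / 4 = 20
```

Fitting `predicted_variance` to thresholds measured at several levels of external variability is what lets the two unknowns (internal noise and effective sample size) be estimated separately.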
Visual environments often contain multiple elements, some of which are similar to one another or spatially grouped together. In the current study we investigated how perceptual groups can be used to represent the ensemble features of those groups. In experiment 1 we found that participants' performance improved when items were easily segmented by a grouping cue based on proximity, suggesting that spatial grouping facilitates extracting and remembering ensemble representations from visual arrays consisting of multiple elements. In experiment 2 we found that spatial grouping improved performance only when the grouped subsets were tested for the memory task, whereas it impaired performance when other subsets that were not grouped were tested, suggesting that the benefit from grouping may come from better extraction for storage, rather than later decision processes such as accessibility. Taken together, our results suggest that perceptual grouping of multiple items by proximity facilitates extraction of ensemble statistics from groups of items, enhancing visual memory of the ensembles in a visual array.
The present study investigated whether computation of mean object size was based on perceived or physical size. The Ebbinghaus illusion was used to make the perceived size of a circle different from its physical size. Four Ebbinghaus configurations were presented either simultaneously (Experiment 1) or sequentially (Experiment 2) to each visual field, and participants were instructed to attend only to the central circles of each configuration. Participants' judgments of mean central circle size were influenced by the Ebbinghaus illusion. In addition, the Ebbinghaus illusion influenced the coding of individual size rather than the averaging. These results suggest that perceived rather than physical size was used in computing the mean size.
Facial expression and eye gaze provide a shared signal about threats. While a fear expression with averted gaze clearly points to the source of threat, direct-gaze fear renders the source of threat ambiguous. Separable routes have been proposed to mediate these processes, with preferential attunement of the magnocellular (M) pathway to clear threat, and of the parvocellular (P) pathway to threat ambiguity. Here we investigated how observers’ trait anxiety modulates M- and P-pathway processing of clear and ambiguous threat cues. We scanned subjects (N = 108) widely ranging in trait anxiety while they viewed fearful or neutral faces with averted or directed gaze, with the luminance and color of face stimuli calibrated to selectively engage M- or P-pathways. Higher anxiety facilitated processing of clear threat projected to M-pathway, but impaired perception of ambiguous threat projected to P-pathway. Increased right amygdala reactivity was associated with higher anxiety for M-biased averted-gaze fear, while increased left amygdala reactivity was associated with higher anxiety for P-biased, direct-gaze fear. This lateralization was more pronounced with higher anxiety. Our findings suggest that trait anxiety differentially affects perception of clear (averted-gaze fear) and ambiguous (direct-gaze fear) facial threat cues via selective engagement of M and P pathways and lateralized amygdala reactivity.
We address the challenges of how to model human perceptual grouping in random dot arrays and how perceptual grouping affects human number estimation in these arrays. We introduce a modeling approach relying on a modified k-means clustering algorithm to formally describe human observers' grouping behavior. We found that a default grouping window size of approximately 4° of visual angle describes human grouping judgments across a range of random dot arrays (i.e., items within 4° are grouped together). This window size was highly consistent across observers and images, and was also stable across stimulus durations, suggesting that the k-means model captured a robust signature of perceptual grouping. Further, the k-means model outperformed other models (e.g., CODE) at describing human grouping behavior. Next, we found that the more the dots in a display are clustered together, the more human observers tend to underestimate the numerosity of the dots. We demonstrate that this effect is independent of density, and the modified k-means model can predict human observers' numerosity judgments and underestimation. Finally, we explored the robustness of the relationship between clustering and dot number underestimation and found that the effects of clustering remain, but are greatly reduced, when participants receive feedback on every trial. Together, this work suggests some promising avenues for formal models of human grouping behavior, and it highlights the importance of a 4° window of perceptual grouping. Lastly, it reveals a robust, somewhat plastic, relationship between perceptual grouping and number estimation.
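The paper's modified k-means model is not reproduced here, but its key signature (a fixed grouping window of roughly 4° of visual angle) can be sketched with a simple greedy proximity-clustering routine; the merge rule and the example dot coordinates below are illustrative assumptions, not the authors' algorithm.

```python
import math

def cluster_by_window(points, window=4.0):
    """Greedy proximity clustering: each dot joins the nearest existing
    cluster whose centroid lies within `window` (here, degrees of visual
    angle); otherwise it starts a new cluster."""
    clusters = []   # each cluster is a list of (x, y) points
    centroids = []  # running centroid of each cluster
    for p in points:
        best_i, best_d = None, float("inf")
        for i, c in enumerate(centroids):
            d = math.dist(p, c)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None and best_d <= window:
            clusters[best_i].append(p)
            xs, ys = zip(*clusters[best_i])
            centroids[best_i] = (sum(xs) / len(xs), sum(ys) / len(ys))
        else:
            clusters.append([p])
            centroids.append(p)
    return clusters

# Three dots near the origin and two far away: two perceptual groups.
dots = [(0, 0), (1, 0), (0, 1), (20, 20), (21, 20)]
groups = cluster_by_window(dots)
```

Under the underestimation result described above, a display whose dots collapse into fewer clusters (more grouping) would be predicted to yield lower numerosity estimates at a fixed dot count.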
We recently showed that visuomotor adaptation acquired under attentional distraction is better recalled under a similar level of distraction compared to no distraction. This paradoxical effect suggests that attentional state (e.g., divided or undivided) is encoded as an internal context during visuomotor learning and should be reinstated for successful recall (Song & Bédard, 2015). To investigate if there is a critical temporal window for encoding attentional state in visuomotor memory, we manipulated whether participants performed the secondary attention-demanding task concurrently in the early or late phase of visuomotor learning. Recall performance was enhanced when the attentional states between recall and the early phase of visuomotor learning were consistent. However, it reverted to untrained levels when tested under the attentional state of the late-phase learning. This suggests that attentional state is primarily encoded during the early phase of learning before motor errors decrease and reach an asymptote. Furthermore, we demonstrate that when divided and undivided attentional states were mixed during visuomotor adaptation, only divided attention was encoded as an internal cue for memory retrieval. Therefore, a single attentional state appears to be primarily integrated with visuomotor memory while motor error reduction is in progress during learning.
A simple and popular psychophysical model, usually described as overlapping Gaussian tuning curves arranged along an ordered internal scale, is capable of accurately describing both human and nonhuman behavioral performance and neural coding in magnitude estimation, production, and reproduction tasks for most psychological dimensions (e.g., time, space, number, or brightness). This model traditionally includes two parameters that determine how a physical stimulus is transformed into a psychological magnitude: (1) an exponent that describes the compression or expansion of the physical signal into the relevant psychological scale (β), and (2) an estimate of the amount of inherent variability (often called internal noise) in the Gaussian activations along the psychological scale (σ). To date, linear slopes on log-log plots have been used to estimate β, and a completely separate method of averaging coefficients of variation has been used to estimate σ. We provide a respectful, yet critical, review of these traditional methods, and offer a tutorial on maximum-likelihood estimation (MLE) and a Bayesian estimation method for estimating both β and σ [PsiMLE(β,σ)], coupled with free software that researchers can use to implement it without a background in MLE or Bayesian statistics (R-PsiMLE). We demonstrate the validity, reliability, efficiency, and flexibility of this method through a series of simulations and behavioral experiments, and find the new method to be superior to the traditional methods in all respects.
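The two traditional estimators that the abstract contrasts with PsiMLE can be sketched directly (a simulation under assumed scalar-variability data, not the R-PsiMLE implementation): β from the slope of log responses against log stimuli, and σ from the averaged coefficient of variation.

```python
import math
import random
import statistics

def estimate_beta_loglog(stimuli, responses):
    """Traditional beta estimate: least-squares slope of
    log(response) on log(stimulus)."""
    lx = [math.log(s) for s in stimuli]
    ly = [math.log(r) for r in responses]
    mx, my = statistics.mean(lx), statistics.mean(ly)
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

def estimate_sigma_cv(responses_by_level):
    """Traditional sigma estimate: coefficient of variation
    (sd / mean) at each stimulus level, averaged across levels."""
    return statistics.mean(
        statistics.stdev(r) / statistics.mean(r)
        for r in responses_by_level.values()
    )

# Simulate magnitude estimates with known beta and sigma
# (illustrative values; noise sd scales with the mean response).
rng = random.Random(1)
beta_true, sigma_true = 0.8, 0.15
levels = [4, 8, 16, 32, 64]
stimuli, responses = [], []
responses_by_level = {s: [] for s in levels}
for s in levels:
    for _ in range(500):
        mean_resp = s ** beta_true
        r = rng.gauss(mean_resp, sigma_true * mean_resp)
        stimuli.append(s)
        responses.append(r)
        responses_by_level[s].append(r)

beta_hat = estimate_beta_loglog(stimuli, responses)
sigma_hat = estimate_sigma_cv(responses_by_level)
```

A maximum-likelihood approach instead estimates β and σ jointly from the full likelihood of the responses, which is what the paper argues makes PsiMLE more efficient than these two separate procedures.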
Fearful faces convey threat cues whose meaning is contextualized by eye gaze: While averted gaze is congruent with facial fear (both signal avoidance), direct gaze (an approach signal) is incongruent with it. We have previously shown using fMRI that the amygdala is engaged more strongly by fear with averted gaze during brief exposures. However, the amygdala also responds more to fear with direct gaze during longer exposures. Here we examined previously unexplored brain oscillatory responses to characterize the neurodynamics and connectivity during brief (~250 ms) and longer (~883 ms) exposures of fearful faces with direct or averted eye gaze. We performed two experiments: one replicating the exposure time by gaze direction interaction in fMRI (N = 23), and another where we confirmed greater early phase locking to averted-gaze fear (congruent threat signal) with MEG (N = 60) in a network of face processing regions, regardless of exposure duration. Phase locking to direct-gaze fear (incongruent threat signal) then increased significantly for brief exposures at ~350 ms, and at ~700 ms for longer exposures. Our results characterize the stages of congruent and incongruent facial threat signal processing and show that stimulus exposure strongly affects the onset and duration of these stages.