Centre-surround receptive field organization is a ubiquitous property in mammalian visual systems, presumably tailored for extracting image features that are differentially distributed over space. In visual motion, this is evident as antagonistic interactions between centre and surround regions of the receptive fields of many direction-selective neurons in visual cortex. In a series of psychophysical experiments we make the counterintuitive observation that increasing the size of a high-contrast moving pattern renders its direction of motion more difficult to perceive and reduces its effectiveness as an adaptation stimulus. We propose that this is a perceptual correlate of centre-surround antagonism, possibly within a population of neurons in the middle temporal visual area. The spatial antagonism of motion signals observed at high contrast gives way to spatial summation as contrast decreases. Evidently, integration of motion signals over space depends crucially on the visibility of those signals, thereby allowing the visual system to register motion information efficiently and adaptively.
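The size-dependent suppression described above is commonly modeled with a difference-of-Gaussians receptive field, in which a narrow excitatory centre is opposed by a broad inhibitory surround whose weight grows with stimulus contrast. A minimal sketch of that idea (the gains and widths below are illustrative assumptions, not fitted values from this study):

```python
import math

def size_tuning(radius, c_gain=1.0, s_gain=0.8, c_sigma=1.0, s_sigma=3.0):
    """Response of a difference-of-Gaussians unit to a disc of given radius.

    Each term is a Gaussian sensitivity profile integrated out to `radius`:
    centre excitation minus surround inhibition. All parameters are
    hypothetical, chosen only to illustrate the qualitative effect."""
    centre = c_gain * (1 - math.exp(-(radius / c_sigma) ** 2))
    surround = s_gain * (1 - math.exp(-(radius / s_sigma) ** 2))
    return centre - surround

# High contrast: strong surround weight -> response declines as size grows
# (a perceptual correlate of the impaired direction discrimination reported).
high = [size_tuning(r, s_gain=0.8) for r in (1, 3, 8)]

# Low contrast: weak surround weight -> response still sums over larger sizes.
low = [size_tuning(r, s_gain=0.2) for r in (1, 3, 8)]
```

With the strong surround, the simulated response falls monotonically across the three disc sizes, whereas with the weak surround it continues to grow from the smallest to the intermediate size, mirroring the contrast-dependent switch from spatial suppression to spatial summation.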
When conflicting images are presented to corresponding regions of the two eyes, only one image may be consciously perceived. In binocular rivalry (BR), two images alternate in phenomenal visibility; even a salient image is eventually suppressed by an image of low saliency. Recently, N. Tsuchiya and C. Koch (2005) reported a technique called continuous flash suppression (CFS), extending the suppression duration more than 10-fold. Here, we investigated the depth of this prolonged form of interocular suppression as well as conventional BR and flash suppression (FS) using a probe detection task. Compared to the monocular viewing condition, CFS elevated detection thresholds more than 20-fold, whereas BR did so by 3-fold. In subsequent experiments, we dissected CFS into several components. By manipulating the number and timing of flashes with respect to the probe, we found that the stronger suppression in CFS is not due to summation between BR and FS but is caused by the summation of the suppression due to multiple flashes. Our results support the view that CFS is not a stronger version of BR but is due to the accumulated suppressive effects of multiple flashes.
When the visual system is faced with conflicting or ambiguous stimulus information, visual perception fluctuates over time. We found that perceptual alternations are slowed when inducing stimuli move within the visual field, constantly engaging fresh, unadapted neural tissue. During binocular rivalry, dominance durations were longer when rival figures moved compared to when they were stationary, yielding lower alternation rates. Rate was not reduced, however, when observers tracked the moving targets, keeping the images on approximately the same retinal area. Alternations were reliably triggered when rival targets passed through a local region of the visual field preadapted to one of the rival targets. During viewing of a kinetic globe whose direction of rotation was ambiguous, observers experienced fewer alternations in perceived direction when the globe moved around the visual field or when the globe's axis of rotation changed continuously. Evidently, local neural adaptation is a key ingredient in the instability of perception.
When the senses deliver conflicting information, vision dominates spatial processing, and audition dominates temporal processing. We asked whether this sensory specialization results in cross-modal encoding of unisensory input into the task-appropriate modality. Specifically, we investigated whether visually portrayed temporal structure receives automatic, obligatory encoding in the auditory domain. In three experiments, observers judged whether the changes in two successive visual sequences followed the same or different rhythms. We assessed temporal representations by measuring the extent to which both task-irrelevant auditory information and task-irrelevant visual information interfered with rhythm discrimination. Incongruent auditory information significantly disrupted task performance, particularly when presented during encoding; by contrast, varying the nature of the rhythm-depicting visual changes had minimal impact on performance. Evidently, the perceptual system automatically and obligatorily abstracts temporal structure from its visual form and represents this structure using an auditory code, resulting in the experience of "hearing visual rhythms."
Afterimage formation, historically attributed to retinal mechanisms, may also involve postretinal processes. Consistent with this notion are results from experiments, reported here, investigating the interaction between binocular rivalry and negative afterimages (AIs). In Experiment 1, one eye was exposed to a grating never consciously experienced by the observer because this grating remained suppressed in rivalry throughout induction (the exclusively dominant stimulus was designed to preclude formation of an AI). As expected, the suppressed grating generated a vivid AI whose orientation could be accurately identified; not surprisingly, the strength of this AI varied with induction contrast. Experiment 2 revealed, however, that the AI produced during suppression was significantly weaker than the AI produced by that same stimulus when it was visible throughout the entire induction period, implying that some component of AI induction is susceptible to interocular suppression. In Experiment 3, AIs of dichoptic, orthogonally oriented gratings were induced in a way ensuring that one of the two gratings was exclusively dominant during the induction period. Dissimilar monocular AIs engaged in rivalry, as expected, but, surprisingly, the AI induced by the suppressed grating initially dominated. We offer two alternative accounts of this counterintuitive finding, both based on differential neural adaptation.
Visual perception, and by implication underlying neural events, can become unstable when optical information specifying objects is ambiguous. Here we report that one striking form of instability, perceived three-dimensional structure-from-motion (SFM), can be stabilized when an otherwise ambiguous object appears within a context implying frictional interactions with another rotating object; violations of physical conditions specifying friction disrupt stabilization. Evidently, information about frictional interaction is embedded within neural mechanisms specifying SFM.
Temporal information promotes visual grouping of local image features into global spatial form. However, experiments demonstrating time-based grouping typically confound two potential sources of information: temporal synchrony (precise timing of changes) and temporal structure (pattern of changes over time). Here we show that observers prefer temporal structure for determining perceptual organization. That is, human vision groups elements that change according to the same global pattern, even if the changes themselves are not synchronous. This finding prompts an important, testable prediction concerning the neural mechanisms of binding: Patterns of neural spiking over time may be more important than absolute spike synchrony.