Visual backward masking is not only an empirically rich and theoretically interesting phenomenon but has also found increasing application as a powerful methodological tool in studies of visual information processing and as a useful instrument for investigating visual function in a variety of specific subject populations. Since the dual-channel, sustained-transient approach to visual masking was introduced about two decades ago, several new models of backward masking and metacontrast have been proposed as alternative approaches to visual masking. In this article, we outline, review, and evaluate three such approaches: an extension of the dual-channel approach as realized in the neural network model of retino-cortical dynamics (Ogmen, 1993), the perceptual retouch theory (Bachmann, 1984, 1994), and the boundary contour system (Francis, 1997; Grossberg & Mingolla, 1985b). Recent psychophysical and electrophysiological findings relevant to backward masking are reviewed and, whenever possible, are related to the aforementioned models. Besides noting the positive aspects of these models, we also list their problems and suggest changes that may improve them and experiments that can empirically test them.

Visual masking occurs whenever the visibility of one stimulus, called the target, is reduced by the presence of another stimulus, designated as the mask. Visual masking has been, and continues to be, a powerful psychophysical tool for investigating the steady-state properties of spatial-processing mechanisms.
NATURE | VOL 396 | 3 DECEMBER 1998 | www.nature.com

that of the strobed segment (d_s) remains constant. The latency-difference hypothesis therefore predicts that the observed spatial lead of the moving central segment should increase.

To test this prediction, we measured the spatial lead of the moving central segment as a function of the detectability of the central segment while keeping the detectability of the strobed segments constant. Here we use detectability to refer to the number of log units of luminance (Lu) above the detection threshold; detectability of the strobed segments was 0.3 Lu for subjects S.S.P. and G.P., and 0.5 Lu for T.L.N. The temporal lead of the moving central segment averaged across subjects increases systematically from 20 to 70 ms when its detectability increases by 1.0 Lu (Fig. 1b).

Increasing the luminance of the strobed segments while keeping that of the moving central segment constant should decrease d_s, while d_m remains constant. The latency-difference hypothesis predicts that the observed spatial lead of the moving central segment should decrease and, if the luminance of the strobed segments is high enough, the moving central segment should be perceived to lag behind spatially. We tested this prediction by measuring spatial lead as a function of the detectability of the strobed segments, while keeping the detectability of the moving central segment constant (1.5 Lu above the detection threshold for subjects G.P. and T.L.N., and 0.8 Lu for S.S.P.). The observed temporal lead of the moving central segment averaged across subjects decreases systematically from 80 to −30 ms as the detectability of the strobed segments increases by 1.5 to 2.0 Lu (Fig. 1c).

These results support predictions of the latency-difference hypothesis and show that the motion-extrapolation mechanism does not compensate for stimulus-dependent variations in latency.
Indeed, theoretical calculations show that the putative motion-extrapolation mechanism must be undercompensating by at least 120 ms to account for the data in Fig. 1. But a motion-extrapolation mechanism that does not adequately compensate for variations in visual latency would not appreciably improve the accuracy of real-time visually guided behaviour.
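The latency-difference account reduces to simple arithmetic: a stimulus processed with latency d is perceived where it was d seconds ago, so a moving segment with latency d_m appears to lead a strobed reference with latency d_s by v·(d_s − d_m). A minimal sketch of that relation (all numerical values below are illustrative, not the paper's data):

```python
def perceived_spatial_lead(speed_deg_per_s, d_strobed_s, d_moving_s):
    """Spatial lead (deg) of a moving segment over a strobed one
    under the latency-difference hypothesis: v * (d_s - d_m).
    A negative value means the moving segment appears to lag."""
    return speed_deg_per_s * (d_strobed_s - d_moving_s)

# Illustrative numbers: a 10 deg/s target whose latency is 50 ms
# shorter than the strobed reference appears ~0.5 deg ahead.
lead = perceived_spatial_lead(10.0, 0.120, 0.070)

# Raising the strobed segments' luminance shortens d_s, shrinking
# the lead and eventually reversing its sign (a perceived lag),
# as in Fig. 1c.
lag = perceived_spatial_lead(10.0, 0.040, 0.070)
```

Raising a stimulus's luminance shortens its latency, which is why varying detectability of either segment moves the perceived lead in opposite directions under this hypothesis.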
How features are attributed to objects is one of the most puzzling issues in the neurosciences. A deeply entrenched view is that features are perceived at the locations where they are presented. Here, we show that features in motion displays can be systematically attributed from one location to another even though the elements possessing the features are invisible. Furthermore, features can be integrated across locations. Feature mislocalizations are usually treated as errors and limits of the visual system. On the contrary, we show that the nonretinotopic feature attributions reported herein precisely follow rules of grouping, suggesting that they reflect a fundamental computational strategy rather than errors of visual processing.
In human vision, the optics of the eye map neighboring points of the environment onto neighboring photoreceptors in the retina. This retinotopic encoding principle is preserved in the early visual areas. Under normal viewing conditions, due to the motion of objects and to eye movements, the retinotopic representation of the environment undergoes fast and drastic shifts. Yet, perceptually, our environment appears stable, suggesting the existence of non-retinotopic representations in addition to the well-known retinotopic ones. Here, we present a simple psychophysical test to determine whether a given visual process is accomplished in retinotopic or non-retinotopic coordinates. As examples, we show that visual search and motion perception can occur within a non-retinotopic frame of reference. These findings suggest that more mechanisms than previously thought operate non-retinotopically. Whether this is true for a given visual process can easily be determined with our "litmus test."
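The distinction between the two coordinate frames can be made concrete with a toy transform (a sketch under our own simplifying assumptions, not the paper's stimulus): a retinotopic code locates each feature relative to current gaze, so it shifts with every eye movement, whereas a non-retinotopic (here, object-centered) code does not.

```python
def retinotopic(world_pos, gaze_pos):
    """Retinal coordinate of a feature: its world position
    relative to where the eye is currently pointing (in deg)."""
    return world_pos - gaze_pos

def object_centered(world_pos, object_pos):
    """Non-retinotopic coordinate: position relative to the
    moving object (or perceptual group) the feature belongs to."""
    return world_pos - object_pos

# A feature fixed at world position 5 deg, on an object at 5 deg.
# The eye jumps from 0 to 3 deg (a saccade):
before = retinotopic(5.0, 0.0)      # retinal coordinate before
after = retinotopic(5.0, 3.0)       # shifts with the eye
stable = object_centered(5.0, 5.0)  # unchanged by the saccade

# A process whose output tracks the retinal coordinate across the
# saccade is retinotopic; one that tracks the object-centered
# coordinate is non-retinotopic.
```

This is the logic the psychophysical test exploits: manipulate the retinotopic and non-retinotopic coordinates of a stimulus independently and see which one the visual process under study follows.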
A metacontrast mask suppresses the visibility of the target without influencing the reaction time (RT) to it. We investigated whether this dissociation results from a sensori-motor pathway immune to masking effects or from the characteristics of stimulus timing in mutually inhibitory sustained and transient channels. For target visibility, para- and metacontrast yielded the usual U-shaped functions. Peak paracontrast masking occurred at stimulus onset asynchronies (SOAs) of −150 to −100 ms. RTs were relatively short for metacontrast and did not show a systematic change as a function of SOA. The RT contribution from contour masking was greatest at an SOA of −150 ms (paracontrast) and declined to near zero in the metacontrast regime. The dissociation between visibility and RT seen in metacontrast did not occur in paracontrast, rejecting the theory that RTs are elicited by a single sensori-motor pathway immune to masking. The dependence of the dissociation on stimulus timing can be explained by RECOD, a dual-pathway model wherein fast transient and slow sustained activities interact.
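The timing logic behind such dual-channel accounts can be caricatured in a few lines (a deliberately toy sketch, not the RECOD model itself; the latencies, width, and Gaussian overlap rule are our own illustrative assumptions). The mask's fast transient response suppresses the target's slow sustained response most strongly when the two coincide in time, which happens at an intermediate positive SOA, yielding the U-shaped metacontrast function.

```python
import math

# Illustrative parameters (not fitted to any data):
SUSTAINED_LATENCY = 0.150  # s, slow target-related channel
TRANSIENT_LATENCY = 0.050  # s, fast mask-related channel
WIDTH = 0.060              # s, temporal spread of the interaction

def suppression(soa):
    """Toy metacontrast strength at a given SOA (mask onset minus
    target onset, in s): Gaussian overlap between the mask's
    transient response and the target's sustained response.
    Peaks where soa + TRANSIENT_LATENCY == SUSTAINED_LATENCY."""
    dt = (soa + TRANSIENT_LATENCY) - SUSTAINED_LATENCY
    return math.exp(-(dt ** 2) / (2 * WIDTH ** 2))

# Masking is strongest at an intermediate SOA (~100 ms with these
# parameters) and weaker at SOA 0 and at long SOAs -- the classic
# U-shaped (type B) metacontrast curve.
for soa in (0.0, 0.100, 0.300):
    print(f"SOA {soa * 1000:4.0f} ms: suppression {suppression(soa):.3f}")
```

On this caricature, RT escapes the U-shaped suppression if it is driven by the target's own fast transient activity, which has already fired by the time the mask's transient arrives.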
Recent psychophysical studies have been interpreted to indicate that the perception of motion temporally either lags or is synchronous with the perception of color. These results appear to be at odds with neurophysiological data, which show that the average response-onset latency is shorter in the cortical areas responsible for motion (e.g., MT and MST) than for color processing (e.g., V4). The purpose of this study was to compare the perceptual asynchrony between motion and color on two psychophysical tasks. In the color correspondence task, observers indicated the predominant color of an 18 × 18 deg field of colored dots when they moved in a specific direction. On each trial, the dots periodically changed color from red to green and moved cyclically at 15, 30, or 60 deg/s in two directions separated by 180, 135, 90, or 45 deg. In the temporal order judgment task, observers indicated whether a change in color occurred before or after a change in motion, within a single cycle of the moving-dot stimulus. In the color correspondence task, we found that the perceptual asynchrony between color and motion depends on the difference in directions within the motion cycle, but does not depend on the dot velocity. In the temporal order judgment task, the perceptual asynchrony is substantially shorter than for the color correspondence task, and does not depend on the change in motion direction or the dot velocity. These findings suggest that it is inappropriate to interpret previous psychophysical results as evidence that motion perception generally lags color perception. We discuss our data in the context of a "two-stage sustained-transient" functional model for the processing of various perceptual attributes.