In this paper, we propose a novel model for computational color constancy, inspired by the remarkable ability of the human visual system (HVS) to perceive the color of objects as largely constant while the color of the light source changes. The proposed model imitates the color-processing mechanisms at specific levels of the retina, the first stage of the HVS, from the adaptation emerging in the layers of cone photoreceptors and horizontal cells (HCs) to the color-opponent mechanism and the disinhibition effect of the non-classical receptive field in the layer of retinal ganglion cells (RGCs). In particular, HC modulation provides a global color correction with cone-specific lateral gain control, and the subsequent RGCs refine the processing with iterative adaptation until all three opponent channels reach their stable states (i.e., produce stable outputs). Instead of explicitly estimating the scene illuminant(s), as most existing algorithms do, our model directly removes the effect of the scene illuminant(s). Evaluations on four commonly used color constancy data sets show that the proposed model produces competitive results in comparison with state-of-the-art methods for scenes under either single or multiple illuminants. The results indicate that single opponency, especially the disinhibitory effect emerging in the subunit-structured receptive-field surround of RGCs, plays an important role in removing the scene illuminant(s) by inherently distinguishing the spatial structures of surfaces from spatially extensive illuminant(s).
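The HC stage described above, a global color correction via cone-specific lateral gain control, is in the spirit of a von Kries-style per-channel rescaling. Below is a minimal sketch under that assumption, not the authors' full model; the function name `hc_gain_correct` and the gray-world-style gain rule are illustrative:

```python
import numpy as np

def hc_gain_correct(image):
    """Von Kries-style per-channel gain control: each cone class (here, an
    R/G/B channel) is rescaled by a lateral gain inversely proportional to
    the channel's global mean, pulling the scene average toward gray.
    `image` is an HxWx3 float array in [0, 1]."""
    means = image.reshape(-1, 3).mean(axis=0)        # per-channel mean response
    gains = means.mean() / np.maximum(means, 1e-6)   # cone-specific gains
    return np.clip(image * gains, 0.0, 1.0)

# Example: a mid-gray scene under a reddish illuminant is rebalanced.
scene = np.clip(np.full((4, 4, 3), 0.5) * np.array([1.2, 1.0, 0.8]), 0.0, 1.0)
corrected = hc_gain_correct(scene)
```

The iterative RGC refinement over the three opponent channels is omitted here; the sketch covers only the global HC stage of the pipeline.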
The limited dynamic range of regular screens restricts the display of high dynamic range (HDR) images. Inspired by retinal processing mechanisms, we propose a tone mapping method to address this problem. In the retina, horizontal cells (HCs) adaptively adjust their receptive field (RF) size based on the local stimuli to regulate the visual signals absorbed by photoreceptors. Using this adaptive mechanism, the proposed method compresses the dynamic range locally in different regions and avoids halo artifacts around edges of high luminance contrast. Moreover, the proposed method introduces the center-surround antagonistic RF structure of bipolar cells (BCs) to enhance local contrast and details. Extensive experiments show that the proposed method performs robustly on a wide variety of images, providing competitive results against state-of-the-art methods in terms of visual inspection, objective metrics, and observer scores.
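The two retinal stages described above, an HC-like local average that compresses the dynamic range and a BC-like center-surround signal that preserves detail, can be sketched roughly as a base/detail decomposition in the log-luminance domain. This is a simplified stand-in: it assumes a fixed-size Gaussian surround rather than the paper's adaptive RF (a fixed surround can itself produce the halos the adaptive mechanism is designed to avoid), and all names and parameters are illustrative:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (edge-padded) along both axes of a 2-D array."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def tone_map(luminance, alpha=0.7, detail_gain=1.0, sigma=4.0):
    """Compress the base (low-frequency) layer in the log domain while
    keeping the center-surround detail layer; detail_gain > 1 would further
    boost local contrast, in the spirit of the BC stage."""
    log_l = np.log1p(luminance)
    base = gaussian_blur(log_l, sigma)   # HC-like local average (fixed RF here)
    detail = log_l - base                # BC-like center-surround signal
    return np.clip(np.expm1(alpha * base + detail_gain * detail), 0.0, None)

# Example: a two-band image with a 1000:1 range; the mapped range is compressed.
hdr = np.ones((32, 32))
hdr[:, 16:] = 1000.0
out = tone_map(hdr)
```

In the paper's method the surround size adapts to the local stimulus; here `sigma` is fixed, which is the main simplification of this sketch.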
The mammalian retina appears to be far smarter than scientists have long believed. Inspired by the visual processing mechanisms in the retina, from the layer of photoreceptors to the layer of retinal ganglion cells (RGCs), we propose a computational model for haze removal from a single input image, an important issue in the field of image enhancement. In particular, the bipolar cells serve to roughly remove the low-frequency components of haze, and the amacrine cells modulate the output of cone bipolar cells to compensate for the loss of detail by increasing the image contrast. The RGCs with a disinhibitory receptive-field surround then refine the local haze removal as well as the image detail enhancement. Results on a variety of real-world and synthetic hazy images show that the proposed model yields results comparable to or even better than those of state-of-the-art methods, with the advantage of simultaneously dehazing and enhancing a single hazy image with a simple and straightforward implementation.
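The two steps named in the abstract, removing the low-frequency haze component and then restoring detail by boosting contrast, can be sketched as follows on a single-channel image. This is an illustrative stand-in, not the authors' model: a wide Gaussian surround plays the role of the bipolar-cell haze estimate, unsharp-masking plays the role of the amacrine/RGC contrast compensation, and all names and parameters are assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (edge-padded) along both axes of a 2-D array."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def dehaze(gray, haze_strength=0.8, detail_gain=1.8, sigma=6.0):
    """Subtract part of a wide-surround (low-frequency) haze estimate,
    then amplify the fine center-surround detail to restore contrast.
    `gray` is a 2-D float array in [0, 1]."""
    veil = gaussian_blur(gray, sigma)                      # low-frequency haze veil
    dehazed = np.clip(gray - haze_strength * veil, 0.0, 1.0)
    detail = dehazed - gaussian_blur(dehazed, sigma=1.5)   # fine structure
    return np.clip(dehazed + detail_gain * detail, 0.0, 1.0)

# Example: bright, low-contrast stripes stand in for a hazy scene.
hazy = np.where(np.arange(32) % 2 == 0, 0.6, 0.8) * np.ones((32, 32))
out = dehaze(hazy)
```

The refinement by RGCs with a disinhibitory surround is not modeled here; the sketch only illustrates the veil-subtraction and detail-compensation steps.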