Contour extracting networks in early extrastriate cortex
Neurons in the visual cortex process a local region of visual space, but in order to adequately analyze natural images, neurons need to interact. The notion of an "association field" proposes that neurons interact to extract extended contours. Here, we identify the site and properties of contour integration mechanisms. We used functional magnetic resonance imaging (fMRI) and population receptive field (pRF) analyses. We devised pRF mapping stimuli consisting of contours. We isolated the contribution of contour integration mechanisms to the pRF by manipulating the contour content. This stimulus manipulation led to systematic changes in pRF size. Whereas a bank of Gabor filters quantitatively explains pRF size changes in V1, only V2/V3 pRF sizes match the predictions of the association field. pRF size changes in later visual field maps, hV4, LO-1, and LO-2 do not follow either prediction and are probably driven by distinct classical receptive field properties or other extraclassical integration mechanisms. These pRF changes do not follow conventional fMRI signal strength measures. Therefore, analyses of pRF changes provide a novel computational neuroimaging approach to investigating neural interactions. We interpreted these results as evidence for neural interactions along co-oriented, cocircular receptive fields in the early extrastriate visual cortex (V2/V3), consistent with the notion of a contour association field.
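The intuition behind the filter-bank prediction can be illustrated with a toy one-dimensional model (an illustrative sketch under simplifying assumptions, not the paper's actual analysis): pooling the Gaussian envelopes of several Gabor-like receptive fields positioned along a contour yields an aggregate pRF that is wider than any individual filter.

```python
import numpy as np

def gabor_envelope(x, center, sigma):
    # Gaussian envelope of a Gabor filter's receptive field (1-D slice)
    return np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def aggregate_prf_sigma(centers, sigma):
    # Treat the pRF as the sum of individual envelopes and estimate its
    # width from the second moment of the normalized aggregate profile.
    x = np.linspace(-10, 10, 2001)
    prf = sum(gabor_envelope(x, c, sigma) for c in centers)
    prf /= prf.sum()
    mean = (x * prf).sum()
    return np.sqrt(((x - mean) ** 2 * prf).sum())

# A single filter recovers its own sigma; pooling filters spaced along
# a contour broadens the estimated pRF.
single = aggregate_prf_sigma([0.0], sigma=1.0)
pooled = aggregate_prf_sigma([-2.0, 0.0, 2.0], sigma=1.0)
```

The spacing, sigma, and equal weighting here are arbitrary choices for the demonstration; the point is only that spatial pooling along a contour systematically inflates measured pRF size.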
Relative contributions to vergence eye movements of two binocular cues for motion-in-depth
When we track an object moving in depth, our eyes rotate in opposite directions. This type of "disjunctive" eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that under our conditions IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur coinciding with the presentation of IOVD stimuli were likely not a response to stimulus motion, but a phoria initiated by the absence of a disparity signal.
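The slope-based response measure can be sketched in a few lines (a minimal illustration, assuming the standard definition of horizontal vergence as the difference between the two eyes' horizontal positions; the function names and synthetic data are hypothetical, not from the study):

```python
import numpy as np

def vergence_trace(left_eye_x, right_eye_x):
    # Horizontal vergence: difference between left- and right-eye
    # horizontal positions (increases as the eyes converge).
    return np.asarray(left_eye_x) - np.asarray(right_eye_x)

def vergence_slope(t, vergence):
    # Least-squares slope of the vergence trace (e.g. deg/s), the kind
    # of summary measure compared across CD, IOVD and combined conditions.
    slope, _intercept = np.polyfit(t, vergence, 1)
    return slope

# Synthetic converging response: each eye rotates 0.5 deg/s inward,
# giving a vergence slope of 1.0 deg/s.
t = np.linspace(0.0, 1.0, 100)
v = vergence_trace(2.0 + 0.5 * t, 2.0 - 0.5 * t)
```

Comparing such slopes across cue conditions, together with the sign agreement between vergence and stimulus direction, is the kind of analysis the abstract describes.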
On the Inverse Problem of Binocular 3D Motion Perception
It is shown that existing processing schemes of 3D motion perception such as interocular velocity difference, changing disparity over time, as well as joint encoding of motion and disparity, do not offer a general solution to the inverse optics problem of local binocular 3D motion. Instead we suggest that local velocity constraints in combination with binocular disparity and other depth cues provide a more flexible framework for the solution of the inverse problem. In the context of the aperture problem we derive predictions from two plausible default strategies: (1) the vector normal prefers slow motion in 3D whereas (2) the cyclopean average is based on slow motion in 2D. Predicting perceived motion directions for ambiguous line motion provides an opportunity to distinguish between these strategies of 3D motion processing. Our theoretical results suggest that velocity constraints and disparity from feature tracking are needed to solve the inverse problem of 3D motion perception. It seems plausible that motion and disparity input is processed in parallel and integrated late in the visual processing hierarchy.
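The aperture constraint underlying both default strategies can be made concrete with a small 2D sketch (an illustrative toy, not the paper's derivation; the helper names are hypothetical): through a local aperture only the velocity component normal to a line's orientation is measurable, and the cyclopean-average strategy then combines the two eyes' 2D normal velocities before any 3D interpretation.

```python
import numpy as np

def normal_velocity(true_v, line_dir):
    # Aperture problem: only the component of the true 2-D velocity
    # normal to the line's orientation is recoverable locally.
    n = np.array([-line_dir[1], line_dir[0]], dtype=float)  # unit normal
    n /= np.linalg.norm(n)
    return np.dot(true_v, n) * n

def cyclopean_average(v_left, v_right):
    # Default strategy (2): average the two eyes' 2-D normal velocities,
    # i.e. prefer slow motion in 2-D.
    return 0.5 * (np.asarray(v_left) + np.asarray(v_right))

# A vertical line translating up-and-right: only the horizontal
# (normal) component survives the aperture.
v_n = normal_velocity(np.array([1.0, 1.0]), line_dir=np.array([0.0, 1.0]))
```

The vector-normal strategy, by contrast, would select the slowest 3D velocity consistent with both eyes' constraint lines; for ambiguous line motion the two strategies therefore predict different perceived directions.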
Systematic misperceptions of 3-D motion explained by Bayesian inference
People make surprising but reliable perceptual errors. Here, we provide a unified explanation for systematic errors in the perception of three-dimensional (3-D) motion. To do so, we characterized the binocular retinal motion signals produced by objects moving through arbitrary locations in 3-D. Next, we developed a Bayesian model, treating 3-D motion perception as optimal inference given sensory noise in the measurement of retinal motion. The model predicts a set of systematic perceptual errors, which depend on stimulus distance, contrast, and eccentricity. We then used a virtual-reality headset as well as a standard 3-D desktop stereoscopic display to test these predictions in a series of perceptual experiments. As predicted, we found evidence that errors in 3-D motion perception depend on the contrast, viewing distance, and eccentricity of a stimulus. These errors include a lateral bias in perceived motion direction and a surprising tendency to misreport approaching motion as receding and vice versa. In sum, we present a Bayesian model that provides a parsimonious account for a range of systematic misperceptions of motion in naturalistic environments.
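The core Bayesian mechanism can be sketched with the textbook Gaussian case (a minimal sketch assuming a zero-mean "slow motion" prior and Gaussian sensory noise; the parameter values are arbitrary, and the paper's full model is richer than this): the posterior-mean velocity estimate shrinks the measurement toward zero, and the shrinkage grows as sensory noise grows, e.g. at low contrast or large eccentricity.

```python
import numpy as np

def map_velocity(v_measured, sigma_noise, sigma_prior):
    # Posterior mean for velocity under a Gaussian likelihood (sensory
    # noise) and a zero-mean Gaussian slow-motion prior. The gain
    # sigma_prior^2 / (sigma_prior^2 + sigma_noise^2) shrinks noisy
    # measurements toward zero (slow) motion.
    gain = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_noise ** 2)
    return gain * np.asarray(v_measured)

# Lower contrast -> higher sensory noise -> stronger bias toward slow
# motion, the kind of contrast-dependent error the abstract reports.
high_contrast = map_velocity([2.0, 0.5], sigma_noise=0.5, sigma_prior=1.0)
low_contrast = map_velocity([2.0, 0.5], sigma_noise=2.0, sigma_prior=1.0)
```

Because the depth component of binocular retinal motion is measured less reliably than the lateral component, this kind of shrinkage can produce the lateral bias, and in extreme cases sign errors, in perceived motion-in-depth.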
