BAND SELECTION METHOD APPLIED TO M3 (MOON MINERALOGY MAPPER)
Poster abstract: Remote sensing optical sensors, such as those on board satellites and planetary probes, are able to detect and measure solar radiation at improved spectral and spatial resolution. In particular, a hyperspectral dataset often consists of tens to hundreds of specified wavelength bands and contains a vast amount of spectral information for potential processing. One drawback of such a large spectral dataset is information redundancy resulting from high correlation between narrow spectral bands. Reducing the data dimensionality is critical in practical hyperspectral remote sensing applications.
Price’s method is a band selection approach that uses a small subset of bands to accurately reconstruct the full hyperspectral dataset. The method seeks to represent the dataset by a weighted sum of basis functions. An iterative process is used to successively approximate the full dataset. The process ends when the last basis function no longer provides a significant contribution to the reconstruction of the dataset, i.e. the basis function is dominated by noise.
The research presented examines the feasibility of Price’s method for extracting an optimal band subset from recently acquired lunar hyperspectral images recorded by the Moon Mineralogy Mapper (M3) instrument on board the Chandrayaan-1 spacecraft. The Apollo 17 landing site was used for testing of the band selection method.
Preliminary results indicate that the band selection method is able to successfully reconstruct the original hyperspectral dataset with minimal error. In a recent test case, 15 bands were used to reconstruct the original 74 bands of reflectance data. This represents an accurate reconstruction using only 20% of the original dataset.
The results from this study can help to configure the spectral channels of future optical instruments for lunar exploration. The channels can be chosen based on knowledge of which wavelength bands carry the greatest relevant information for characterizing the geology of a particular location.
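The iterative reconstruction described above can be illustrated with a short greedy loop: start from one band, fit least-squares weights that reconstruct every band from the selected subset, and keep adding the worst-reconstructed band until the error improvement drops to noise level. This is a minimal sketch in the spirit of Price's method, not the M3 implementation; the greedy residual criterion, the stopping tolerance, and the function names are assumptions for illustration.

```python
import numpy as np

def select_bands(X, max_bands=15, tol=1e-3):
    """Greedy band-selection sketch (illustrative, not Price's exact algorithm).

    X is a (pixels x bands) reflectance matrix. Iteratively add the band
    with the largest reconstruction residual, refit a least-squares
    reconstruction of all bands from the selected subset, and stop when
    the error no longer improves meaningfully (i.e. is noise-dominated).
    """
    selected = [int(np.argmax(X.var(axis=0)))]   # seed: most variable band
    prev_err = np.inf
    for _ in range(max_bands - 1):
        B = X[:, selected]                        # basis: selected bands
        W, *_ = np.linalg.lstsq(B, X, rcond=None) # weights so that X ~ B @ W
        resid = X - B @ W
        err = np.sqrt((resid ** 2).mean())        # RMS reconstruction error
        if prev_err - err < tol * prev_err:       # negligible gain -> stop
            break
        prev_err = err
        selected.append(int(np.argmax(resid.var(axis=0))))  # worst band next
    return selected
```

For data with strong inter-band correlation, such as the 74-band case above, a loop of this kind typically retains only a small subset of bands before the residual flattens out.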
Perceiving Illumination Inconsistencies in Scenes
The human visual system is adept at detecting and encoding statistical regularities in its spatio-temporal environment. Here we report an unexpected failure of this ability in the context of perceiving inconsistencies in illumination distributions across a scene. Contrary to predictions from previous studies [Enns and Rensink, 1990; Sun and Perona, 1996a, 1996b, 1997], we find that the visual system displays a remarkable lack of sensitivity to illumination inconsistencies, both in experimental stimuli and in images of real scenes. Our results allow us to draw inferences regarding how the visual system encodes illumination distributions across scenes. Specifically, they suggest that the visual system does not verify the global consistency of locally derived estimates of illumination direction.
Onset Rivalry: The Initial Dominance Phase Is Independent Of Ongoing Perceptual Alternations
Binocular rivalry has been used to study a wide range of visual processes, from the integration of low-level features to the selection of signals that reach awareness. However, many of these studies do not distinguish between early and late phases of rivalry. There is clear evidence that the “onset” stage of rivalry is characterized by stable, yet idiosyncratic biases that are not evident in the average dominance of sustained rivalry viewing. Low-level stimulus features also have robust effects in the onset phase that are not seen in sustained rivalry, suggesting these phases may be driven at least partly by different neural mechanisms. The effects of high-level cognitive and affective factors at onset are less clear but also show differences from their effects in sustained viewing. These findings have important implications for the interpretation of any rivalry experiments using brief presentation paradigms and for understanding how the brain copes with binocular discrepancies in natural viewing conditions in which our eyes constantly move around an ever-changing environment. This review will summarize current research and explore the factors influencing this “onset” stage.
Visual Search for Feature and Conjunction Targets with an Attention Deficit
Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited a substantial reduction of their serial search rates for a conjunction target with contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration whereas it does not appear to affect feature encoding.
Anatomical Constraints on Attention: Hemifield Independence Is a Signature of Multifocal Spatial Selection
Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms.
What Line Drawings Reveal About the Visual Brain
Scenes in the real world carry large amounts of information about color, texture, shading, illumination, and occlusion, giving rise to our perception of a rich and detailed environment. In contrast, line drawings have only a sparse subset of scene contours. Nevertheless, they also trigger vivid three-dimensional impressions despite having no equivalent in the natural world. Here, we ask why line drawings work. We show that they exploit the underlying neural codes of vision, and that artists’ intuitions go well beyond the understanding of vision found in current neuroscience and computer vision.
Do Artists See Their Retinas?
Our perception starts with the image that falls on our retina and on this retinal image, distant objects are small and shadowed surfaces are dark. But this is not what we see. Visual constancies correct for distance so that, for example, a person approaching us does not appear to become a larger person. Interestingly, an artist, when rendering a scene realistically, must undo all these corrections, making distant objects again small. To determine whether years of art training and practice have conferred any specialized visual expertise, we compared the perceptual abilities of artists to those of non-artists in three tasks. We first asked them to adjust either the size or the brightness of a target to match it to a standard that was presented on a perspective grid or within a cast shadow. We instructed them to ignore the context, judging size, for example, by imagining the separation between their fingers if they were to pick up the test object from the display screen. In the third task, we tested the speed with which artists access visual representations. Subjects searched for an L-shape in contact with a circle; the target was an L-shape, but because of visual completion, it appeared to be a square occluded behind a circle, camouflaging the L-shape that is explicit on the retinal image. Surprisingly, artists were as affected by context as non-artists in all three tests. Moreover, artists took, on average, significantly more time to make their judgments, implying that they were doing their best to demonstrate the special skills that we, and they, believed they had acquired. Our data therefore support the proposal from Gombrich that artists do not have special perceptual expertise to undo the effects of constancies. Instead, once the context is present in their drawing, they need only compare the drawing to the scene to match the effect of constancies in both.
Sustained attention and the flash grab effect
Acknowledgments: The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007–2013)/ERC Grant Agreement No. AG324070 to PC, from NSERC Canada Discovery Grant RGPIN-2019-03989 to PC, and from Leverhulme Early Career Fellowship ECF-2020-488 to NA. Peer reviewed.
A shape-contrast effect for briefly presented stimuli.
When a suprathreshold visual stimulus is flashed for 60-300 ms and masked, though it is no longer visibly degraded, the perceived shape is vulnerable to distortion effects, especially when a 2nd shape is present. Specifically, when preceded by a flashed line, a briefly flashed circle appears to be an ellipse elongated perpendicular to the line. Given an appropriate stimulus onset asynchrony, this distortion is perceived when the 2 stimuli (~4°) are presented as far as 12° apart but is not due to perception of apparent motion between the 2 stimuli. Additional pairs of shapes defined by taper and overall curvature also revealed similar nonlocal shape distortion effects. The test shapes always appeared to be more dissimilar to the priming shapes, a distortion termed a shape-contrast effect. Its properties are consistent with the response characteristics of the shape-tuned neurons in the inferotemporal cortex and may reveal the underlying dimensions of early shape encoding. From the instant a stimulus is presented, the visual system accumulates information about the stimulus and begins to generate a subjective impression of its shape and location. For very brief presentations terminated by a mask, stimuli look fuzzy, ill defined, or intertwined with the details of the mask. Several studies have shown that for durations greater than ~50 ms, the stimulus begins to have a relatively sharp, crisp appearance and is seen independently of the mask.
Where Are You Looking? Pseudogaze in Afterimages
How do we know where we are looking? A frequent assumption is that the subjective experience of our direction of gaze is assigned to the location in the world that falls on our fovea. However, we find that observers can shift their subjective direction of gaze among different nonfoveal points in an afterimage. Observers were asked to look directly at different corners of a diamond-shaped afterimage. When the requested corner was 3.5° in the periphery, the observer often reported that the image moved away in the direction of the attempted gaze shift. However, when the corner was at 1.75° eccentricity, most reported successfully fixating at the point. Eye-tracking data revealed systematic drift during the subjective fixations on peripheral locations. For example, when observers reported looking directly at a point above the fovea, their eyes were often drifting steadily upwards. We then asked observers to make a saccade from a subjectively fixated, nonfoveal point to another point in the afterimage, 7° directly below their fovea. The observers consistently reported making appropriately diagonal saccades, but the eye movement traces only occasionally followed the perceived oblique direction. These results suggest that the perceived direction of gaze can be assigned flexibly to an attended point near the fovea. This may be how the visual world acquires its stability during fixation of an object, despite the drifts and microsaccades that are normal characteristics of visual fixation.
