Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices.
Computational theories propose that attention modulates the topographical landscape of spatial 'priority' maps in regions of the visual cortex so that the location of an important object is associated with higher activation levels. Although studies of single-unit recordings have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here we used functional magnetic resonance imaging and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size.
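The reconstruction step this abstract describes follows the general logic of an inverted encoding model: estimate voxel-wise channel weights from training data, then invert the weights to recover channel response profiles from held-out activation patterns. A minimal sketch with simulated data (the channel count, cosine-power tuning, counts, and noise level are illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_channels, n_trials = 50, 8, 120

# Hypothetical basis: channels tuned to 8 spatial positions, modeled as
# half-wave-rectified raised cosines (a common choice in IEM work).
positions = rng.uniform(0, 2 * np.pi, n_trials)
centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
C_train = np.cos(positions[None, :] - centers[:, None]).clip(min=0) ** 6

# Simulated training data: each voxel is a random mixture of channels.
W_true = rng.normal(size=(n_voxels, n_channels))
B_train = W_true @ C_train + 0.1 * rng.normal(size=(n_voxels, n_trials))

# Step 1: estimate channel weights by least squares (B ~ W @ C).
W_hat = B_train @ np.linalg.pinv(C_train)

# Step 2: invert the model on held-out data to reconstruct the spatial
# channel response profile from the activation pattern alone.
B_test = W_true @ C_train[:, :10]
C_hat = np.linalg.pinv(W_hat) @ B_test
```

In the real analysis, amplitude and width of the reconstructed profile are then compared across attended and ignored conditions.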
Fluctuations in instantaneous frequency predict alpha amplitude during visual perception.
Rhythmic neural activity in the alpha band (8-13 Hz) is thought to have an important role in the selective processing of visual information. Typically, modulations in alpha amplitude and instantaneous frequency are thought to reflect independent mechanisms impacting dissociable aspects of visual information processing. However, in complex systems with interacting oscillators such as the brain, amplitude and frequency are mathematically dependent. Here, we record electroencephalography in human subjects and show that both alpha amplitude and instantaneous frequency predict behavioral performance in the same visual discrimination task. Consistent with a model of coupled oscillators, we show that fluctuations in instantaneous frequency predict alpha amplitude on a single trial basis, empirically demonstrating that these metrics are not independent. This interdependence suggests that changes in amplitude and instantaneous frequency reflect a common change in the excitatory and inhibitory neural activity that regulates alpha oscillations and visual information processing.
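Instantaneous amplitude and frequency of an alpha-band signal are conventionally extracted with a band-pass filter followed by the Hilbert transform; the two metrics this abstract relates come from the same analytic signal. A toy sketch (the sampling rate, filter order, and synthetic frequency-modulated signal are assumptions for illustration, not the study's recording parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
# Toy signal: a ~10 Hz "alpha" oscillation whose frequency drifts slowly.
eeg = np.sin(2 * np.pi * (10 * t + 0.5 * np.sin(2 * np.pi * 0.5 * t)))

# Band-pass to the alpha band (8-13 Hz) before the Hilbert transform.
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)

analytic = hilbert(alpha)
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
```

Single-trial values of `amplitude` and `inst_freq` are what a coupled-oscillator account predicts should covary.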
Building on a Solid Baseline: Anticipatory Biases in Attention.
A brain-imaging paper by Kastner and colleagues in 1999 was the first to demonstrate that merely focusing attention at a spatial location changed the baseline activity level in various regions of human visual cortex even before any stimuli appeared. The study provided a touchstone for investigating cognitive-sensory interactions and understanding the proactive endogenous signals that shape perception.
The positional-specificity effect reveals a passive-trace contribution to visual short-term memory.
The positional-specificity effect refers to enhanced performance in visual short-term memory (VSTM) when the recognition probe is presented at the same location as the sample, even though location is irrelevant to the match/nonmatch decision. We investigated the mechanisms underlying this effect with behavioral and fMRI studies of object change-detection performance. To test whether the positional-specificity effect is a direct consequence of active storage in VSTM, we varied memory load, reasoning that it should be observed for all objects presented in a sub-span array of items. The results, however, indicated that although robust with a memory load of 1, the positional-specificity effect was restricted to the second of two sequentially presented sample stimuli in a load-of-2 experiment. An additional behavioral experiment showed that this disruption was not due to the increased load per se, because actively processing a second object--in the absence of a storage requirement--also eliminated the effect. These behavioral findings suggest that, during tests of object memory, position-related information is not actively stored in VSTM, but may be retained in a passive tag that marks the most recent site of selection. The fMRI data were consistent with this interpretation, failing to find location-specific bias in sustained delay-period activity, but revealing an enhanced response to recognition probes that matched the location of that trial's sample stimulus.
A computer vision model for visual-object-based attention and eye movements
This is the post-print version of the final paper published in Computer Vision and Image Understanding (copyright © 2008 Elsevier B.V.); changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. This paper presents a new computational framework for modelling visual-object-based attention and attention-driven eye movements within an integrated system, in a biologically inspired approach. Attention operates at multiple levels of visual selection by space, feature, object and group, depending on the nature of targets and visual tasks. Attentional shifts and gaze shifts build on shared processing circuits and control mechanisms while retaining their distinct functional roles, working together to fulfil flexible visual selection tasks in complicated visual environments. The framework integrates the important aspects of human visual attention and eye movements, resulting in sophisticated performance in complicated natural scenes. The proposed approach aims at exploring a useful visual selection system for computer vision, especially for use in cluttered natural visual environments. Funding: National Natural Science Foundation of China.
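A core mechanism in this class of attention models is a winner-take-all selection over a saliency map with inhibition of return, so that attention (and gaze) shifts from one salient region to the next. A generic sketch of that mechanism (not the paper's actual framework; the map here is a random stand-in for a computed saliency map):

```python
import numpy as np

rng = np.random.default_rng(3)
saliency = rng.random((32, 32))  # stand-in for a computed saliency map

def scanpath(sal, n_fixations=5, radius=4):
    """Winner-take-all gaze selection with inhibition of return:
    repeatedly fixate the most salient location, then suppress a
    neighborhood around it so selection moves on."""
    sal = sal.copy()
    fixations = []
    yy, xx = np.mgrid[:sal.shape[0], :sal.shape[1]]
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        fixations.append((int(y), int(x)))
        # Inhibition of return: zero out a disk around the fixated point.
        sal[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0
    return fixations

path = scanpath(saliency)
```

Object- and group-level selection, as in the paper, would replace the pixel-wise disk suppression with suppression of whole segmented regions.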
Reading the mind's eye: Decoding category information during mental imagery
Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral-temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of “diagnostic voxels” (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom-up input, cortical back projections can selectively re-activate specific patterns of neural activity.
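The cross-decoding logic (train a classifier on viewing trials, test it on imagery trials) can be sketched with simulated voxel patterns. The simulation parameters and the use of logistic regression are assumptions for illustration, not the study's methods; the key idea is that cross-condition accuracy stays above chance only if the two conditions share a representational format:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels, n_per_cat = 100, 40
categories = [0, 1, 2, 3]  # e.g., food, tools, faces, buildings

# Toy "voxel patterns": a shared per-category template plus noise.
# Imagery reuses the viewing templates at reduced strength, the scenario
# in which cross-decoding is expected to succeed.
templates = rng.normal(size=(4, n_voxels))

def simulate(scale, noise):
    X = np.vstack([scale * templates[c]
                   + noise * rng.normal(size=(n_per_cat, n_voxels))
                   for c in categories])
    y = np.repeat(categories, n_per_cat)
    return X, y

X_seen, y_seen = simulate(scale=1.0, noise=1.0)  # actual viewing
X_imag, y_imag = simulate(scale=0.4, noise=1.0)  # imagery: weaker signal

# Train on perception, test on imagery.
clf = LogisticRegression(max_iter=1000).fit(X_seen, y_seen)
cross_acc = clf.score(X_imag, y_imag)  # chance level is 0.25
```

The reverse direction (train on imagery, test on viewing) follows by swapping the two datasets.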
Mixing and mingling in visual working memory: Inter-item competition is feature-specific during encoding and feature-general during maintenance.
Visual working memory (WM) is a central cognitive ability but is capacity-limited due to competition between remembered items. Understanding whether inter-item competition depends on the similarity of the features being remembered has important implications for determining if competition occurs in sensory or post-sensory stages of processing. Experiment 1 compared the precision of WM across homogeneous displays, where items belonged to the same feature type (e.g., colorful circles), and heterogeneous displays (e.g., colorful circles and oriented bars). Performance was better for heterogeneous displays, suggesting a feature-specific component of interference. However, Experiment 2 used a retro-cueing task to isolate encoding from online maintenance and revealed that inter-item competition during storage was not feature-specific. The data support recent models of WM in which inter-item interference - and hence capacity limits in WM - occurs in higher-order structures that receive convergent input from a diverse array of feature-specific representations.
The Importance of Considering Model Choices When Interpreting Results in Computational Neuroimaging.
Model-based analyses open exciting opportunities for understanding neural information processing. In a commentary published in eNeuro, Gardner and Liu (2019) discuss the role of model specification in interpreting results derived from complex models of neural data. As a case study, they suggest that one such analysis, the inverted encoding model (IEM), should not be used to assay properties of stimulus representations because the ability to apply linear transformations at various stages of the analysis procedure renders results arbitrary. Here, we argue that the specification of all models is arbitrary to the extent that an experimenter makes choices based on current knowledge of the model system. However, the results derived from any given model, such as the reconstructed channel response profiles obtained from an IEM analysis, are uniquely defined and are arbitrary only in the sense that changes in the model can predictably change results. IEM-based channel response profiles should therefore not be considered arbitrary when the model is clearly specified and guided by our best understanding of neural population representations in the brain regions being analyzed. Intuitions derived from this case study are important to consider when interpreting results from all model-based analyses, which are similarly contingent upon the specification of the models used.
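The claim that IEM results change predictably, rather than arbitrarily, under a re-specification of the model can be checked numerically: re-expressing the channel basis through an invertible linear transform A changes the reconstructed channel responses by exactly that same transform. A noiseless toy demonstration (the sizes and the least-squares estimator are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_channels, n_train, n_test = 60, 6, 200, 20

C = rng.uniform(size=(n_channels, n_train))     # channel responses, training
C_test = rng.uniform(size=(n_channels, n_test))
W_true = rng.normal(size=(n_voxels, n_channels))
B, B_test = W_true @ C, W_true @ C_test         # noiseless voxel data

def iem(C_train, B_train, B_tst):
    W = B_train @ np.linalg.pinv(C_train)       # estimate channel weights
    return np.linalg.pinv(W) @ B_tst            # reconstruct channel responses

C_hat = iem(C, B, B_test)

# Re-specify the model in a transformed channel basis C' = A @ C.
A = rng.normal(size=(n_channels, n_channels))   # almost surely invertible
C_hat_prime = iem(A @ C, B, B_test)

# The new reconstruction is exactly A times the old one: a clearly
# specified change in the model changes the result predictably.
assert np.allclose(C_hat_prime, A @ C_hat)
```

This is the sense in which reconstructions are uniquely defined once the basis is specified, which is the commentary's central point.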
Building on a solid baseline: Anticipatory biases in attention
We revisit a seminal study using brain imaging to investigate spatial attention in human visual cortex. We reflect on the study's important novel contributions at the time and illustrate how subsequent studies have built on its legacy to change our understanding of the neural basis of attention.
Motor Preparatory Activity in Posterior Parietal Cortex is Modulated by Subjective Absolute Value
For optimal response selection, the consequences associated with behavioral success or failure must be appraised. To determine how monetary consequences influence the neural representations of motor preparation, human brain activity was scanned with fMRI while subjects performed a complex spatial visuomotor task. At the beginning of each trial, reward context cues indicated the potential gain and loss imposed for correct or incorrect trial completion. fMRI activity in canonical reward structures reflected the expected value related to the context. In contrast, motor preparatory activity in posterior parietal and premotor cortex peaked in high “absolute value” (high gain or loss) conditions: being highest for large gains in subjects who believed they performed well while being highest for large losses in those who believed they performed poorly. These results suggest that the neural activity preceding goal-directed actions incorporates the absolute value of that action, predicated upon subjective, rather than objective, estimates of one's performance.
