
    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes that influence sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. Because the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and cognitive levels. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and the categorical attributes of naturalistic objects (e.g. tools, animals). In this review, we discuss and integrate existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    Visual adaptation enhances action sound discrimination

    Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here we show a novel crossmodal adaptation effect, in which adaptation to a visual stimulus enhances subsequent auditory perception. We found that, compared with no adaptation, prior adaptation to visual, auditory or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action ‘matched’ the auditory action. In addition, prior adaptation to a visual, auditory or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action-sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by post-perceptual responses or decisions. More generally, these results indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In everyday social interaction we automatically integrate another person’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity elicited by viewing and listening to an animated female face producing non-verbal human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between the fMRI and ERP data, we propose a mechanism in which a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions through increased processing speed (at the N170) and efficiency (decreased amplitude of auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
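    The additive criteria used above (superadditive, AV maximum, common activation) reduce to a few lines of arithmetic on response amplitudes. The sketch below is illustrative only: the function name, tolerance, and amplitude values are assumptions, not values or analysis code from this study.

```python
# Classify a hypothetical audiovisual (AV) response relative to the
# unisensory auditory (AUD) and visual (VIS) responses.
# All amplitude values are illustrative, not data from the study.

def classify_av_response(av, aud, vis, tol=1e-6):
    """Label an AV response amplitude using simple additive criteria."""
    unisensory_sum = aud + vis
    unisensory_max = max(aud, vis)
    if av > unisensory_sum + tol:
        return "superadditive (AV > AUD + VIS)"
    if av > unisensory_max + tol:
        return "AV maximum / underadditive (max(AUD, VIS) < AV <= AUD + VIS)"
    if abs(av - aud) <= tol or abs(av - vis) <= tol:
        return "common activation (AV equal to a unisensory response)"
    return "no multisensory enhancement"

# Made-up amplitudes in arbitrary units:
print(classify_av_response(av=1.2, aud=0.9, vis=0.5))  # AV maximum / underadditive
print(classify_av_response(av=1.5, aud=0.7, vis=0.6))  # superadditive
```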

    Perception of Loudness Is Influenced by Emotion

    Loudness perception is thought to be a modular system that is unaffected by other brain systems. We tested the hypothesis that loudness perception can be influenced by negative affect, using a conditioning paradigm in which some auditory stimuli were paired with aversive experiences while others were not. We found that the same auditory stimulus was reported as louder, more negative and more fear-inducing when it had been conditioned with an aversive experience than when it served as a control stimulus. This result supports an important role for emotion in auditory perception.

    Top-down and bottom-up modulation in processing bimodal face/voice stimuli

    Background: Processing of multimodal information is a critical capacity of the human brain, with classic studies showing that bimodal stimulation can either facilitate or interfere with perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks. Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP, but from 180-230 ms larger right frontal activity was seen for incongruent than for congruent stimuli. Conclusions: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity was modulated differently by top-down and bottom-up information: top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.

    Multisensory Integration and Attention in Autism Spectrum Disorder: Evidence from Event-Related Potentials

    Successful integration of simultaneously perceived sensory signals is crucial for social behavior. Recent findings indicate that this multisensory integration (MSI) can be modulated by attention. Theories of Autism Spectrum Disorder (ASD) suggest that MSI is affected in this population, while it remains unclear to what extent this is related to impairments in attentional capacity. In the present study, event-related potentials (ERPs) following emotionally congruent and incongruent face-voice pairs were measured in 23 high-functioning adults with ASD and 24 age- and IQ-matched controls. MSI was studied while the attention of the participants was manipulated. ERPs were measured at typical auditory and visual processing peaks, namely the P2 and the N170. While controls showed MSI during both divided-attention and easy selective-attention tasks, individuals with ASD showed MSI during easy selective-attention tasks only. We conclude that individuals with ASD are able to process multisensory emotional stimuli, but that this processing is modulated differently by attention mechanisms, especially those associated with divided attention. This atypical interaction between attention and MSI is also relevant to treatment strategies, with training of multisensory attentional control possibly being more beneficial than conventional sensory integration therapy.
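    As a rough illustration of the congruency comparison described above, the sketch below computes an incongruent-minus-congruent amplitude difference per group and attention condition. The data structure, condition labels, and amplitude values are hypothetical placeholders, not the authors' analysis pipeline.

```python
# Illustrative congruency-effect computation (incongruent minus congruent
# mean ERP amplitude) per group and attention condition.
# All numbers are hypothetical placeholders, not data from the study.
import statistics

erp_amplitudes = {  # group -> attention condition -> congruency -> amplitudes (microvolts)
    "control": {
        "divided":   {"congruent": [4.1, 3.8, 4.3], "incongruent": [5.0, 4.9, 5.2]},
        "selective": {"congruent": [4.0, 4.2, 3.9], "incongruent": [5.1, 4.8, 5.0]},
    },
    "ASD": {
        "divided":   {"congruent": [4.2, 4.0, 4.1], "incongruent": [4.3, 4.1, 4.2]},
        "selective": {"congruent": [4.1, 3.9, 4.0], "incongruent": [4.9, 5.0, 4.8]},
    },
}

for group, conditions in erp_amplitudes.items():
    for attention, cells in conditions.items():
        effect = (statistics.mean(cells["incongruent"])
                  - statistics.mean(cells["congruent"]))
        print(f"{group:7} {attention:9} congruency effect: {effect:+.2f} uV")
```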

    Sensory information in perceptual-motor sequence learning: visual and/or tactile stimuli

    Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study we trained participants in an SRT task with visual-only, tactile-only, or bimodal (visual and tactile) stimulus presentation. Sequence performance for the bimodal and visual-only training groups was similar, and both performed better than the tactile-only training group. In a subsequent transfer phase, participants from all three training groups were tested in conditions with visual, tactile, and bimodal stimulus presentation. Sequence performance of the visual-only and bimodal training groups was again highly similar across these identical stimulus conditions, indicating that the addition of tactile stimuli did not benefit the bimodal training group. Additionally, comparing identical stimulus conditions in the transfer phase showed that the poorer sequence performance of the tactile-only group during training probably reflected not a difference in sequence learning but merely a difference in the expression of sequence knowledge.

    Restricted Attentional Capacity within but Not between Sensory Modalities: An Individual Differences Approach

    Background: Most people show a remarkable deficit in reporting the second of two targets presented in close temporal succession, reflecting an attentional blink (AB). An often-ignored aspect of the AB is that there are large individual differences in the magnitude of the effect. Here we exploit these individual differences to address a long-standing question: does attention to a visual target come at a cost for attention to an auditory target (and vice versa)? More specifically, the goal of the current study was to investigate (a) whether individuals with a large within-modality AB also show a large cross-modal AB, and (b) whether individual differences in AB magnitude within different modalities correlate or are completely separate. Methodology/Principal Findings: While minimizing differential task difficulty and the chance of a task switch, a significant AB was observed when both targets were presented within the auditory or the visual modality, and a positive correlation was found between individual within-modality AB magnitudes. However, neither a cross-modal AB nor a correlation between cross-modal and within-modality AB magnitudes was found. Conclusion/Significance: The results provide strong evidence that a major source of attentional restriction lies in modality-specific sensory systems rather than in a central amodal system, effectively settling a long-standing debate. Individuals with a large within-modality AB may be especially committed or focused in their processing of the first target, and to some extent that tendency to focus could cross modalities, as reflected in the within-modality correlation. However, what they are focusing (resource allocation, blocking of processing) is strictly within-modality, as it affects only the second target on within-modality trials. The findings show that individual differences in AB magnitude can provide important information about the modular structure of human cognition.
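    The individual-differences logic described above amounts to computing an AB magnitude per participant and condition and then correlating those magnitudes across conditions. The sketch below assumes one common operationalisation (long-lag minus short-lag T2|T1 accuracy) and made-up accuracy values; it is not the authors' scoring procedure.

```python
# Sketch of the individual-differences logic: compute an attentional blink
# (AB) magnitude per participant and correlate magnitudes across conditions.
# The scoring rule (long-lag minus short-lag T2|T1 accuracy) and all numbers
# are assumptions for illustration only.
from statistics import correlation  # requires Python 3.10+

def ab_magnitude(short_lag_acc, long_lag_acc):
    """Larger values indicate a deeper blink at short lags."""
    return long_lag_acc - short_lag_acc

# Hypothetical per-participant T2|T1 accuracies as (short lag, long lag).
within_visual   = [(0.55, 0.90), (0.70, 0.92), (0.40, 0.85), (0.65, 0.88)]
within_auditory = [(0.60, 0.91), (0.72, 0.93), (0.45, 0.87), (0.68, 0.90)]

vis_ab = [ab_magnitude(s, l) for s, l in within_visual]
aud_ab = [ab_magnitude(s, l) for s, l in within_auditory]

# A positive value here would mirror the within-modality correlation
# reported in the abstract.
print(f"r(within-visual AB, within-auditory AB) = {correlation(vis_ab, aud_ab):.2f}")
```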