
    Sound predictability as a higher-order cue in auditory scene analysis

    A major challenge for the auditory system is to disentangle signals emitted by two or more sound sources that are active in a temporally interleaved manner (sequential stream segregation). Besides distinct characteristics of the individual signals (e.g., their timbre, location, and pitch), one important cue for distinguishing the sound sources is how their emitted signals unfold over time. It seems intuitively plausible that signals that unfold predictably with respect to their acoustic features and time-points of occurrence, such as the repetitive signature of a train moving on the rails, can be more readily identified as originating from one sound source. Based on this rationale, predictive elements have successfully been incorporated into computational models of auditory scene analysis for many years.

    Predictability effects in auditory scene analysis: a review

    Many sound sources emit signals in a predictable manner. The idea that predictability can be exploited to support the segregation of one source's signal emissions from the overlapping signals of other sources has been expressed for a long time. Yet experimental evidence for a strong role of predictability within auditory scene analysis (ASA) has been scarce. Recently, there has been an upsurge in experimental and theoretical work on this topic resulting from fundamental changes in our perspective on how the brain extracts predictability from series of sensory events. Based on effortless predictive processing in the auditory system, it becomes more plausible that predictability would be available as a cue for sound source decomposition. In the present contribution, empirical evidence for such a role of predictability in ASA will be reviewed. It will be shown that predictability affects ASA both when it is present in the sound source of interest (perceptual foreground) and when it is present in other sound sources that the listener wishes to ignore (perceptual background). First evidence pointing toward age-related impairments in the latter capacity will be addressed. Moreover, it will be illustrated how effects of predictability can be shown by means of objective listening tests as well as by subjective report procedures, with the latter approach typically exploiting the multi-stable nature of auditory perception. Critical aspects of study design will be delineated to ensure that predictability effects can be unambiguously interpreted. Possible mechanisms for a functional role of predictability within ASA will be discussed, and an analogy with the old-plus-new heuristic for grouping simultaneous acoustic signals will be suggested.

    Modulation-frequency acts as a primary cue for auditory stream segregation

    In our surrounding acoustic world, sounds are produced by different sources and interfere with each other before arriving at the ears. A key function of the auditory system is to provide consistent and robust descriptions of the coherent sound groupings and sequences (auditory objects), which likely correspond to the various sound sources in the environment. This function has been termed auditory stream segregation. In the current study we tested the effects of separation in amplitude-modulation frequency on the segregation of concurrent sound sequences in the auditory stream-segregation paradigm (van Noorden 1975). The aim of the study was to assess 1) whether differential amplitude modulation would help in separating concurrent sound sequences and 2) whether this cue would interact with previously studied static cues (carrier frequency and location difference) in segregating concurrent streams of sound. We found that amplitude-modulation difference is utilized as a primary cue for stream segregation and that it interacts with other primary cues such as carrier-frequency and location differences.
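
    To make the stimulus manipulation concrete, the following is a minimal sketch of an interleaved two-tone sequence in the spirit of the van Noorden paradigm, in which the two streams share a carrier but differ in amplitude-modulation (AM) rate. It is not the authors' stimulus code; the sample rate, carrier frequency, AM rates, and tone duration are illustrative assumptions.

    ```python
    # Minimal sketch: A-B interleaved tone sequence where streams differ in AM rate.
    # All parameter values are illustrative assumptions, not the study's settings.
    import numpy as np

    FS = 44100          # sample rate (Hz)
    TONE_DUR = 0.1      # duration of each tone (s)
    CARRIER = 1000.0    # shared carrier frequency (Hz)
    AM_RATE_A = 40.0    # AM rate of stream A (Hz)
    AM_RATE_B = 160.0   # AM rate of stream B (Hz); larger separation favours segregation

    def am_tone(carrier_hz, am_hz, dur=TONE_DUR, fs=FS):
        """Sinusoidal carrier with full-depth sinusoidal amplitude modulation."""
        t = np.arange(int(dur * fs)) / fs
        envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_hz * t))   # modulator in 0..1
        ramp = np.minimum(1.0, np.minimum(t, dur - t) / 0.01)     # 10-ms on/off ramps
        return np.sin(2 * np.pi * carrier_hz * t) * envelope * ramp

    def abab_sequence(n_pairs=20):
        """Alternate A and B tones to form one temporally interleaved sequence."""
        a = am_tone(CARRIER, AM_RATE_A)
        b = am_tone(CARRIER, AM_RATE_B)
        return np.concatenate([np.concatenate([a, b]) for _ in range(n_pairs)])

    sequence = abab_sequence()
    print(f"{sequence.size / FS:.2f} s of stimulus generated")
    ```

    Listeners would then report whether such a sequence is heard as one integrated stream or as two concurrent streams; the AM-rate separation is the variable of interest.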

    Different roles of similarity and predictability in auditory stream segregation

    Sound sources often emit trains of discrete sounds, such as a series of footsteps. Previously, two different principles have been suggested for how the human auditory system binds discrete sounds together into perceptual units. The feature similarity principle is based on linking sounds with similar characteristics over time. The predictability principle is based on linking sounds that follow each other in a predictable manner. The present study compared the effects of these two principles. Participants were presented with tone sequences and instructed to continuously indicate whether they perceived a single coherent sequence or two concurrent streams of sound. We investigated the influence of separate manipulations of similarity and predictability on these perceptual reports. Both grouping principles affected perception of the tone sequences, albeit with different characteristics. In particular, results suggest that whereas predictability is only analyzed for the currently perceived sound organization, feature similarity is also analyzed for alternative groupings of sound. Moreover, changing similarity or predictability within an ongoing sound sequence led to markedly different dynamic effects. Taken together, these results provide evidence for different roles of similarity and predictability in auditory scene analysis, suggesting that forming auditory stream representations and competition between alternatives rely on partly different processes.

    Weed hosts of the cotton whitefly (Bemisia tabaci (Genn.)) Homoptera Aleyrodidae


    Major weed hosts of nematodes in crop production


    Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: An event-related potential (ERP) study

    In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: One or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects an integrated, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound.
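
    As an illustration of the single-cue mistuning manipulation described above, the sketch below builds a harmonic complex tone with one mistuned partial, the kind of stimulus that typically elicits an ORN. It is a hedged assumption about stimulus construction, not the authors' code; the fundamental, number of harmonics, mistuned partial, and mistuning amount are illustrative.

    ```python
    # Minimal sketch: harmonic complex tone with one mistuned partial.
    # Parameter values are illustrative assumptions, not the study's settings.
    import numpy as np

    FS = 44100           # sample rate (Hz)
    DUR = 0.4            # tone duration (s)
    F0 = 220.0           # fundamental frequency (Hz)
    N_HARMONICS = 10
    MISTUNED_PARTIAL = 3 # which harmonic to shift
    MISTUNING = 0.08     # +8% frequency shift of that partial

    t = np.arange(int(DUR * FS)) / FS
    tone = np.zeros_like(t)
    for k in range(1, N_HARMONICS + 1):
        f = k * F0
        if k == MISTUNED_PARTIAL:
            f *= 1.0 + MISTUNING          # mistune this partial only
        tone += np.sin(2 * np.pi * f * t)
    tone /= N_HARMONICS                   # keep amplitude within range
    ```

    The mistuned partial tends to segregate perceptually from the rest of the complex, so listeners report hearing two concurrent sounds; delay and location manipulations can be combined with it in the same way.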

    Using binocular rivalry to tag foreground sounds: Towards an objective visual measure for auditory multistability

    In binocular rivalry, paradigms have been proposed for unobtrusive moment-by-moment readout of observers' perceptual experience (“no-report paradigms”). Here, we take a first step to extend this concept to auditory multistability. Observers continuously reported which of two concurrent tone sequences they perceived in the foreground: high-pitch (1008 Hz) or low-pitch (400 Hz) tones. Interstimulus intervals were either fixed per sequence (Experiments 1 and 2) or random with tones alternating (Experiment 3). A horizontally drifting grating was presented to each eye; to induce binocular rivalry, gratings had distinct colors and motion directions. To associate each grating with one tone sequence, a pattern on the grating jumped vertically whenever the respective tone occurred. We found that the direction of the optokinetic nystagmus (OKN)—induced by the visually dominant grating—could be used to decode the tone (high/low) that was perceived in the foreground well above chance. This OKN-based readout improved after observers had gained experience with the auditory task (Experiments 1 and 2) and for simpler auditory tasks (Experiment 3). We found no evidence that the visual stimulus affected auditory multistability. Although decoding performance is still far from perfect, our paradigm may eventually provide a continuous estimate of the currently dominant percept in auditory multistability.
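
    To make the OKN-based readout idea concrete, here is a hedged sketch of the decoding step: the sign of the horizontal eye-velocity signal, together with the fixed grating-to-tone mapping, classifies which tone sequence is currently in the foreground and is scored against the listener's overt report. The data below are synthetic placeholders generated for illustration only, not measurements or results from the study.

    ```python
    # Minimal sketch: decode the foreground tone from OKN direction and score it
    # against the listener's report. All data here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical coding: +1 = rightward OKN / high-pitch report,
    #                      -1 = leftward OKN / low-pitch report.
    reports = rng.choice([-1, 1], size=200)                            # button-press reports
    okn_velocity = reports * np.where(rng.random(200) < 0.75, 1, -1)   # noisy eye signal

    decoded = np.sign(okn_velocity)          # decode foreground tone from OKN direction
    accuracy = np.mean(decoded == reports)   # agreement with the overt report
    print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
    ```

    In this toy setup the eye signal agrees with the report on roughly three quarters of trials, which is the sense in which a real OKN readout could track the dominant percept "well above chance" without requiring a button press.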