204 research outputs found

    Speech Minus Spectrum Equals Time – Or What the Left Hemisphere is For

    Proceedings of the 4th Annual Meeting of the Berkeley Linguistics Society (1978), pp. 500-51

    Observer weighting of interaural cues in positive and negative envelope slopes of amplitude-modulated waveforms

    The auditory system can encode interaural delays in highpass-filtered complex sounds by phase locking to their slowly modulating envelopes. Spectrotemporal analysis of interaurally time-delayed highpass waveforms reveals the presence of a concomitant interaural level cue. The current study systematically investigated the contribution of time and concomitant level cues carried by the positive and negative envelope slopes of a modified sinusoidally amplitude-modulated (SAM) high-frequency carrier. The waveforms were generated by concatenating individual modulation cycles whose envelope peaks were extended by the desired interaural delay, allowing independent control of delays in the positive and negative modulation slopes. In experiment 1, thresholds were measured using a 2-interval forced-choice adaptive task for interaural delays in either the positive or negative modulation slopes. In a control condition, thresholds were measured for a standard SAM tone. In experiment 2, decision weights were estimated using a multiple-observation correlational method in a single-interval forced-choice task for interaural delays carried simultaneously and independently by the positive and negative slopes of the modulation envelope. In experiment 3, decision weights were measured for groups of 3 modulation cycles at the start, middle, and end of the waveform to determine the influence of onset dominance or recency effects. Results were consistent across experiments: thresholds were equal for the positive and negative modulation slopes. Decision weights were positive and equal for the time cue in the positive and negative envelope slopes. Weights were also larger for modulation cycles near the waveform onset. Weights estimated for the concomitant interaural level cue were positive for the positive envelope slope and negative for the negative slope, consistent with exclusive use of time cues.
    We thank Virginia M. Richards and Bruce G. Berg for helpful discussions. We also thank Brian C. J. Moore and an anonymous reviewer for their insightful comments on an earlier draft of the manuscript. Work supported by grants from the National Science Council, Taiwan (NSC 98-2410-H-008-081-MY3) and NIH (R01DC009659).
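    As a rough illustration of the stimulus construction described in this abstract, the sketch below builds a SAM-like high-frequency tone in which the rising (positive) and falling (negative) envelope slopes carry independent interaural delays, realized here as short holds inserted at the envelope trough and peak. The sample rate, carrier and modulation frequencies, cycle count, and raised-cosine modulator are assumptions for illustration, not parameters reported in the study.

```python
import numpy as np

FS = 48_000   # sample rate (Hz); assumed value
FC = 4_000    # high-frequency carrier (Hz); assumed value
FM = 128      # modulation rate (Hz); assumed value


def ear_cycle(trough_hold_s, peak_hold_s, fs=FS, fm=FM):
    """One raised-cosine modulation cycle. A hold at the trough delays the
    rising (positive) slope; a hold at the peak delays the falling (negative)
    slope relative to an unmodified cycle."""
    half = int(round(fs / (2 * fm)))          # samples per half cycle
    t = np.arange(half) / half
    rise = 0.5 * (1 - np.cos(np.pi * t))      # envelope 0 -> 1
    fall = rise[::-1]                         # envelope 1 -> 0
    return np.concatenate([np.zeros(int(round(trough_hold_s * fs))),
                           rise,
                           np.ones(int(round(peak_hold_s * fs))),
                           fall])


def make_stimulus(n_cycles, itd_pos_s, itd_neg_s, fs=FS, fc=FC):
    """Left/right waveforms whose positive and negative envelope slopes carry
    independent interaural delays (applied to the left ear). The right ear
    gets a compensating peak hold of itd_pos_s so the falling-slope delay
    equals itd_neg_s regardless of itd_pos_s."""
    cyc_l = ear_cycle(itd_pos_s, itd_neg_s)
    cyc_r = ear_cycle(0.0, itd_pos_s)
    n = max(cyc_l.size, cyc_r.size)           # pad cycles to a common length
    cyc_l = np.pad(cyc_l, (0, n - cyc_l.size))
    cyc_r = np.pad(cyc_r, (0, n - cyc_r.size))
    env_l = np.tile(cyc_l, n_cycles)
    env_r = np.tile(cyc_r, n_cycles)
    t = np.arange(env_l.size) / fs
    carrier = np.sin(2 * np.pi * fc * t)      # diotic carrier
    return env_l * carrier, env_r * carrier


# Example: a 0.5-ms delay carried only by the positive slopes, 24 cycles.
left, right = make_stimulus(n_cycles=24, itd_pos_s=0.0005, itd_neg_s=0.0)
```

    With these settings the left-ear rising edges lag the right-ear rising edges by 0.5 ms while the falling edges stay interaurally aligned, which is the kind of slope-specific delay manipulation the abstract describes.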

    Multisensory Perceptual Learning of Temporal Order: Audiovisual Learning Transfers to Vision but Not Audition

    Background: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Methodology/Principal Findings: Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and across modalities prior to learning, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.
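    For concreteness, a TOJ threshold of the kind tracked in this study is commonly summarized by fitting a cumulative Gaussian to the proportion of "A first" responses as a function of stimulus onset asynchrony; the fitted spread is the temporal resolution that shrinks with learning. The sketch below uses hypothetical response proportions, not data from the study, and the fitting choice is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: stimulus onset asynchronies (ms, positive = visual first)
# and the proportion of "visual first" responses at each SOA.
soa_ms = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240])
p_visual_first = np.array([0.05, 0.12, 0.25, 0.40, 0.55, 0.68, 0.80, 0.92, 0.97])


def psychometric(soa, pss, sigma):
    """Cumulative Gaussian: pss = point of subjective simultaneity,
    sigma = temporal resolution (smaller sigma after learning)."""
    return norm.cdf(soa, loc=pss, scale=sigma)


(pss, sigma), _ = curve_fit(psychometric, soa_ms, p_visual_first, p0=(0, 100))
# sigma is the SOA change that moves performance from 50% to ~84% "visual first".
print(f"PSS = {pss:.1f} ms, TOJ threshold (sigma) = {sigma:.1f} ms")
```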

    Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training

    Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine whether any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production.
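    Both the micromelody intervals (each smaller than a semitone) and vocal accuracy are naturally expressed in cents, i.e. 1200 times the base-2 logarithm of a frequency ratio. The snippet below shows that conversion with made-up numbers; the target pitch, the sung F0 estimates, and the 40-cent interval are illustrative assumptions, not values from the study.

```python
import numpy as np


def cents(f_produced_hz, f_target_hz):
    """Signed pitch deviation in cents (100 cents = one equal-tempered semitone)."""
    return 1200 * np.log2(np.asarray(f_produced_hz) / np.asarray(f_target_hz))


# Hypothetical single-note singing trial: target A3 (220 Hz), per-frame sung F0 estimates.
target_hz = 220.0
sung_hz = np.array([217.5, 218.9, 221.3, 222.0, 219.6])
deviation = cents(sung_hz, target_hz)
print(f"mean error = {deviation.mean():+.1f} cents, "
      f"mean absolute error = {np.abs(deviation).mean():.1f} cents")

# A "micromelody" step of, e.g., 40 cents (well below a semitone) above the target note:
second_note_hz = target_hz * 2 ** (40 / 1200)
```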