
    Across-frequency combination of interaural time difference in bilateral cochlear implant listeners

    The current study examined how cochlear implant (CI) listeners combine temporally interleaved envelope-ITD information across two sites of stimulation. When two cochlear sites jointly transmit ITD information, one possibility is that CI listeners can extract the most reliable ITD cues available. As a result, ITD sensitivity would be sustained or enhanced compared to single-site stimulation. Alternatively, mutual interference across multiple sites of ITD stimulation could worsen dual-site performance compared to listening to the better of the two electrode pairs. Two experiments used direct stimulation to examine how CI users integrate ITDs across two pairs of electrodes. Experiment 1 tested ITD discrimination for two stimulation sites using 100-Hz sinusoidally modulated 1000-pps-carrier pulse trains. Experiment 2 used the same stimuli ramped with 100-ms windows, as a control condition with minimized onset cues. For all stimuli, performance improved monotonically with increasing modulation depth. Results show that when CI listeners were stimulated with electrode pairs at two cochlear sites, sensitivity to ITDs was similar to that seen when only the electrode pair with better sensitivity was activated. None of the listeners showed a decrement in performance from the worse electrode pair. This could be achieved either by listening to the better electrode pair or by truly integrating the information across cochlear sites.
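
    For concreteness, the modulated pulse-train stimulus described above might be generated along these lines (a minimal sketch: the sampling rate, rectangular pulse shape, and envelope formula are assumptions, not the study's implementation):

        import numpy as np

        def sam_pulse_train(dur_s=0.3, fs=100_000, rate_pps=1000,
                            fm_hz=100.0, mod_depth=1.0, itd_s=0.0):
            """100-Hz sinusoidally amplitude-modulated 1000-pps pulse train;
            itd_s delays the whole train to impose a whole-waveform ITD."""
            t = np.arange(int(dur_s * fs)) / fs
            train = np.zeros_like(t)
            pulse_times = np.arange(itd_s, dur_s, 1.0 / rate_pps)
            idx = np.round(pulse_times * fs).astype(int)
            train[idx[(idx >= 0) & (idx < t.size)]] = 1.0
            # Envelope with modulation depth m: (1 + m sin(2 pi fm t)) / (1 + m).
            env = (1.0 + mod_depth * np.sin(2 * np.pi * fm_hz * (t - itd_s))) / (1.0 + mod_depth)
            return train * env

        left = sam_pulse_train(itd_s=0.0)
        right = sam_pulse_train(itd_s=500e-6)   # 500-us envelope ITD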

    A Decidable Characterization of a Graphical Pi-calculus with Iterators

    This paper presents the Pi-graphs, a visual paradigm for the modelling and verification of mobile systems. The language is a graphical variant of the Pi-calculus with iterators to express non-terminating behaviors. The operational semantics of Pi-graphs use ground notions of labelled transition and bisimulation, which means standard verification techniques can be applied. We show that bisimilarity is decidable for the proposed semantics, a result obtained thanks to an original notion of causal clock as well as the automatic garbage collection of unused names. Comment: In Proceedings INFINITY 2010, arXiv:1010.611
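
    As a rough illustration of the process syntax involved (the constructor names and this textual Python encoding are illustrative assumptions; the paper works with a graphical syntax, not this one), a core pi-calculus grammar extended with an iterator for non-terminating behavior might be encoded as:

        from dataclasses import dataclass
        from typing import Union

        @dataclass
        class Nil:            # 0: the inert process
            pass

        @dataclass
        class Send:           # x<y>.P: send name y on channel x
            chan: str
            msg: str
            cont: "Proc"

        @dataclass
        class Recv:           # x(y).P: receive a name into y on channel x
            chan: str
            var: str
            cont: "Proc"

        @dataclass
        class Par:            # P | Q: parallel composition
            left: "Proc"
            right: "Proc"

        @dataclass
        class New:            # (nu x) P: name restriction
            name: str
            body: "Proc"

        @dataclass
        class Iter:           # *P: iterator, expressing repeatable behavior
            body: "Proc"

        Proc = Union[Nil, Send, Recv, Par, New, Iter]

        # A non-terminating echo server: repeatedly receive on "req", reply on "resp".
        server = Iter(Recv("req", "x", Send("resp", "x", Nil())))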

    Free submonoids and minimal ω-generators of R^ω

    Let A be an alphabet and let R be a language in A^+. An ω-generator of R^ω is a language G such that G^ω = R^ω. The language Stab(R^ω) = {u ∈ A^* : u R^ω ⊆ R^ω} is a submonoid of A^*. We give results concerning the ω-generators for the case when Stab(R^ω) is a free submonoid, which are not available in the general case. In particular, we prove that every ω-generator of R^ω contains at least one minimal ω-generator of R^ω. Furthermore, these minimal ω-generators are codes. We also characterize the ω-languages having only finite languages as minimal ω-generators. Finally, we characterize the ω-languages ω-generated by finite prefix codes.
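
    In standard notation (with A^* the finite words over A, A^+ the nonempty ones, and R^ω the infinite concatenations of words of R), the abstract's two central definitions read:

        \[
          G \text{ is an } \omega\text{-generator of } R^{\omega}
          \iff G^{\omega} = R^{\omega},
          \qquad
          \operatorname{Stab}(R^{\omega}) = \{\, u \in A^{*} : u\,R^{\omega} \subseteq R^{\omega} \,\}.
        \]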

    Decoding neural responses to temporal cues for sound localization

    The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001
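
    A toy contrast between the two read-outs described above can make the point concrete (tuning-curve shapes, the Poisson noise model, and all parameter values below are illustrative assumptions, not the paper's model):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy population: 50 cells with heterogeneous ITD tuning.
        n_cells, n_trials = 50, 200
        pref = rng.uniform(-250e-6, 250e-6, n_cells)     # preferred ITDs (s)

        def rates(itd):
            # Gaussian tuning curves (expected spike count per trial).
            return 20 * np.exp(-0.5 * ((itd - pref) / 100e-6) ** 2) + 2

        true_itd = 80e-6
        spikes = rng.poisson(rates(true_itd), (n_trials, n_cells))

        grid = np.linspace(-250e-6, 250e-6, 501)
        tuning = np.array([rates(i) for i in grid])      # (n_points, n_cells)
        summed = tuning.sum(axis=1)

        # Read-out 1: pooled activity -- map the grand total spike count back
        # through the summed tuning curve (discards which cell fired).
        pooled_est = grid[np.argmin(np.abs(summed - spikes.sum(axis=1).mean()))]

        # Read-out 2: pattern decoding -- Poisson maximum likelihood using each
        # cell's tuning, i.e. exploiting the heterogeneity across cells.
        log_lik = spikes.mean(axis=0) @ np.log(tuning).T - summed
        pattern_est = grid[np.argmax(log_lik)]

        print(f"pooled: {pooled_est * 1e6:+.0f} us, pattern: {pattern_est * 1e6:+.0f} us")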

    Heat Transfer Mechanisms in Porous Materials and Contemporary Problems in Thermophysical Properties Investigations: Analyses and Solutions

    This article is an overview of topical problems in the investigation of thermophysical properties and the development of a database for porous materials. Determination of both apparent (measured) and true thermophysical properties is discussed, taking into account combined heat and mass transfer, latent-heat effects during chemical and physical transformations, and structural changes. The approaches to solving these problems are demonstrated for several classes of materials: industrial refractories, ceramics, and highly porous insulation; moist materials and materials undergoing phase, chemical, and structural transformations; and materials semitransparent to heat radiation. The approaches used in developing a thermophysical properties database combine theoretical and experimental methods. The analysis, generalization, and extrapolation of available reference data can be conducted based on models for the classical mechanisms (conduction, heat radiation, gas convection) and for additional (novel) mechanisms and processes affecting the apparent thermophysical properties. The novel heat transfer mechanisms include: heterogeneous heat and mass transfer processes occurring in pores at grain boundaries and in cracks, in particular surface segregation and diffusion of impurities on pore surfaces and transport of gases produced by chemical reactions, evaporation, and sublimation; and microstructure changes due to non-uniform thermal expansion of particles and grains, caused by the mismatch of thermal expansion coefficients of different phases in the material and by anisotropic thermal expansion of crystals.
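
    One generic way to frame the apparent/true distinction for porous media is an additive decomposition of the apparent thermal conductivity (a common textbook form, used here only for orientation; it is not necessarily the article's own model):

        \[
          \lambda_{\mathrm{app}}(T)
          = \lambda_{\mathrm{cond}}(T) + \lambda_{\mathrm{rad}}(T)
          + \lambda_{\mathrm{conv}}(T) + \lambda_{\mathrm{extra}}(T),
        \]

    where λ_extra(T) collects the additional mechanisms listed above: heterogeneous heat and mass transfer in pores and cracks, impurity segregation and surface diffusion, transport of gases from reactions, evaporation, and sublimation, and microstructure changes from mismatched thermal expansion.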

    Hemispheric asymmetry of endogenous neural oscillations in young children: implications for hearing speech in noise

    Speech signals contain information at hierarchical time scales, ranging from short-duration (e.g., phonemes) to long-duration cues (e.g., syllables, prosody). A theoretical framework for understanding how the brain processes this hierarchy suggests that hemispheric lateralization enables specialized tracking of acoustic cues at different time scales, with the left and right hemispheres sampling at short (25 ms; 40 Hz) and long (200 ms; 5 Hz) periods, respectively. In adults, both speech-evoked and endogenous cortical rhythms are asymmetrical: low-frequency rhythms predominate in right auditory cortex, and high-frequency rhythms in left auditory cortex. It is unknown, however, whether endogenous resting-state oscillations are similarly lateralized in children. We investigated cortical oscillations in children (3–5 years; N = 65) at rest and tested the hypotheses that this temporal asymmetry is evident early in life and facilitates recognition of speech in noise. We found a systematic pattern of increasing leftward asymmetry for higher-frequency oscillations; this pattern was more pronounced in children who better perceived words in noise. The observed connection between left-biased cortical oscillations in phoneme-relevant frequencies and speech-in-noise perception suggests that hemispheric specialization of endogenous oscillatory activity may support speech processing in challenging listening environments, and that this infrastructure is present during early childhood.
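
    As an illustration of the kind of asymmetry measure implied above (the study's actual analysis pipeline is not given here; the function name, Welch parameters, and band edges are assumptions), a band-power lateralization index over homologous left/right channels might look like:

        import numpy as np
        from scipy.signal import welch

        def lateralization_index(left_ch, right_ch, fs, band):
            """(L - R) / (L + R) band-power asymmetry between homologous
            left/right channels; positive values indicate leftward dominance."""
            lo, hi = band
            f, p_left = welch(left_ch, fs=fs, nperseg=int(2 * fs))
            _, p_right = welch(right_ch, fs=fs, nperseg=int(2 * fs))
            sel = (f >= lo) & (f <= hi)
            left_pow, right_pow = p_left[sel].sum(), p_right[sel].sum()
            return (left_pow - right_pow) / (left_pow + right_pow)

        # Bands tied to the abstract's time scales: ~5 Hz (200 ms) vs ~40 Hz (25 ms),
        # e.g. lateralization_index(eeg_left, eeg_right, fs=500, band=(35, 45))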

    Signal Transmission in the Auditory System

    Contains table of contents for Section 3 and reports on four research projects.

    National Institutes of Health Grant R01 DC00194
    National Institutes of Health Grant P01 DC00119
    National Science Foundation Grant IBN 96-04642
    W.M. Keck Foundation Career Development Professorship
    National Institutes of Health Grant R01 DC00238
    Thomas and Gerd Perkins Award Professorship
    Alfred P. Sloan Foundation Instrumentation Grant
    John F. and Virginia B. Taplin Award in Health Sciences and Technology
    National Institutes of Health/National Institute of Deafness and Other Communication Disorders
    National Institutes of Health/National Institute of Deafness and Other Communication Disorders Grant PO1 DC0011

    Benefits to speech perception in noise from the binaural integration of electric and acoustic signals in simulated unilateral deafness

    Objectives: This study used vocoder simulations with normal-hearing (NH) listeners to (a) measure their ability to integrate speech information from an NH ear and a simulated cochlear implant (CI), and (b) investigate whether binaural integration is disrupted by a mismatch in the delivery of spectral information between the ears arising from a misalignment in the mapping of frequency to place.

    Design: Eight NH volunteers participated in the study and listened to sentences embedded in background noise via headphones. Stimuli presented to the left ear were unprocessed. Stimuli presented to the right ear (referred to as the CI-simulation ear) were processed using an 8-channel noise vocoder with one of three processing strategies. An Ideal strategy simulated a frequency-to-place map across all channels that matched the delivery of spectral information between the ears. A Realistic strategy created a misalignment in the mapping of frequency to place in the CI-simulation ear, where the size of the mismatch between the ears varied across channels. Finally, a Shifted strategy imposed a similar degree of misalignment in all channels, resulting in a consistent mismatch between the ears across frequency. The ability to report key words in sentences was assessed under monaural and binaural listening conditions and at signal-to-noise ratios (SNRs) established by estimating speech-reception thresholds in each ear alone. The SNRs ensured that the monaural performance of the left ear never exceeded that of the CI-simulation ear. Binaural integration advantages were calculated by comparing binaural performance with monaural performance using the CI-simulation ear alone. Thus, these advantages reflected the additional use of the experimentally constrained left ear and were not attributable to better-ear listening.

    Results: Binaural performance was as accurate as, or more accurate than, monaural performance with the CI-simulation ear alone. When both ears supported a similar level of monaural performance (50%), binaural integration advantages were found regardless of whether a mismatch was simulated or not. When the CI-simulation ear supported a superior level of monaural performance (71%), evidence of binaural integration was absent when a mismatch was simulated using both the Realistic and Shifted processing strategies. This absence of integration could not be accounted for by ceiling effects or by changes in SNR.

    Conclusions: If generalizable to unilaterally deaf CI users, the results of the current simulation study suggest that benefits to speech perception in noise can be obtained by integrating information from an implanted ear and a normal-hearing ear. A mismatch in the delivery of spectral information between the ears, due to a misalignment in the mapping of frequency to place, may disrupt binaural integration in situations where both ears cannot support a similar level of monaural speech understanding. Previous studies that measured the speech perception of unilaterally deaf individuals after cochlear implantation, but with non-individualized frequency-to-electrode allocations, may therefore have underestimated the potential benefits of providing binaural hearing. However, it remains unclear whether the size and nature of the potential incremental benefits from individualized allocations are sufficient to justify the time and resources required to derive them from cochlear imaging or pitch-matching tasks.
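
    A bare-bones version of the noise-vocoder processing described in the Design might look as follows (band edges, filter order, envelope extraction, and the use of a uniform octave shift to mimic the Shifted strategy are all assumptions; the study's Ideal and Realistic maps are not reproduced here):

        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=8000.0, shift_oct=0.0):
            """8-channel noise vocoder: band-pass analysis, envelope extraction,
            and envelope-modulated noise carriers. shift_oct moves the synthesis
            bands upward by a fixed number of octaves relative to the analysis
            bands, a crude stand-in for a frequency-to-place misalignment."""
            edges = np.geomspace(lo, hi, n_channels + 1)
            out = np.zeros(len(x), dtype=float)
            for k in range(n_channels):
                ana = butter(4, edges[k:k + 2], 'bandpass', fs=fs, output='sos')
                env = np.abs(hilbert(sosfilt(ana, x)))       # channel envelope
                syn_edges = np.clip(edges[k:k + 2] * 2.0 ** shift_oct, 1.0, fs / 2 - 1.0)
                syn = butter(4, syn_edges, 'bandpass', fs=fs, output='sos')
                noise = np.random.default_rng(k).standard_normal(len(x))
                out += env * sosfilt(syn, noise)             # modulated carrier
            return out / (np.max(np.abs(out)) + 1e-12)

        # e.g. matched map: noise_vocode(speech, fs=22050, shift_oct=0.0)
        #      shifted map: noise_vocode(speech, fs=22050, shift_oct=0.5)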