Auditory training strategies for adult users of cochlear implants
There has been growing interest recently in whether computer-based training can improve speech perception among users of cochlear implants (Fu et al., 2005; Oba et al., 2011; Ingvalson et al., 2013). This paper reports a series of experiments which first evaluated the effectiveness of different training strategies with normal-hearing participants who listened to noise-vocoded speech, before conducting a small-scale study with users of cochlear implants. Our vocoder studies revealed (1) that 'High-Variability' training led to greater generalisation to new talkers than training with a single talker, and (2) that word- and sentence-based training materials led to greater improvements than an approach based on phonemes in nonsense syllables. Informed by these findings, we evaluated the effectiveness of a computer-based training package that included word- and sentence-based tasks, with materials recorded by 20 talkers. We found good compliance with the training protocol, with 8 out of the 11 participants completing 15 hours of training as instructed. Following training, there was a significant improvement on a consonant test, but in general the improvements were small, highly variable, and not statistically significant. A large-scale randomised controlled trial is needed before we can be confident that computer-based auditory training is worthwhile for users of cochlear implants.
Comparison of word-, sentence-, and phoneme-based training strategies in improving the perception of spectrally-distorted speech
Purpose: To compare the effectiveness of three self-administered strategies for auditory training that might improve speech perception by adult users of cochlear implants. The strategies are based, respectively, on discriminating isolated words, words in sentences, and phonemes in nonsense syllables. Method: Participants were 18 normally-hearing adults who listened to speech processed by a noise-excited vocoder to simulate the information provided by a cochlear implant. They were assigned randomly to word-, sentence-, or phoneme-based training and underwent nine 20-minute training sessions on separate days over a 2-3-week period. The effectiveness of training was assessed as the improvement in accuracy of discriminating vowels and consonants, and identifying words in sentences, relative to participants' best performance in repeated tests prior to training. Results: Word- and sentence-based training led to improvements in the ability to identify words in sentences that were significantly larger than the improvements produced by phoneme-based training. There were no significant differences between the effectiveness of word- and sentence-based training. No significant improvements in consonant or vowel discrimination were found for the sentence- or phoneme-based training groups, but some improvements were found for the word-based training group. Conclusions: The word- and sentence-based training strategies were more effective than the phoneme-based strategy at improving the perception of spectrally-distorted speech.
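The noise-excited vocoder used in these studies replaces the fine structure of speech in each frequency band with noise while preserving each band's temporal envelope. A minimal sketch follows; the channel count, band edges, and sample rate are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def _envelope(x):
    """Temporal envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def noise_vocode(signal, fs=16000, n_channels=8, lo=100.0, hi=7000.0):
    """Replace each band's fine structure with noise, keeping its envelope."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(lo, hi, n_channels + 1)     # log-spaced band edges
    sig_spec = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        mask = (freqs >= f1) & (freqs < f2)
        band = np.fft.irfft(sig_spec * mask, n)        # band-passed speech
        carrier = np.fft.irfft(noise_spec * mask, n)   # band-limited noise
        out += _envelope(band) * carrier               # envelope-modulated noise
    return out
```

More channels preserve more spectral detail; reducing the channel count makes the simulation of cochlear-implant listening more severe.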
Matching novel face and voice identity using static and dynamic facial images
Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face-voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced-choice matching task. They either heard a voice and then saw two faces, or saw a face and then heard two voices. Face-voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face-voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face-voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face-voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face-voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.
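"Above chance" in a two-alternative forced-choice task means accuracy reliably exceeding 50%. The paper analyzed its data with multilevel modeling; as a simpler illustration of the underlying comparison, an exact one-sided binomial test against chance can be sketched as follows (the function name and example counts are hypothetical):

```python
from math import comb

def binomial_p_above_chance(correct, trials, chance=0.5):
    """Exact one-sided p-value: P(X >= correct) under Binomial(trials, chance)."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# e.g. 60 correct out of 100 two-alternative trials yields p < .05,
# so that performance would count as above chance.
```

A multilevel model goes further than this by letting accuracy vary across participants and stimulus identities, which is why the authors note that some people "look and sound more similar than others".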
Searching for a talking face: the effect of degrading the auditory signal
Previous research (e.g. McGurk and MacDonald, 1976) suggests that faces and voices are bound automatically, but recent evidence suggests that attention is involved in the task of searching for a talking face (Alsius and Soto-Faraco, 2011). We hypothesised that the processing demands of the stimuli may affect the amount of attentional resources required, and investigated what effect degrading the auditory stimulus had on the time taken to locate a talking face. Twenty participants were presented with between 2 and 4 faces articulating different sentences, and had to decide which of these faces matched the sentence that they heard. The results showed that in the least demanding auditory condition (clear speech in quiet), search times did not significantly increase when the number of faces increased. However, when speech was presented in background noise or was processed to simulate the information provided by a cochlear implant, search times increased as the number of faces increased. Thus, it seems that the amount of attentional resources required varies according to the processing demands of the auditory stimuli, and when processing load is increased then faces need to be individually attended to in order to complete the task. Based on these results we would expect cochlear-implant users to find the task of locating a talking face more attentionally demanding than normal-hearing listeners.
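The key dependent measure here, how steeply search time grows with the number of faces, is conventionally summarised as a set-size slope (ms of extra search time per additional item); a flat slope suggests parallel processing, a steep slope serial attention. A minimal sketch with hypothetical data:

```python
def search_slope(set_sizes, mean_rts):
    """Ordinary least-squares slope of mean response time (ms) on set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical condition means: flat in clear speech, rising under degradation.
clear = search_slope([2, 3, 4], [900, 905, 910])      # near-zero slope
degraded = search_slope([2, 3, 4], [1000, 1150, 1300])  # steep slope
```

On the pattern reported above, the clear-speech condition would show a near-zero slope and the noise and vocoded conditions substantially positive ones.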
The effect of inserting an inter-stimulus interval in face-voice matching tasks
Voices and static faces can be matched for identity above chance level. No previous face-voice matching experiments have included an inter-stimulus interval (ISI) exceeding 1 second. We tested whether accurate identity decisions rely on high-quality perceptual representations temporarily stored in sensory memory, and therefore whether the ability to make accurate matching decisions diminishes as the ISI increases. In each trial, participants had to decide whether an unfamiliar face and voice belonged to the same person. The face and voice stimuli were presented simultaneously in Experiment 1, there was a 5-second ISI in Experiment 2, and a 10-second ISI in Experiment 3. The results, analysed using multilevel modelling, revealed that static face-voice matching was significantly above chance level only when the stimuli were presented simultaneously (Experiment 1). The overall bias to respond "same identity" weakened as the interval increased, suggesting that this bias is explained by temporal contiguity. Taken together, the findings highlight that face-voice matching performance is reliant on comparing fast-decaying, high-quality perceptual representations. The results are discussed in terms of social functioning.
Does training with amplitude modulated tones affect tone-vocoded speech perception?
Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally-degraded (vocoded) speech in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials; AM rates: 4, 8, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not significantly differ from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
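The training stimuli are sinusoidally amplitude-modulated tones: a carrier whose amplitude rises and falls at the AM rate (4, 8, or 16 Hz). A minimal sketch of how such a stimulus can be generated; the carrier frequency, modulation depth, and duration are illustrative assumptions, not the study's exact values.

```python
import math

def am_tone(fc=1000.0, fm=8.0, depth=1.0, dur=1.0, fs=16000):
    """Samples of a tone at fc Hz, sinusoidally amplitude-modulated at fm Hz.

    depth is the modulation index m in (1 + m*sin(2*pi*fm*t)); m = 0 gives an
    unmodulated tone, which is what an AM-detection task asks listeners to
    distinguish from a modulated one.
    """
    n = int(dur * fs)
    return [(1 + depth * math.sin(2 * math.pi * fm * t / fs))
            * math.sin(2 * math.pi * fc * t / fs) for t in range(n)]
```

An AM-rate discrimination trial would then compare, for example, `am_tone(fm=8.0)` against `am_tone(fm=16.0)`.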
Inquiry pedagogy to promote emerging proportional reasoning in primary students
Proportional reasoning, the capacity to compare situations in relative (multiplicative) rather than absolute (additive) terms, is an important outcome of primary school mathematics. Research suggests that students tend to see comparative situations in additive rather than multiplicative terms, and this thinking can influence their capacity for proportional reasoning in later years. In this paper, excerpts from a classroom case study of a fourth-grade classroom (students aged 9) are presented as the students address an inquiry problem that required proportional reasoning. As the inquiry unfolded, students' additive strategies were progressively seen to shift to proportional thinking to enable them to answer the question that guided their inquiry. In wrestling with the challenges they encountered, their emerging proportional reasoning was supported by the inquiry model used to provide a structure, a classroom culture of inquiry and argumentation, and the proportionality embedded in the problem context.
The Muslim headscarf and face perception: "they all look the same, don't they?"
The headscarf conceals hair and other external features of a head (such as the ears). It therefore may have implications for the way in which such faces are perceived. Images of faces with hair (H) or, alternatively, covered by a headscarf (HS) were used in three experiments. In Experiment 1, participants saw both H and HS faces in a yes/no recognition task in which the external features either remained the same between learning and test (Same) or switched (Switch). Performance was similar for H and HS faces in both the Same and Switch conditions, but in the Switch condition it dropped substantially compared to the Same condition. This implies that the mere presence of the headscarf does not reduce performance; rather, the change between the type of external feature (hair or headscarf) causes the drop in performance. In Experiment 2, which used eye-tracking methodology, it was found that almost all fixations were to internal regions, and that there was no difference in the proportion of fixations to external features between the Same and Switch conditions, implying that the headscarf influenced processing by virtue of extrafoveal viewing. In Experiment 3, similarity ratings of the internal features of pairs of HS faces were higher than those of pairs of H faces, confirming that the internal and external features of a face are perceived as a whole rather than as separate components.
Funder: The Educational Charity of the Federation of Ophthalmic and Dispensing Opticians.
Moving out of the shadows: accomplishing bisexual motherhood
Our qualitative study explored the ways in which bisexual mothers came to identify as such and how they structured their relationships and parenting within hetero-patriarchal society. The experiences of seven self-identified White bisexual women (aged 28 to 56) from across England and the Republic of Ireland were investigated through semi-structured interviews. Participants' children were aged 8 months to 28 years old at the time of their interviews. A thematic narrative analysis highlighted the following issues that participants had encountered in constructing their self-identity: prioritizing children; connecting and disconnecting with others and finessing self-definition; and questioning societal relationship expectations. Nevertheless, participants varied considerably in how each of the themes identified was reflected in their lives, in particular depending upon each participant's interpretation of her local social context. Both motherhood and self-identifying as bisexual gave a sense of meaning and purpose to participants' life stories, although participants sometimes foregrounded their commitment to their children even at a personal cost to their bisexual identity. Using three different theoretical perspectives from feminist theory, queer theory, and life course theory, the narratives analysed revealed ways in which bisexual motherhood not only had been influenced both intentionally and unintentionally by heteronormative expectations but also had directly and indirectly challenged these expectations.
The Diagnostic Potential of Fe Lines Applied to Protostellar Jets
We investigate the diagnostic capabilities of iron lines for tracing the physical conditions of shock-excited gas in jets driven by pre-main-sequence stars. We have analyzed the 3000-25000 Å X-shooter spectra of two jets driven by the pre-main-sequence stars ESO-Hα 574 and Par-Lup 3-4. Both spectra are very rich in [Fe II] lines over the whole spectral range; in addition, lines from [Fe III] are detected in the ESO-Hα 574 spectrum. Non-local thermal equilibrium codes solving the equations of statistical equilibrium, along with codes for the ionization equilibrium, are used to derive the gas excitation conditions of electron temperature and density and fractional ionization. An estimate of the iron gas-phase abundance is provided by comparing the iron line emissivity with that of neutral oxygen at 6300 Å. The [Fe II] line analysis indicates that the jet driven by ESO-Hα 574 is, on average, colder (T_e ≈ 9000 K), less dense (n_e ≈ 2 × 10^4 cm^-3), and more ionized (x_e ≈ 0.7) than the Par-Lup 3-4 jet (T_e ≈ 13,000 K, n_e ≈ 6 × 10^4 cm^-3, x_e < 0.4), even if the existence of a higher-density component (n_e ≈ 2 × 10^5 cm^-3) is probed by the [Fe III] and [Fe II] ultraviolet lines. The physical conditions derived from the iron lines are compared with shock models, suggesting that the shock at work in ESO-Hα 574 is faster and likely more energetic than the Par-Lup 3-4 shock. This latter feature is confirmed by the high percentage of gas-phase iron measured in ESO-Hα 574 (50%-60% of its solar abundance, in comparison with less than 30% in Par-Lup 3-4), which testifies that the ESO-Hα 574 shock is powerful enough to partially destroy the dust present inside the jet. This work demonstrates that a multiline Fe analysis can be effectively used to probe the excitation and ionization conditions of the gas in a jet without any assumption on ionic abundances. The main limitation on the diagnostics resides in the large uncertainties of the atomic data, which, however, can be overcome through a statistical approach involving many lines.
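The statistical-equilibrium calculation mentioned above balances collisional excitation against radiative and collisional de-excitation, which is what makes the line ratios sensitive to electron density. For a hypothetical two-level ion (the rate values below are illustrative placeholders, not the actual [Fe II] atomic data), the balance reduces to a closed form:

```python
def level_ratio(n_e, q_lu, q_ul, A_ul):
    """Upper/lower population ratio for a two-level atom in statistical
    equilibrium: collisional excitation up (n_e * q_lu) is balanced by
    spontaneous decay (A_ul) plus collisional de-excitation (n_e * q_ul)."""
    return n_e * q_lu / (A_ul + n_e * q_ul)

# The critical density n_crit = A_ul / q_ul separates the low-density regime,
# where line emissivity per ion grows with n_e, from the thermalized regime,
# where the ratio saturates at q_lu / q_ul.
```

Solving the analogous system for the many levels of Fe+ simultaneously, over a grid of (T_e, n_e), is how the quoted temperatures and densities are extracted from the observed line ratios.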
