
    High or low? Comparing high and low-variability phonetic training in adult and child second language learners

    Background High talker variability (i.e., multiple voices in the input) has been found effective in training nonnative phonetic contrasts in adults. A small number of studies suggest that children also benefit from high-variability phonetic training with some evidence that they show greater learning (more plasticity) than adults given matched input, although results are mixed. However, no study has directly compared the effectiveness of high versus low talker variability in children. Methods Native Greek-speaking eight-year-olds (N = 52), and adults (N = 41) were exposed to the English /i/-/ɪ/ contrast in 10 training sessions through a computerized word-learning game. Pre- and post-training tests examined discrimination of the contrast as well as lexical learning. Participants were randomly assigned to high (four talkers) or low (one talker) variability training conditions. Results Both age groups improved during training, and both improved more while trained with a single talker. Results of a three-interval oddity discrimination test did not show the predicted benefit of high-variability training in either age group. Instead, children showed an effect in the reverse direction—i.e., reliably greater improvements in discrimination following single talker training, even for untrained generalization items, although the result is qualified by (accidental) differences between participant groups at pre-test. Adults showed a numeric advantage for high-variability but were inconsistent with respect to voice and word novelty. In addition, no effect of variability was found for lexical learning. There was no evidence of greater plasticity for phonetic learning in child learners. Discussion This paper adds to the handful of studies demonstrating that, like adults, child learners can improve their discrimination of a phonetic contrast via computerized training. There was no evidence of a benefit of training with multiple talkers, either for discrimination or word learning. 
The results also fail to replicate the greater plasticity in child learners reported in a previous paper (Giannakopoulou, Uther & Ylinen, 2013a). We discuss these results in terms of various differences between the training and test tasks used in the current work and those used in the previous literature.

    The time course of auditory and language-specific mechanisms in compensation for sibilant assimilation

    Models of spoken-word recognition differ on whether compensation for assimilation is language-specific or depends on general auditory processing. English and French participants were taught words that began or ended with the sibilants /s/ and /ʃ/. Both languages exhibit some assimilation in sibilant sequences (e.g., /s/ becomes like [ʃ] in dress shop and classe chargée), but they differ in the strength and predominance of anticipatory versus carryover assimilation. After training, participants were presented with novel words embedded in sentences, some of which contained an assimilatory context either preceding or following. A continuum of target sounds ranging from [s] to [ʃ] was spliced into the novel words, representing a range of possible assimilation strengths. Listeners' perceptions were examined using a visual-world eyetracking paradigm in which the listener clicked on pictures matching the novel words. We found two distinct language-general context effects: a contrastive effect when the assimilating context preceded the target, and flattening of the sibilant categorization function (increased ambiguity) when the assimilating context followed. Furthermore, we found that English but not French listeners were able to resolve the ambiguity created by the following assimilatory context, consistent with their greater experience with assimilation in this context. The combination of these mechanisms allows listeners to deal flexibly with variability in speech forms.
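The two context effects described above can be pictured with a logistic categorization function over the [s]–[ʃ] continuum. The sketch below is an illustration only: the functional form and all parameter values are assumptions for exposition, not fits to the study's data.

```python
import math

# Illustrative model of the two language-general context effects, using
# a logistic psychometric function P("sh" response) over an [s]-[sh]
# continuum x in [0, 1]. All parameter values are made up.

def p_sh(x, boundary=0.5, slope=10.0):
    """Probability of a "sh" response at continuum step x."""
    return 1.0 / (1.0 + math.exp(-slope * (x - boundary)))

# Contrastive effect: a preceding assimilatory context shifts the
# category boundary, so the same midpoint token sounds more "sh"-like.
neutral = p_sh(0.5)
contrastive = p_sh(0.5, boundary=0.4)

# Flattening: a following assimilatory context reduces the slope, so
# off-midpoint tokens become more ambiguous (responses nearer 0.5).
steep = p_sh(0.7)
flattened = p_sh(0.7, slope=3.0)

print(neutral, contrastive)   # boundary shift raises P("sh") at midpoint
print(steep, flattened)       # flattening pulls responses toward 0.5
```

On this picture, a boundary shift changes *which* category a token falls into, while a slope change degrades *how confidently* any token is categorized; the abstract's finding is that only experienced listeners (English, for carryover assimilation) can undo the second kind of distortion.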

    Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.
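The normative contrast at issue (reliability weighting on continuous dimensions versus categorical ones) can be sketched in a few lines. This is a hedged illustration with made-up variances, not the paper's fitted model: in the categorical case, each cue's effective variance is taken as the sum of its sensory variance and the within-category ("environmental") variance on that cue.

```python
# Hedged sketch of normative cue weighting. sigma values are
# illustrative assumptions, not data from the study.

def cue_weights(var_a, var_v):
    """Reliability-proportional weights for an auditory and a visual cue."""
    rel_a, rel_v = 1.0 / var_a, 1.0 / var_v
    total = rel_a + rel_v
    return rel_a / total, rel_v / total

# Continuous-dimension model: only sensory variance matters.
w_a, w_v = cue_weights(var_a=1.0**2, var_v=2.0**2)

# Categorical model: effective variance per cue is sensory variance
# plus the within-category variance along that cue's dimension.
w_a_cat, w_v_cat = cue_weights(var_a=1.0**2 + 3.0**2,
                               var_v=2.0**2 + 0.5**2)

print(w_a, w_v)          # auditory cue dominates under sensory noise alone
print(w_a_cat, w_v_cat)  # category variability can reverse the ordering
```

The point of the toy numbers: a cue that is sensorily precise (low sensory variance) can still deserve a low weight if the task-relevant categories vary a lot along that cue's dimension, which is exactly the departure from the single-cue-calibrated model the abstract describes.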

    Auditory perceptual performance of children in the identification of contrasts between stressed vowels

    Purpose: To assess the auditory perceptual performance of children in a task of identification of vowel contrasts, to classify which phonemes and vowel contrasts pose higher or lower degrees of difficulty, and to verify the influence of age on this performance. Methods: Recordings of the auditory perceptual performance of 66 children in an identification task using the software Perception Evaluation Auditive & Visuelle (PERCEVAL) were selected from a database. The task consisted of presenting sound stimuli through headphones to children, who would then choose, from two pictures arranged on the computer screen, the one corresponding to the word they heard. The time between the auditory stimulus and the child's response was automatically computed by the software. Results: Perceptual accuracy was 88%, and we found a positive correlation with age. Response time was significantly longer for incorrect answers than for correct answers (p=0.00). Different degrees of similarity in auditory perception were observed, with front vowels showing greater similarity than back vowels. Errors tended to run from non-peripheral to peripheral vowels, which suggests that the latter may serve as a reference or perceptual anchor. Conclusion: The auditory perceptual ability concerning the identification of vowel contrasts is not yet established in the age group studied. The auditory perception of vowel contrasts develops gradually and asymmetrically, as the order of acquisition in production and in perception was not always the same. Support: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP: 11/23121-2; 13/00911-) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Universidade Estadual Paulista "Júlio de Mesquita Filho", School of Philosophy and Sciences.

    Different Responses to Altered Auditory Feedback in Younger and Older Adults Reflect Differences in Lexical Bias

    Purpose Previous work has found that both young and older adults exhibit a lexical bias in categorizing speech stimuli. In young adults, this has been argued to be an automatic influence of the lexicon on perceptual category boundaries. Older adults exhibit more top-down biases than younger adults, including an increased lexical bias. We investigated the nature of the increased lexical bias using a sensorimotor adaptation task designed to evaluate whether automatic processes drive this bias in older adults. Method A group of older adults (n = 27) and younger adults (n = 35) participated in an altered auditory feedback production task. Participants produced target words and nonwords under altered feedback that affected the first formant of the vowel. There were two feedback conditions that affected the lexical status of the target, such that target words were shifted to sound more like nonwords (e.g., less-liss) and target nonwords to sound more like words (e.g., kess-kiss). Results A mixed-effects linear regression was used to investigate the magnitude of compensation to altered auditory feedback between age groups and lexical conditions. Over the course of the experiment, older adults compensated (by shifting their production of the first formant) more to altered auditory feedback when producing words that were shifted toward nonwords (less-liss) than when producing nonwords that were shifted toward words (kess-kiss). This is in contrast to younger adults, who compensated more to nonwords that were shifted toward words than to words that were shifted toward nonwords. Conclusion We found no evidence that the increased lexical bias previously observed in older adults is driven by a greater sensitivity to top-down lexical influence on perceptual category boundaries. We suggest the increased lexical bias in older adults is driven by postperceptual processes that arise as a result of age-related cognitive and sensory changes.
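As a rough illustration of the dependent measure in paradigms like this one, compensation to an F1 perturbation is commonly quantified as the produced-F1 change relative to baseline, signed against the applied shift. The sketch below uses simulated values (not the study's data); the 30% compensation rate and noise level are arbitrary assumptions.

```python
import numpy as np

# Illustration of a compensation measure for altered auditory feedback.
# All values are simulated; nothing here comes from the study itself.
rng = np.random.default_rng(0)

baseline_f1 = 600.0      # Hz, illustrative vowel F1
applied_shift = +100.0   # Hz, feedback perturbation heard by the speaker

# Simulated produced F1 over 50 trials: the speaker partially opposes
# the shift (here, by an assumed 30%), plus trial-to-trial noise.
produced_f1 = baseline_f1 - 0.3 * applied_shift + rng.normal(0, 5, size=50)

# Compensation: baseline minus produced F1, signed against the shift,
# so positive values mean the speaker opposed the perturbation.
compensation = (baseline_f1 - produced_f1) * np.sign(applied_shift)
print(compensation.mean())  # roughly 30 Hz, i.e., ~30% compensation
```

Per-trial values like these, grouped by participant, age group, and lexical-shift condition, are the kind of input the abstract's mixed-effects regression would operate over.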