38 research outputs found

    Learning multiple rules simultaneously: affixes are more salient than reduplications

    Language learners encounter numerous opportunities to learn regularities, but need to decide which of these regularities to learn, because some are not productive in their native language. Here, we present an account of rule learning based on perceptual and memory primitives (Endress, Dehaene-Lambertz, & Mehler, 2007; Endress, Nespor, & Mehler, 2009), suggesting that learners preferentially learn regularities that are more salient to them, and that the pattern of salience reflects the frequency of language features across languages. We contrast this view with previous artificial grammar learning research, which suggests that infants “choose” the regularities they learn based on rational, Bayesian criteria (Frank & Tenenbaum, 2011; Gerken, 2006, 2010). In our experiments, adult participants listened to syllable strings starting with a syllable reduplication and always ending with the same “affix” syllable, or to syllable strings starting with this “affix” syllable and ending with the “reduplication.” Both affixation and reduplication are frequently used for morphological marking across languages. We find three crucial results. First, participants learned both regularities simultaneously. Second, affixation regularities seemed easier to learn than reduplication regularities. Third, regularities at sequence offsets were easier to learn than regularities at sequence onsets. We show that these results are inconsistent with previous Bayesian rule learning models, but mesh well with the perceptual or memory primitives view. Further, we show that the pattern of salience revealed in our experiments reflects the distribution of regularities across languages. Ease of acquisition might thus be one determinant of the frequency of regularities across languages.
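As an illustration of the two regularities this abstract describes, the sketch below checks whether a syllable string begins with a reduplication or ends with a fixed "affix" syllable. The syllables and the affix `"fo"` are invented for illustration; they are not the study's actual materials.

```python
# Hypothetical fixed "affix" syllable (invented, not from the study).
AFFIX = "fo"

def starts_with_reduplication(syllables):
    # Reduplication at sequence onset: the first two syllables are identical.
    return len(syllables) >= 2 and syllables[0] == syllables[1]

def ends_with_affix(syllables):
    # Fixed affix syllable at sequence offset.
    return len(syllables) >= 1 and syllables[-1] == AFFIX

seq = ["ba", "ba", "du", "ki", "fo"]
print(starts_with_reduplication(seq), ends_with_affix(seq))  # → True True
```

The two checks are independent, mirroring the finding that participants can track both regularities in the same strings simultaneously.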

    Precursors to Natural Grammar Learning: Preliminary Evidence from 4-Month-Old Infants

    When learning a new language, grammar—although difficult—is very important, as grammatical rules determine the relations between the words in a sentence. There is evidence that very young infants can detect rules determining the relation between neighbouring syllables in short syllable sequences. A critical feature of all natural languages, however, is that many grammatical rules concern the dependency relation between non-neighbouring words or elements in a sentence, e.g. between an auxiliary and a verb inflection, as in “is singing.” Thus, the issue of when and how children begin to recognize such non-adjacent dependencies is fundamental to our understanding of language acquisition. Here, we use brain potential measures to demonstrate that the ability to recognize dependencies between non-adjacent elements in a novel natural language is observable by the age of 4 months. Brain responses indicate that 4-month-old German infants discriminate between grammatical and ungrammatical dependencies in auditorily presented Italian sentences after only brief exposure to correct sentences of the same type. As the grammatical dependencies are realized by phonologically distinct syllables, the present data most likely reflect phonologically based implicit learning mechanisms, which can serve as a precursor to later grammar learning.
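A non-adjacent dependency of the kind mentioned above (an auxiliary licensing a later verb inflection, as in "is singing") can be sketched as a toy grammaticality check. The rule and the example sentences are invented for illustration and are not the study's Italian materials.

```python
def dependency_ok(sentence):
    """Toy non-adjacent dependency: if the auxiliary 'is' occurs,
    some later word must carry the '-ing' inflection."""
    words = sentence.lower().split()
    if "is" not in words:
        return True  # dependency not triggered
    i = words.index("is")
    # Intervening material ("happily") is allowed: the dependency is
    # between non-neighbouring elements.
    return any(w.endswith("ing") for w in words[i + 1:])

print(dependency_ok("the bird is happily singing"))  # → True
print(dependency_ok("the bird is happily sings"))    # → False
```

The intervening adverb shows why such dependencies are harder to track than adjacent ones: the two dependent elements are separated by arbitrary material.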

    Beneficial effects of word final stress in segmenting a new language: evidence from ERPs

    Background: How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the signal, which lacks reliable silent pauses between words, makes this a difficult task. However, there are multiple cues that can be exploited to localize word boundaries and to segment the acoustic signal. In the present study, word stress was manipulated together with statistical information and placed on different syllables within trisyllabic nonsense words to explore the effect of combining these cues in an online word segmentation task. Results: The behavioral results showed that words were segmented better when stress was placed on the final syllable than when it was placed on the middle or first syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which seemed to be sensitive to word stress and its location within words. Conclusion: The results demonstrate that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word-stress cues was identified: stressed syllables elicited larger P2 amplitudes than unstressed ones.
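The statistical cue in such segmentation studies is typically operationalized as the transitional probability between adjacent syllables: it is high within words and dips at word boundaries. A minimal sketch (the syllable stream is invented, and the stress cue is omitted):

```python
from collections import Counter

def transitional_probabilities(stream):
    # Estimate P(next syllable | current syllable) from bigram counts.
    # Low-probability transitions are candidate word boundaries.
    bigrams = Counter(zip(stream, stream[1:]))
    unigrams = Counter(stream[:-1])
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

# Invented trisyllabic nonsense "words": pu-li-ki, be-da-go, tu-pa-ro.
stream = ["pu", "li", "ki", "be", "da", "go", "pu", "li", "ki", "tu", "pa", "ro"]
tps = transitional_probabilities(stream)
print(tps[("pu", "li")])  # within-word transition  → 1.0
print(tps[("ki", "be")])  # across-word transition  → 0.5
```

The within-word transition is deterministic here, while the word-final syllable can be followed by either of the other words, so its transitional probability is lower.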

    Different neurophysiological mechanisms underlying word and rule extraction from speech

    The initial process of identifying words from spoken language and the detection of the more subtle regularities underlying their structure are mandatory processes for language acquisition. Little is known about the cognitive mechanisms that allow us to extract these two types of information, or about their specific time course of acquisition following initial contact with a new language. We report time-related electrophysiological changes that occurred while participants learned an artificial language. These changes strongly correlated with the discovery of the structural rules embedded in the words, were clearly different from those related to word learning, and occurred during the first minutes of exposure. There is a functional distinction in the nature of the electrophysiological signals during acquisition: an increase in negativity (N400) at central electrodes is related to word learning, whereas the development of a frontal positivity (P2) is related to rule learning. In addition, the results of an online implicit test and a post-learning test indicate that, once the rules of the language have been acquired, new words following the rule are processed as words of the language. By contrast, new words violating the rule induce syntax-related electrophysiological responses when inserted online in the stream (an early frontal negativity followed by a late posterior positivity) and clear lexical effects when presented in isolation (N400 modulation). The present study provides direct evidence that the mechanisms that extract words and structural dependencies from continuous speech are functionally segregated. When these mechanisms are engaged, the electrophysiological marker associated with rule learning appears very quickly, during the earliest phases of exposure to a new language.