181 research outputs found

    Neurocognitive mechanisms for processing inflectional and derivational complexity in English

    In the current paper we discuss the mechanisms that underlie the processing of inflectional and derivational complexity in English. We address this issue from a neurocognitive perspective and present evidence from a new fMRI study that the two types of morphological complexity engage the language processing network in different ways. The processing of inflectional complexity selectively activates a left-lateralised frontotemporal system, specialised for combinatorial grammatical computations, while derivational complexity primarily engages a distributed bilateral system, argued to support whole-word, stem-based lexical access. We discuss the implications of our findings for theories of the processing and representation of morphologically complex words.

    Structure, form, and meaning in the mental lexicon: evidence from Arabic.

    Does the organization of the mental lexicon reflect the combination of abstract underlying morphemic units or the concatenation of word-level phonological units? We address these fundamental issues in Arabic, a Semitic language where every surface form is potentially analyzable into abstract morphemic units - the word pattern and the root - and where this view contrasts with stem-based approaches, chiefly driven by linguistic considerations, in which neither roots nor word patterns play independent roles in word formation and lexical representation. Five cross-modal priming experiments examine the processing of morphologically complex forms in the three major subdivisions of the Arabic lexicon - deverbal nouns, verbs, and primitive nouns. The results demonstrate that root and word pattern morphemes function as abstract cognitive entities, operating independently of semantic factors and dissociable from possible phonological confounds, while stem-based approaches consistently fail to accommodate the basic psycholinguistic properties of the Arabic mental lexicon.

    Brain Network Connectivity During Language Comprehension: Interacting Linguistic and Perceptual Subsystems.

    The dynamic neural processes underlying spoken language comprehension require the real-time integration of general perceptual and specialized linguistic information. We recorded combined electro- and magnetoencephalographic measurements of participants listening to spoken words varying in perceptual and linguistic complexity. Combinatorial linguistic complexity processing was consistently localized to left perisylvian cortices, whereas competition-based perceptual complexity triggered distributed activity over both hemispheres. Functional connectivity showed that linguistically complex words engaged a distributed network of oscillations in the gamma band (20-60 Hz), which only partially overlapped with the network supporting perceptual analysis. Both processes enhanced cross-talk between left temporal regions and bilateral pars orbitalis (BA47). The left-lateralized synchrony between temporal regions and pars opercularis (BA44) was specific to the linguistically complex words, suggesting a specific role of left frontotemporal cross-cortical interactions in morphosyntactic computations. Synchronizations in oscillatory dynamics reveal the transient coupling of functional networks that support specific computational processes in language comprehension.

    This work was supported by an EPSRC grant to W.M.-W. (EP/F030061/1), an ERC Advanced Grant (Neurolex) to W.M.-W., and by MRC Cognition and Brain Sciences Unit (CBU) funding to W.M.-W. (U.1055.04.002.00001.01). Computing resources were provided by the MRC-CBU. Funding to pay the Open Access publication charges for this article was provided by the Advanced Investigator Grant (Neurolex) to W.D.M.-W. This is the final published version, which appears at http://dx.doi.org/10.1093/cercor/bhu28

    Conserved Sequence Processing in Primate Frontal Cortex.

    An important aspect of animal perception and cognition is learning to recognize relationships between environmental events that predict others in time, a form of relational knowledge that can be assessed using sequence-learning paradigms. Humans are exquisitely sensitive to sequencing relationships, and their combinatorial capacities, most saliently in the domain of language, are unparalleled. Recent comparative research in human and nonhuman primates has obtained behavioral and neuroimaging evidence for evolutionarily conserved substrates involved in sequence processing. The findings carry implications for the origins of domain-general capacities underlying core language functions in humans. Here, we synthesize this research into a 'ventrodorsal gradient' model, where frontal cortex engagement along this axis depends on sequencing complexity, mapping onto the sequencing capacities of different species.

    Grammatical analysis as a distributed neurobiological function.

    This is the final version of the article. It first appeared from [publisher] via http://dx.doi.org/10.1002/hbm.22696

    Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences -- inflectionally complex words and minimal phrases -- and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in the left middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage.

    Computing resources were provided by the MRC-CBU. Li Su was partly supported by the Cambridge Dementia Biomedical Research Unit.

    Representation of Instantaneous and Short-Term Loudness in the Human Cortex.

    Acoustic signals pass through numerous transforms in the auditory system before perceptual attributes such as loudness and pitch are derived. However, relatively little is known as to exactly when these transformations happen, and where, cortically or sub-cortically, they occur. In an effort to examine this, we investigated the latencies and locations of cortical entrainment to two transforms predicted by a model of loudness perception for time-varying sounds: the transforms were instantaneous loudness and short-term loudness, where the latter is hypothesized to be derived from the former and therefore should occur later in time. Entrainment of cortical activity was estimated from electro- and magneto-encephalographic (EMEG) activity, recorded while healthy subjects listened to continuous speech. There was entrainment to instantaneous loudness bilaterally at 45, 100, and 165 ms, in Heschl's gyrus, dorsal lateral sulcus, and Heschl's gyrus, respectively. Entrainment to short-term loudness was found in both the dorsal lateral sulcus and superior temporal sulcus at 275 ms. These results suggest that short-term loudness is derived from instantaneous loudness, and that this derivation occurs after processing in sub-cortical structures.

    This work was supported by an ERC Advanced Grant (230570, ‘Neurolex’) to WMW, and by MRC Cognition and Brain Sciences Unit (CBU) funding to WMW (U.1055.04.002.00001.01). Computing resources were provided by the MRC-CBU. This is the final version of the article. It first appeared from Frontiers via http://dx.doi.org/10.3389/fnins.2016.0018

    Morphological structure in the Arabic mental lexicon: Parallels between standard and dialectal Arabic.

    The Arabic language is acquired by its native speakers both as a regional spoken Arabic dialect, acquired in early childhood as a first language, and as the more formal variety known as Modern Standard Arabic (MSA), typically acquired later in childhood. These varieties of Arabic show a range of linguistic similarities and differences. Since previous psycholinguistic research in Arabic has primarily used MSA, it remains to be established whether the same cognitive properties hold for the dialects. Here we focus on the morphological level, and ask whether roots and word patterns play similar or different roles in MSA and in the regional dialect known as Southern Tunisian Arabic (STA). In two intra-modal auditory-auditory priming experiments, we found similar results with strong priming effects for roots and patterns in both varieties. Despite differences in the timing and nature of the acquisition of MSA and STA, root and word pattern priming was clearly distinguishable from form-based and semantic-based priming in both varieties. The implications of these results for theories of Arabic diglossia and theories of morphological processing are discussed.

    Tracking cortical entrainment in neural activity: auditory processes in human temporal cortex.

    A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begin in auditory cortex at ~80 ms and subsequently reappear, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down STS toward the temporal pole.

    This work was supported by an EPSRC grant to William D. Marslen-Wilson and Paula Buttery (EP/F030061/1), an ERC Advanced Grant (Neurolex) to William D. Marslen-Wilson, and by MRC Cognition and Brain Sciences Unit (CBU) funding to William D. Marslen-Wilson (U.1055.04.002.00001.01). Computing resources were provided by the MRC-CBU and the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk/). Andrew Liu and Phil Woodland helped with the HTK speech recogniser and Russell Thompson with the Matlab code. We thank Asaf Bachrach, Cai Wingfield, Isma Zulfiqar, Alex Woolgar, Jonathan Peelle, Li Su, Caroline Whiting, Olaf Hauk, Matt Davis, Niko Kriegeskorte, Paul Wright, Lorraine Tyler, Rhodri Cusack, Brian Moore, Brian Glasberg, Rik Henson, Howard Bowman, Hideki Kawahara, and Matti Stenroos for invaluable support and suggestions.

    This is the final published version. The article was originally published in Frontiers in Computational Neuroscience, 10 February 2015 | doi: 10.3389/fncom.2015.0000
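    The core of the lag-search technique described in this abstract is correlating a model-predicted stimulus contour against a neural recording over a range of candidate latencies and taking the best-matching lag. The following is a minimal illustrative sketch of that idea only; the function name `entrainment_lag`, the array inputs, and the sampling rate are assumptions for the example, not the study's actual EMEG source-space pipeline.

    ```python
    import numpy as np

    def entrainment_lag(stimulus, neural, fs, max_lag_ms=350):
        """Return the lag (ms) at which a predicted stimulus contour best
        matches a neural signal, plus the correlation score at that lag.

        stimulus : 1-D array, model-predicted contour (e.g. instantaneous loudness)
        neural   : 1-D array, recorded signal, at least as long as the stimulus
        fs       : sampling rate in Hz, shared by both signals
        """
        max_lag = int(fs * max_lag_ms / 1000)
        # z-score both signals so the dot product behaves like a correlation
        s = (stimulus - stimulus.mean()) / stimulus.std()
        n = (neural - neural.mean()) / neural.std()
        corrs = []
        for lag in range(max_lag + 1):
            shifted = n[lag:lag + len(s)]
            if len(shifted) < len(s):
                break  # neural recording too short for larger lags
            corrs.append(np.dot(s, shifted) / len(s))
        best = int(np.argmax(corrs))
        return best * 1000 / fs, corrs[best]
    ```

    In practice one would compute such a match at every source location and test it against a null distribution; this sketch only shows the lag search itself.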

    Auditory sequence processing reveals evolutionarily conserved regions of frontal cortex in macaques and humans.

    An evolutionary account of human language as a neurobiological system must distinguish between human-unique neurocognitive processes supporting language and evolutionarily conserved, domain-general processes that can be traced back to our primate ancestors. Neuroimaging studies across species may determine whether candidate neural processes are supported by homologous, functionally conserved brain areas or by different neurobiological substrates. Here we use functional magnetic resonance imaging in Rhesus macaques and humans to examine the brain regions involved in processing the ordering relationships between auditory nonsense words in rule-based sequences. We find that key regions in the human ventral frontal and opercular cortex have functional counterparts in the monkey brain. These regions are also known to be associated with initial stages of human syntactic processing. This study raises the possibility that certain ventral frontal neural systems, which play a significant role in language function in modern humans, originally evolved to support domain-general abilities involved in sequence processing.