Attention wins over sensory attenuation in a sound detection task
'Sensory attenuation', i.e., reduced neural responses to self-induced compared with externally generated stimuli, is a well-established phenomenon. However, very few studies have directly compared sensory attenuation with the attention effect, which leads to increased neural responses. In this study, we brought sensory attenuation and attention together in a behavioural auditory detection task in which both effects were quantitatively measured and compared. The classic auditory attention effect of facilitating detection performance was replicated. When attention and sensory attenuation were both present, attentional facilitation decreased but remained significant. The results are discussed in the light of current theories of sensory attenuation.
Audio-visual speech perception: a developmental ERP investigation
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech reflects separable underlying cognitive processes, some of which mature earlier than others over the course of development.
Convergent and divergent fMRI responses in children and adults to increasing language production demands
In adults, patterns of neural activation associated with perhaps the most basic language skill, overt object naming, are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7–13 and 19 young adults who performed a well-normed picture-naming task with three levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults showed greater activation in all naming conditions over the inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production, as well as from developmental changes in brain structure and functional connectivity.
In vivo functional and myeloarchitectonic mapping of human primary auditory areas
In contrast to vision, where retinotopic mapping alone can define areal borders, primary auditory areas such as A1 are best delineated by combining in vivo tonotopic mapping with postmortem cyto- or myeloarchitectonics from the same individual. We combined high-resolution (800 μm) quantitative T1 mapping with phase-encoded tonotopic methods to map the primary auditory areas (A1 and R) within the "auditory core" of human volunteers. We first quantitatively characterize the highly myelinated auditory core in terms of shape, area, cortical depth profile, and position, with our data showing considerable correspondence to postmortem myeloarchitectonic studies, both in cross-participant averages and in individuals. The core region contains two "mirror-image" tonotopic maps oriented along the same axis as observed in macaque and owl monkey. We suggest that these two maps within the core are the human analogues of primate auditory areas A1 and R. The core occupies a much smaller portion of tonotopically organized cortex on the superior temporal plane and gyrus than is generally supposed. This multimodal approach to defining the auditory core will facilitate investigations of structure-function relationships and comparative neuroanatomical studies, and promises new biomarkers for diagnosis and clinical studies.
Extensive tonotopic mapping across auditory cortex is recapitulated by spectrally directed attention and systematically related to cortical myeloarchitecture
Auditory selective attention is vital in natural soundscapes, but it is unclear how attentional focus on the primary dimension of auditory representation - acoustic frequency - might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation both in myeloarchitectonically estimated auditory core and across the majority of tonotopically mapped non-primary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when the best frequency is attended versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate a spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex: strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization.
A bilingual advantage in controlling language interference during sentence comprehension
This study compared the comprehension of syntactically simple and more complex sentences in Italian–English adult bilinguals and monolingual controls, in the presence or absence of sentence-level interference. The task was to identify the agent of the sentence, and we primarily examined response accuracy. The target sentence was signalled by the gender of the speaker, either male or female, and this varied over trials: when the target was spoken in a male voice, the distractor was spoken in a female voice, and vice versa. In contrast to other work showing a bilingual disadvantage in sentence comprehension under conditions of noise, we show that in this task, where voice permits selection of the target, adult bilingual speakers are in fact better able than their monolingual Italian peers to resist sentence-level interference when comprehension demands are high. Within bilingual speakers, we also found that degree of proficiency in English correlated with the ability to resist interference for complex sentences, both when the target and distractor were in Italian and when the target was in English and the distractor in Italian.
Non-invasive laminar inference with MEG: comparison of methods and source inversion algorithms
Magnetoencephalography (MEG) is a direct measure of neuronal current flow; its anatomical resolution is therefore not constrained by physiology but rather by data quality and the models used to explain these data. Recent simulation work has shown that it is possible to distinguish between signals arising in the deep and superficial cortical laminae given accurate knowledge of these surfaces with respect to the MEG sensors. This previous work has focused on a single inversion scheme (multiple sparse priors) and a single global parametric fit metric (free energy). In this paper we use several different source inversion algorithms and both local and global, as well as parametric and non-parametric, fit metrics in order to demonstrate the robustness of the discrimination between layers. We find that only algorithms with some sparsity constraint can successfully be used to make laminar discriminations. Importantly, local t-statistics, global cross-validation, and free energy all provide robust and mutually corroborating metrics of fit. We show that discrimination accuracy is affected by patch size estimates, cortical surface features, and lead field strength, which suggests several possible future improvements to this technique. This study demonstrates the possibility of determining the laminar origin of MEG sensor activity, and thus of directly testing theories of human cognition that involve laminar- and frequency-specific mechanisms. This can now be achieved using recent developments in high-precision MEG, most notably subject-specific head-casts, which allow for significant increases in data quality and therefore anatomically precise MEG recordings.
What underlies the emergence of stimulus- and domain-specific neural responses? Commentary on Hernandez, Claussenius-Kalman, Ronderos, Castilla-Earls, Sun, Weiss, & Young (2018)
Hernandez et al. (2018) provide a welcome historical perspective on, and synthesis of, emergentist theories over the last decades, particularly in their focus on theoretical differences. Here we discuss a number of neuroimaging findings on the character and drivers of seemingly domain-selective neural response preferences, and consider how these might bear on the predictiveness of different emergentist accounts.
Auditory sequence processing reveals evolutionarily conserved regions of frontal cortex in macaques and humans
An evolutionary account of human language as a neurobiological system must distinguish between human-unique neurocognitive processes supporting language and evolutionarily conserved, domain-general processes that can be traced back to our primate ancestors. Neuroimaging studies across species may determine whether candidate neural processes are supported by homologous, functionally conserved brain areas or by different neurobiological substrates. Here we use functional magnetic resonance imaging in rhesus macaques and humans to examine the brain regions involved in processing the ordering relationships between auditory nonsense words in rule-based sequences. We find that key regions in the human ventral frontal and opercular cortex have functional counterparts in the monkey brain. These regions are also known to be associated with initial stages of human syntactic processing. This study raises the possibility that certain ventral frontal neural systems, which play a significant role in language function in modern humans, originally evolved to support domain-general abilities involved in sequence processing.
Mapping the human cortical surface by combining quantitative T1 with retinotopy
We combined quantitative relaxation rate (R1 = 1/T1) mapping, to measure local myelination, with fMRI-based retinotopy. Gray-white and pial surfaces were reconstructed and used to sample R1 at different cortical depths. Like myelination, R1 decreased from deeper to superficial layers. R1 also decreased passing from V1 and MT, to immediately surrounding areas, and then to the angular gyrus. Because high R1 was correlated across the cortex with convex local curvature, the data were first "de-curved". By overlaying R1 and retinotopic maps, we found that many visual area borders were associated with significant R1 increases, including V1, V3A, MT, V6, V6A, V8/VO1, FST, and VIP. Surprisingly, retinotopic MT occupied only the posterior portion of an oval-shaped lateral occipital R1 maximum. R1 maps were reproducible within individuals and comparable between subjects without intensity normalization, enabling multi-center studies of development, aging, and disease progression, as well as structure/function mapping in other modalities.
