1,321 research outputs found
Cross-cultural emotional prosody recognition: Evidence from Chinese and British listeners
This cross-cultural study of emotional tone of voice recognition tests the in-group advantage hypothesis (Elfenbein & Ambady, 2002) employing a quasi-balanced design. Individuals of Chinese and British background were asked to recognise pseudosentences produced by Chinese and British native speakers, displaying one of seven emotions (anger, disgust, fear, happiness, neutral tone of voice, sadness, and surprise). Findings reveal that emotional displays were recognised at rates higher than predicted by chance; however, members of each cultural group were more accurate in recognising the displays communicated by a member of their own cultural group than by a member of the other cultural group. Moreover, the evaluation of error matrices indicates that both culture groups relied on similar mechanisms when recognising emotional displays from the voice. Overall, the study reveals evidence for both universal and culture-specific principles in vocal emotion recognition. © 2013 Taylor & Francis
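The above-chance comparison in such a seven-alternative recognition task is often made with Wagner's (1993) unbiased hit rate, which corrects raw accuracy for response biases visible in the error matrix. A minimal sketch on a hypothetical 7×7 confusion matrix (the counts below are illustrative, not the study's data):

```python
import numpy as np

labels = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

# Illustrative confusion matrix: rows = intended emotion, columns = chosen label.
# 20 trials per emotion, 14 correct, errors spread evenly (made-up counts).
conf = np.full((7, 7), 1)
np.fill_diagonal(conf, 14)

def unbiased_hit_rates(conf):
    """Wagner's (1993) Hu: hits^2 / (stimulus frequency * response frequency)."""
    hits = np.diag(conf).astype(float)
    stim = conf.sum(axis=1)   # how often each emotion was presented
    resp = conf.sum(axis=0)   # how often each label was chosen
    return hits ** 2 / (stim * resp)

hu = unbiased_hit_rates(conf)
chance = 1 / 7  # guessing rate in a seven-alternative task
```

With these illustrative counts every Hu works out to 0.49, well above the 1/7 guessing rate; on real data the stimulus and response marginals differ per emotion, which is exactly the bias Hu corrects for.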
ERP correlates of motivating voices: quality of motivation and time-course matters
Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. ‘You absolutely have to do it my way’ spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. ‘Why don’t we meet again tomorrow’ spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. 
Taken together, the results suggest that motivational and neutral language are processed differently; further, the data suggest that cues signaling pressure and control cannot be ignored and lead to preferential, more in-depth processing.
You 'Have' to Hear This: Using Tone of Voice to Motivate Others.
The present studies explored the role of prosody in motivating others, and applied self-determination theory (Ryan & Deci, 2000) to do so. Initial studies describe patterns of prosody that discriminate motivational speech. Autonomy support was expressed with lower intensity, slower speech rate, and less voice energy in both motivationally laden and neutral (but motivationally primed) sentences. In a follow-up study, participants were able to recognize motivational prosody in semantically neutral sentences, suggesting prosody alone may carry motivational content. Findings from subsequent studies also showed that an autonomy-supportive as compared with a controlling tone facilitated positive personal (perceived choice, lower perceived pressure, and well-being) and interpersonal (closeness to others and prosocial behaviors) outcomes commonly linked to this type of motivation. Results inform both the social psychology (in particular motivation) and psycholinguistic (in particular prosody) literatures and offer a first description of how motivational tone alone can shape listeners' experiences.
Neurophysiological markers of phrasal verb processing: evidence from L1 and L2 speakers
Bilingual Figurative Language Processing is a timely book that provides a much-needed bilingual perspective to the broad field of figurative language. This is the first book of its kind to address how bilinguals acquire, store, and process figurative language, such as idiomatic expressions (e.g., kick the bucket), metaphors (e.g., lawyers are sharks), and irony, and how these tropes might interact in real time across the bilingual's two languages. This volume offers the reader and the bilingual student an overview of the major strands of research, both theoretical and empirical, currently being undertaken in this field of inquiry. At the same time, Bilingual Figurative Language Processing provides readers and undergraduate and graduate students with the opportunity to acquire hands-on experience in the development of psycholinguistic experiments in bilingual figurative language. Each chapter includes a section on suggested student research projects. Selected chapters provide detailed procedures on how to design and develop psycholinguistic experiments
Early and late brain signatures of emotional prosody among individuals with high versus low power
Using ERPs, we explored the relationship between social power and emotional prosody processing. In particular, we investigated differences at early and late processing stages between individuals primed with high or low power. Comparable to previously published findings from nonprimed participants, individuals primed with low power displayed differentially modulated P2 amplitudes in response to different emotional prosodies, whereas participants primed with high power failed to do so. Similarly, participants primed with low power showed differentially modulated amplitudes in response to different emotional prosodies at a later processing stage (late ERP component), whereas participants primed with high power did not. These ERP results suggest that high versus low power leads to emotional prosody processing differences at the early stage associated with emotional salience detection and at a later stage associated with more in-depth processing of emotional stimuli.
Drama in 4 acts - written by Eugène Brieux - translated by Zigány Árpád - directed by Kemény Lajos
Municipal Theatre (Városi Szinház), Debreczen, Saturday, 15 February 1913: with the joint guest appearance of K. Hegyesy Mari and Beregi Oszkár, artists of the Budapest National Theatre. Debreceni Egyetem Egyetemi és Nemzeti Könyvtár
Who's in Control? Proficiency and L1 Influence on L2 Processing
Abstract
We report three reaction time (RT)/event-related brain potential (ERP) semantic priming lexical decision experiments that explore the following in relation to L1 activation during L2 processing: (1) the role of L2 proficiency, (2) the role of sentence context, and (3) the locus of L1 activations (orthographic vs. semantic). All experiments used German (L1) homonyms translated into English (L2) to form prime-target pairs (pine-jaw for Kiefer) to test whether the L1 caused interference in an all-L2 experiment. Both RTs and ERPs were measured on targets. Experiment 1 revealed reversed priming in the N200 component and RTs for low-proficiency learners, but only RT interference for high-proficiency participants. Experiment 2 showed that once the words were processed in sentence context, the low-proficiency participants still showed reversed N200 and RT priming, whereas the high-proficiency group showed no effects. Experiment 3 tested native English speakers with the words in sentence context and showed a null result comparable to the high-proficiency group. Based on these results, we argue that cognitive control relating to translational activation is modulated by (1) L2 proficiency, as the early interference in the N200 was observed only for low-proficiency learners, and (2) sentence context, as it helps high-proficiency learners control L1 activation. As reversed priming was observed in the N200 and not the N400 component, we argue that (3) the locus of the L1 activations was orthographic. Implications in terms of bilingual word recognition and the functional role of the N200 ERP component are discussed.
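The "reversed priming" pattern reported above is conventionally quantified as the unrelated-minus-related RT difference, where a positive value indicates facilitation and a negative value interference. A minimal sketch with hypothetical mean RTs (the millisecond values are made up, not the study's data):

```python
# Hypothetical condition means from a lexical decision task (ms).
rt_ms = {"related": 652.0, "unrelated": 630.0}

# Conventional priming effect: unrelated minus related.
# Positive -> facilitation; negative -> interference ("reversed priming").
priming_effect = rt_ms["unrelated"] - rt_ms["related"]
```

Here the related primes slow responses by 22 ms relative to unrelated primes, the interference signature attributed to unintended L1 (German) activation.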
Perceived Comfort and Blinding Efficacy in Randomised Sham-Controlled Transcranial Direct Current Stimulation (tDCS) Trials at 2 mA in Young and Older Healthy Adults
Background: tDCS studies typically find that: lowest levels of comfort occur at stimulation onset; young adult participants experience less comfort than older participants; and participants' blinding seems effective at low current strengths. At 2 mA conflicting results have been reported, questioning the effectiveness of blinding in sham-controlled paradigms using higher current strengths. Investigator blinding is rarely reported. Objective: Using a protocol with 30 min of 2 mA stimulation we sought to: (a) investigate the level of perceived comfort in young and older adults, ranging in age from 19 to 29 years and 63 to 76 years, respectively; (b) test investigator and participant blinding; (c) assess comfort over a longer stimulation duration; and (d) add to the literature on protocols using 2 mA current strength. Methods: A two-session experiment was conducted in which sham and active stimulation were administered to the frontal cortex at the F8/FP1 sites in a within-subjects manner. Levels of perceived comfort were measured, using a visual analogue scale, at the start and end of stimulation in young and older adults. Post-stimulation, participants and investigators judged whether or not active stimulation was used. Results: Comfort scores were lower at stimulation onset in both age groups. Older adults reported: (i) more comfort than young participants overall; (ii) comparable levels of comfort in sham and active stimulation; and (iii) significantly more comfort than the young participants during active stimulation. Stimulation mode was correctly identified above chance in the second of the two sessions; 65% of all participants correctly identified the stimulation mode, resulting in a statistical trend. Similarly, the experimenter correctly identified stimulation mode significantly above chance, with 62% of all investigator judgements correct across 120 judgements.
Conclusions: Using 2 mA current strength over 30 minutes, tDCS stimulation comfort is lower at stimulation onset in young and older adults and, overall, lower for young participants. Investigators and participants may be able to identify active stimulation at above-chance levels, although accuracy never exceeded 65% for either participants or the experimenter. Further research into blinding efficacy is recommended.
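The "significantly above chance" claim for the investigator can be checked with an exact two-sided binomial test against a 50% guessing rate. A minimal sketch using only the standard library; note that the 74-hit count is inferred from the rounded 62% of 120 judgements reported above, not an exact figure from the study:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of all outcomes
    no more likely than the observed count k under the null rate p."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    cutoff = pmf[k] * (1 + 1e-9)  # tolerance for floating-point ties
    return sum(q for q in pmf if q <= cutoff)

# ~62% of 120 investigator judgements correct -> roughly 74 hits (assumed)
p_investigator = binom_two_sided_p(74, 120)
```

With 74/120 correct the two-sided p-value falls below .05, consistent with the reported above-chance investigator accuracy; exactly 60/120 correct would give p = 1, i.e. perfect agreement with guessing.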
The Neurocognition of Prosody
Prosody is one of the most undervalued components of language, despite the manifold purposes it serves. It can, for instance, help assign the correct meaning to compounds such as “white house” (linguistic function), or help a listener understand how a speaker feels (emotional function). However, brain-based models that take into account the role prosody plays in dynamic speech comprehension are still rare. This is probably because it has proven difficult to fully specify the neurocognitive architecture underlying prosody. This review discusses clinical and neuroscientific evidence regarding both linguistic and emotional prosody. It will become clear that prosody processing is a multistage operation and that its temporally and functionally distinct processing steps are anchored in a functionally differentiated brain network.
Dynamic Facial Expressions Prime the Processing of Emotional Prosody
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in case of emotional prime-target incongruency
