282 research outputs found
Modular Verification for a Class of PLTL Properties
The verification of dynamic properties of a reactive system by model-checking leads to a potential combinatorial explosion of the state space that has to be checked. To deal with this problem, we define a strategy based on local verifications rather than on a global verification. The idea is to split the system into subsystems called modules, and to verify the properties on each module in isolation. We prove for a class of PLTL properties that if a property is satisfied on each module, then it is globally satisfied. We call such properties modular properties. We propose a modular decomposition based on the B refinement process. We present in this paper a common class of dynamic properties of the form G (p -> Q), where `p' is a proposition and `Q' is a simple temporal formula, such as `X q', `F q', or `q U r' (with `q' and `r' being propositions). We prove that these dynamic properties are modular. For these specific patterns, we exhibit syntactic conditions of modularity on their corresponding Büchi automata. These conditions define a larger class which contains other patterns such as `G (p -> X (q U r))'. Finally, we show through the example of an industrial robot that this method is valid in practice.
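The G (p -> Q) patterns above can be made concrete on an execution trace. The following is a minimal, purely illustrative sketch (not the authors' method, which works with Büchi automata over infinite behaviours), assuming a finite trace represented as a list of sets of atomic propositions; it checks the pattern G (p -> F q):

```python
# Illustrative check of the PLTL pattern G (p -> F q) on a finite trace.
# Each state is modelled as the set of atomic propositions holding in it.
# Note: on finite traces this is only an approximation of the infinite-trace
# semantics used in model-checking.

def holds_G_p_implies_F_q(trace, p, q):
    """Return True if every state satisfying p is followed, at that
    position or later, by a state satisfying q."""
    for i, state in enumerate(trace):
        if p in state:
            # F q: q must hold in some state from position i onward
            if not any(q in later for later in trace[i:]):
                return False
    return True

# Example: p holds at index 1 and q holds at index 3, so the property
# is satisfied on this trace.
trace = [{"idle"}, {"p"}, set(), {"q"}]
print(holds_G_p_implies_F_q(trace, "p", "q"))  # True
```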
Vocal Expression of Affective States in Spontaneous Laughter reveals the Bright and the Dark Side of Laughter
Data availability: Data have been made publicly available at Figshare and can be accessed at https://doi.org/10.17633/rd.brunel.15028296. Copyright © The Author(s) 2022. It has been shown that the acoustical signal of posed laughter can convey affective information to the listener. However, because posed and spontaneous laughter differ in a number of significant aspects, it is unclear whether affective communication generalises to spontaneous laughter. To answer this question, we created a stimulus set of 381 spontaneous laughter audio recordings, produced by 51 different speakers, resembling different types of laughter. In Experiment 1, 159 participants were presented with these audio recordings without any further information about the situational context of the speakers and asked to classify the laughter sounds. Results showed that joyful, tickling, and schadenfreude laughter could be classified significantly above chance level. In Experiment 2, 209 participants were presented with a subset of 121 laughter recordings correctly classified in Experiment 1 and asked to rate the laughter according to four emotional dimensions, i.e., arousal, dominance, sender’s valence, and receiver-directed valence. Results showed that laughter types differed significantly in their ratings on all dimensions. Joyful laughter and tickling laughter both showed a positive sender’s valence and receiver-directed valence, whereby tickling laughter had a particularly high arousal. Schadenfreude had a negative receiver-directed valence and a high dominance, thus providing empirical evidence for the existence of a dark side in spontaneous laughter. The present results suggest that with the evolution of human social communication, laughter diversified from the former play signal of non-human primates to a much more fine-grained signal that can serve a multitude of social functions in order to regulate group structure and hierarchy. German Research Foundation (SZ 267/1-1; DP Szameitat).
An agent-based model of the response to angioplasty and bare-metal stent deployment in an atherosclerotic blood vessel
Purpose: While animal models are widely used to investigate the development of restenosis in blood vessels following an intervention, computational models offer another means for investigating this phenomenon. A computational model of the response of a treated vessel would allow investigators to assess the effects of altering certain vessel- and stent-related variables. The authors aimed to develop a novel computational model of restenosis development following an angioplasty and bare-metal stent implantation in an atherosclerotic vessel using agent-based modeling techniques. The presented model is intended to demonstrate the body's response to the intervention and to explore how different vessel geometries or stent arrangements may affect restenosis development. Methods: The model was created on a two-dimensional grid space. It utilizes the post-procedural vessel lumen diameter and stent information as its input parameters. The simulation starting point of the model is an atherosclerotic vessel after an angioplasty and stent implantation procedure. The model subsequently generates the final lumen diameter, percent change in lumen cross-sectional area, time to lumen diameter stabilization, and local concentrations of inflammatory cytokines upon simulation completion. Simulation results were directly compared with the results from serial imaging studies and cytokine level studies in atherosclerotic patients from the relevant literature. Results: The final lumen diameter results were all within one standard deviation of the mean lumen diameters reported in the comparison studies. The overlapping-stent simulations yielded results that matched published trends. The cytokine levels remained within the range of physiological levels throughout the simulations.
Conclusion: We developed a novel computational model that successfully simulated the development of restenosis in a blood vessel following an angioplasty and bare-metal stent deployment based on the characteristics of the vessel cross-section and stent. Further development of this model could ultimately yield a predictive tool to depict patient outcomes and inform treatment options. © 2014 Curtin, Zhou
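To make the agent-based idea tangible, here is a minimal, hypothetical sketch (not the authors' model, which operates on a calibrated two-dimensional grid): wall sites injured by stent struts carry an inflammatory signal that drives local neointimal growth until the signal decays below a threshold. All site counts, rates, and thresholds here are illustrative assumptions, not values from the study.

```python
import random

# Hypothetical 1-D sketch of an agent-based restenosis simulation:
# a ring of lumen-wall sites; sites at stent-strut positions start
# inflamed, inflammation drives local tissue growth, and the signal
# decays each step until it falls below a resolution threshold.

def simulate_restenosis(n_sites=100, strut_positions=(0, 25, 50, 75),
                        decay=0.9, threshold=0.05, seed=1):
    rng = random.Random(seed)
    inflammation = [0.0] * n_sites
    growth = [0.0] * n_sites           # neointimal thickness per site
    for s in strut_positions:          # struts injure the wall locally
        inflammation[s] = 1.0
    steps = 0
    while max(inflammation) > threshold:
        for i in range(n_sites):
            if inflammation[i] > threshold:
                # proliferation proportional to local inflammation,
                # with some per-step stochastic variation
                growth[i] += 0.1 * inflammation[i] * rng.uniform(0.5, 1.5)
            inflammation[i] *= decay   # cytokine signal decays over time
        steps += 1
    mean_lumen_loss = sum(growth) / n_sites
    return steps, mean_lumen_loss
```

Run as `steps, loss = simulate_restenosis()`: the simulation stabilizes once inflammation resolves, mirroring the real model's outputs of time to lumen stabilization and final lumen narrowing, albeit in a toy form.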
Results of a pilot study on the involvement of bilateral inferior frontal gyri in emotional prosody perception: an rTMS study
Background: The right hemisphere may play an important role in paralinguistic features such as the emotional melody in speech. The extent of this involvement, however, is unclear. Imaging studies have shown involvement of both left and right inferior frontal gyri in emotional prosody perception. The present pilot study examined whether these brain areas are critically involved in the processing of emotional prosody and of semantics in 9 healthy subjects. Repetitive transcranial magnetic stimulation was used with a coil centred over the left and right inferior frontal gyri, as localized by neuronavigation based on the subject's MRI. A sham condition was included. An online TMS approach was applied; an emotional language task was completed during stimulation. This computerized task consisted of sentences pronounced by actors. In the semantics condition an emotion (fear, anger or neutral) was expressed in the content, pronounced with a neutral intonation. In the prosody condition the emotion was expressed in the intonation, while the content was neutral. Results: Reaction times in the emotional prosody task condition were significantly longer after rTMS over both the right and the left inferior frontal gyrus as compared to sham stimulation, after controlling for learning effects associated with order of condition. When taking all emotions together, there was no difference in effect on reaction times between right and left stimulation. For the emotion fear, reaction times were significantly longer after stimulating the left inferior frontal gyrus as compared to the right inferior frontal gyrus. Reaction times in the semantics task condition were not significantly different between the three TMS conditions. Conclusions: The data indicate a critical involvement of both the right and the left inferior frontal gyrus in emotional prosody perception. The findings of this pilot study need replication. Future studies should include more subjects and examine whether the left and right inferior frontal gyrus play a differential role and complement each other, e.g. in the integrated processing of linguistic and prosodic aspects of speech, respectively.
Time Course of the Involvement of the Right Anterior Superior Temporal Gyrus and the Right Fronto-Parietal Operculum in Emotional Prosody Perception
In verbal communication, not only the meaning of the words convey information, but also the tone of voice (prosody) conveys crucial information about the emotional state and intentions of others. In various studies right frontal and right temporal regions have been found to play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. During listening to each sentence a triplet of TMS pulses was applied to one of the regions at one of six time points (400–1900 ms). Results showed a significant main effect of Time for right anterior superior temporal gyrus and right fronto-parietal operculum. The largest interference was observed half-way through the sentence. This effect was stronger for withdrawal emotions than for the approach emotion. A further experiment with the inclusion of an active control condition, TMS over the EEG site POz (midline parietal-occipital junction), revealed stronger effects at the fronto-parietal operculum and anterior superior temporal gyrus relative to the active control condition. No evidence was found for sequential processing of emotional prosodic information from right anterior superior temporal gyrus to the right fronto-parietal operculum, but the results revealed more parallel processing. Our results suggest that both right fronto-parietal operculum and right anterior superior temporal gyrus are critical for emotional prosody perception at a relatively late time period after sentence onset. 
This may reflect the fact that emotional cues can still be ambiguous at the beginning of a sentence but become more apparent half-way through.
Processing of inconsistent emotional information: an fMRI study
Previous studies investigating the anterior cingulate cortex (ACC) have relied on a number of tasks which involved cognitive control and attentional demands. In this fMRI study, we tested the model that the ACC functions as an attentional network in the processing of language. We employed a paradigm that requires the processing of concurrent linguistic information, predicting that the cognitive costs imposed by competing trials would engender activation of the ACC. Subjects were confronted with sentences where the semantic content conflicted with the prosodic intonation (CONF condition), randomly interspersed with sentences which conveyed coherent discourse components (NOCONF condition). We observed activation of the rostral ACC and the middle frontal gyrus when the NOCONF condition was subtracted from the CONF condition. Our findings provide evidence for the involvement of the rostral ACC in the processing of complex competing linguistic stimuli, supporting theories that claim its relevance as part of the cortical attentional circuit. The processing of emotional prosody involved a bilateral network encompassing the superior and medial temporal cortices. This evidence confirms previous research investigating the neuronal network that supports the processing of emotional information.
Authenticity affects the recognition of emotions in speech: behavioral and fMRI evidence
Effects of cue modality and emotional category on recognition of nonverbal emotional signals in schizophrenia
Social cognitive deficits and their neural correlates in progressive supranuclear palsy
Although progressive supranuclear palsy is defined by its akinetic rigidity, vertical supranuclear gaze palsy and falls, cognitive impairments are an important determinant of patients’ and carers’ quality of life. Here, we investigate whether there is a broad deficit of modality-independent social cognition in progressive supranuclear palsy and explore its neural correlates. We recruited 23 patients with progressive supranuclear palsy (using clinical diagnostic criteria, nine with subsequent pathological confirmation) and 22 age- and education-matched controls. Participants performed an auditory (voice) emotion recognition test, and a visual and auditory theory of mind test. Twenty-two patients and 20 controls underwent structural magnetic resonance imaging to analyse neural correlates of social cognition deficits using voxel-based morphometry. Patients were impaired on the voice emotion recognition and theory of mind tests but not on the auditory and visual control conditions. Grey matter atrophy in patients correlated with both voice emotion recognition and theory of mind deficits in the right inferior frontal gyrus, a region associated with prosodic auditory emotion recognition. Theory of mind deficits also correlated with atrophy of the anterior rostral medial frontal cortex, a region associated with theory of mind in health. We conclude that patients with progressive supranuclear palsy have a multimodal deficit in social cognition. This deficit is due, in part, to progressive atrophy in a network of frontal cortical regions linked to the integration of socially relevant stimuli and the interpretation of their social meaning. This impairment of social cognition is important to consider for those managing and caring for patients with progressive supranuclear palsy.
Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia
The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider that (i) emotional speech processing entails multiple processing steps, and the possibility that (ii) the BG may engage in one rather than the other of these processing steps. In the present study we investigate three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in a qualitatively different manner. Specifically, we explore early implicit emotional speech processing (probe verification) in an ERP experiment followed by an explicit behavioral emotional recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or neutral sentences. In line with previous evidence, patients and healthy controls showed differentiation of emotional and neutral sentences in the P200 component (emotional salience detection) and a following negative-going brain wave (meaning-related processing). However, behavioral recognition (identification stage) of emotional sentences was impaired in BG patients, but not in healthy controls. The current data provide further support that the BG are involved in late, explicit rather than early emotional speech processing stages.
