
    Boundary regularity for the Poisson equation in Reifenberg-flat domains

    This paper is devoted to the investigation of the boundary regularity for the Poisson equation
    \[
      \begin{cases}
        -\Delta u = f & \text{in } \Omega,\\
        u = 0 & \text{on } \partial\Omega,
      \end{cases}
    \]
    where $f$ belongs to some $L^p(\Omega)$ and $\Omega$ is a Reifenberg-flat domain of $\mathbb{R}^n$. More precisely, we prove that, given an exponent $\alpha \in (0,1)$, there exists an $\varepsilon > 0$ such that the solution $u$ to the previous system is locally Hölder continuous provided that $\Omega$ is $(\varepsilon, r_0)$-Reifenberg-flat. The proof is based on the Alt-Caffarelli-Friedman monotonicity formula and the Morrey-Campanato theorem.
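
    For reference (a standard definition, spelling out what "locally Hölder continuous of exponent $\alpha$" asserts; the constant $C$ and the sets $K$ are not specified in the abstract), the conclusion amounts to the pointwise estimate
    \[
      |u(x) - u(y)| \le C\,|x - y|^{\alpha} \qquad \text{for all } x, y \in K,
    \]
    on each suitable compact set $K$, with a constant $C = C(K) > 0$.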

    A Generalization of Connes-Kreimer Hopf Algebra

    "Bonsai" Hopf algebras, introduced here, are generalizations of the Connes-Kreimer Hopf algebras, which are motivated by Feynman diagrams and renormalization. We show that an operad structure can be found on the set of bonsais. We introduce a new differential on these bonsai Hopf algebras, inspired by the tree differential. The cohomologies of these differentials are computed here, and the relationship of this differential with the appending operation * of the Connes-Kreimer Hopf algebras is investigated.
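
    For background (the classical construction that the paper generalizes; the notation below is the standard one, not taken from this abstract), the Connes-Kreimer coproduct acts on a rooted tree $t$ by summing over admissible cuts $c$, where $P^c(t)$ denotes the pruned branches and $R^c(t)$ the remaining trunk:
    \[
      \Delta(t) = t \otimes \mathbf{1} + \mathbf{1} \otimes t + \sum_{c \in \mathrm{Adm}(t)} P^{c}(t) \otimes R^{c}(t).
    \]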

    Multimodal Speech Emotion Recognition Using Audio and Text

    Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features to build well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. Since emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features alone. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad, and neutral) when applied to the IEMOCAP dataset, with accuracies ranging from 68.8% to 71.8%.
    Comment: 7 pages, accepted as a conference paper at IEEE SLT 2018
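
    As a minimal sketch of the dual recurrent encoder idea (assuming PyTorch; the feature dimensionality, vocabulary size, GRU cells, layer sizes, and fusion by concatenation below are illustrative assumptions, not the authors' implementation):

    import torch
    import torch.nn as nn

    class DualRecurrentEncoder(nn.Module):
        """Encode audio frames and text tokens with separate RNNs,
        then fuse the final hidden states to classify the emotion."""
        def __init__(self, n_audio_feats=40, vocab_size=10000,
                     embed_dim=128, hidden_dim=128, n_classes=4):
            super().__init__()
            self.audio_rnn = nn.GRU(n_audio_feats, hidden_dim, batch_first=True)
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.text_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(2 * hidden_dim, n_classes)

        def forward(self, audio, tokens):
            # audio: (batch, frames, n_audio_feats); tokens: (batch, seq_len)
            _, h_audio = self.audio_rnn(audio)             # (1, batch, hidden)
            _, h_text = self.text_rnn(self.embed(tokens))  # (1, batch, hidden)
            fused = torch.cat([h_audio[-1], h_text[-1]], dim=-1)
            return self.classifier(fused)                  # logits over 4 classes

    # Example forward pass on random inputs shaped like MFCC frames and token ids
    model = DualRecurrentEncoder()
    logits = model(torch.randn(2, 300, 40), torch.randint(0, 10000, (2, 25)))
    print(logits.shape)  # torch.Size([2, 4])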