
    Identifying Sneutrino Dark Matter: Interplay between the LHC and Direct Search

    Under R-parity, the lightest supersymmetric particle (LSP) is stable and may serve as a good dark matter candidate. R-parity can arise naturally with a gauge origin at the TeV scale. We review why a TeV-scale B-L gauge extension of the minimal supersymmetric standard model (MSSM) is one of the most natural, if not required, low-energy supersymmetric models. In the presence of a TeV-scale Abelian gauge symmetry, the (predominantly) right-handed sneutrino LSP can be a good dark matter candidate. Identifying it at the LHC is challenging because it carries no standard model charge. We show how the correlation between LHC experiments (dilepton resonance signals) and direct dark matter search experiments (such as CDMS and XENON) can be used to identify the right-handed sneutrino LSP dark matter in the B-L extended MSSM. Comment: 5 pages, 3 figures

    Efficient fetal-maternal ECG signal separation from two channel maternal abdominal ECG via diffusion-based channel selection

    There is a need for affordable, widely deployable maternal-fetal ECG monitors to improve maternal and fetal health during pregnancy and delivery. Building on diffusion-based channel selection, we present the mathematical formalism and clinical validation of an algorithm that accurately separates the maternal and fetal ECG from a two-channel signal acquired over the maternal abdomen.
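The abstract above names diffusion-based channel selection as the core of the algorithm. As a point of reference, the diffusion-map embedding that such channel selection is typically built on can be sketched as below; this is a generic, illustrative sketch (kernel bandwidth, component count, and the `diffusion_map` name are placeholders), not the paper's actual selection criterion.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_components=2):
    """Embed samples X (rows) with a basic diffusion map.

    Illustrative sketch only: Gaussian kernel, Markov normalization,
    spectral embedding. The channel-selection rule built on top of
    such an embedding in the paper is not reproduced here.
    """
    # Pairwise squared distances and Gaussian affinity kernel
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / epsilon)
    # Row-normalize to a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecompose; the leading (constant) eigenvector is trivial
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates: eigenvectors scaled by eigenvalues
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]
```

Channels (or short signal windows) embedded this way can then be compared in the low-dimensional diffusion space, which is the general idea behind diffusion-based channel selection.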

    Multiconfiguration time-dependent Hartree impurity solver for nonequilibrium dynamical mean-field theory

    Nonequilibrium dynamical mean-field theory (DMFT) solves correlated lattice models by obtaining their local correlation functions from an effective model consisting of a single impurity in a self-consistently determined bath. The recently developed mapping of this impurity problem from the Keldysh time contour onto a time-dependent single-impurity Anderson model (SIAM) [C. Gramsch et al., Phys. Rev. B 88, 235106 (2013)] allows one to use wave-function-based methods in the context of nonequilibrium DMFT. Within this mapping, long times in the DMFT simulation become accessible through an increasing number of bath orbitals, which requires efficient representations of the time-dependent SIAM wave function. These can be achieved by the multiconfiguration time-dependent Hartree (MCTDH) method and its multilayer extensions. We find that MCTDH outperforms exact diagonalization for large baths that are still within reach of the latter approach, and that it allows the calculation of SIAMs beyond the system sizes accessible to exact diagonalization. Moreover, we illustrate the computation of the self-consistent two-time impurity Green's function within the MCTDH second-quantization representation. Comment: 12 pages, 8 figures
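The exact-diagonalization baseline that the abstract compares MCTDH against can be illustrated on a toy SIAM with a single bath orbital, built in the full Fock space via a Jordan-Wigner construction. This is a minimal sketch under assumed toy parameters (one bath orbital, illustrative couplings), not the paper's MCTDH solver or its bath discretization.

```python
import numpy as np
from functools import reduce

# Fock-space operators via Jordan-Wigner on 4 fermionic modes:
# (impurity up, impurity down, bath up, bath down)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CR = np.array([[0.0, 0.0], [1.0, 0.0]])  # creation on a single mode

def cdag(i, n=4):
    """Creation operator for mode i, with Jordan-Wigner sign string."""
    ops = [Z] * i + [CR] + [I2] * (n - i - 1)
    return reduce(np.kron, ops)

def siam_hamiltonian(eps_d=0.0, eps_b=0.0, V=0.5, U=0.0):
    """Toy single-impurity Anderson model: impurity + one bath orbital."""
    cd = [cdag(i) for i in range(4)]
    num = [c @ c.T for c in cd]  # number operators n_i = c_i^dag c_i
    H = eps_d * (num[0] + num[1]) + eps_b * (num[2] + num[3])
    for imp, bath in [(0, 2), (1, 3)]:   # spin-conserving hybridization
        hop = V * cd[imp] @ cd[bath].T
        H += hop + hop.T
    H += U * num[0] @ num[1]             # on-site Coulomb repulsion
    return H

# Ground-state energy by exact diagonalization of the 16-dim Fock space
E0 = np.linalg.eigvalsh(siam_hamiltonian()).min()
```

For the noninteracting symmetric case used here (eps_d = eps_b = 0, U = 0), the single-particle levels are ±V per spin, so the ground-state energy is -2V; the Fock-space grows exponentially with the number of bath orbitals, which is exactly the cost that motivates MCTDH-style compressed wave-function representations.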

    A Context-aware Attention Network for Interactive Question Answering

    Neural network based sequence-to-sequence models in an encoder-decoder framework have been successfully applied to solve Question Answering (QA) problems, predicting answers from statements and questions. However, almost all previous models fail to consider detailed context information and unknown states under which the system does not have enough information to answer a given question. These scenarios with incomplete or ambiguous information are very common in the setting of Interactive Question Answering (IQA). To address this challenge, we develop a novel model that employs context-dependent word-level attention for more accurate statement representations and question-guided sentence-level attention for better context modeling. We also generate unique IQA datasets to test our model, which will be made publicly available. Employing these attention mechanisms, our model accurately decides when it can output an answer and when it must generate a supplementary question for additional input, depending on the context. When available, the user's feedback is encoded and applied directly to update the sentence-level attention and infer an answer. Extensive experiments on QA and IQA datasets quantitatively demonstrate the effectiveness of our model, with significant improvements over state-of-the-art conventional QA models. Comment: 9 pages
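The question-guided attention pooling that abstracts like this one rely on can be sketched generically as additive (Bahdanau-style) attention: a query vector scores each word state, and the softmax-normalized scores weight the states into a single representation. The function name, shapes, and parameters below are illustrative placeholders, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, q, W, v):
    """Additive attention over word hidden states.

    H : (T, d) word hidden states for one sentence.
    q : (d,)  query vector (e.g. a question encoding).
    W : (d, k), v : (k,)  learned parameters (random placeholders here).
    Returns the attention-weighted sentence vector and the weights.
    """
    scores = np.tanh(H @ W + q @ W) @ v   # (T,) alignment scores
    alpha = softmax(scores)               # attention weights, sum to 1
    return alpha @ H, alpha               # (d,) pooled vector, (T,) weights
```

The same pattern applied one level up, with sentence vectors in place of word states, gives the sentence-level attention the abstract describes.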