
    Selective Encoding for Abstractive Sentence Summarization

    We propose a selective encoding model that extends the sequence-to-sequence framework for abstractive sentence summarization. It consists of a sentence encoder, a selective gate network, and an attention-equipped decoder. The sentence encoder and decoder are built with recurrent neural networks. The selective gate network constructs a second-level sentence representation by controlling the information flow from encoder to decoder. This second-level representation is tailored to the sentence summarization task, which leads to better performance. We evaluate our model on the English Gigaword, DUC 2004, and MSR abstractive sentence summarization datasets. The experimental results show that the proposed selective encoding model outperforms state-of-the-art baseline models.
    Comment: 10 pages; to appear in ACL 201
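    The gating step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight names (W_s, U_s, b), the use of NumPy instead of a trained RNN, and the random inputs are all assumptions; only the idea of element-wise gating the encoder states with a sigmoid conditioned on the sentence representation comes from the abstract.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def selective_gate(h, s, W_s, U_s, b):
        """Build a second-level representation by gating encoder states.

        h : (T, d) first-level encoder hidden states, one per token
        s : (d,)   sentence representation (e.g. the final encoder state)
        Returns h' = h * gate, where each gate value lies in (0, 1) and
        controls how much of that state flows on to the decoder.
        """
        gate = sigmoid(h @ W_s.T + s @ U_s.T + b)  # (T, d)
        return h * gate                            # element-wise gating

    # Hypothetical shapes and random values, for illustration only.
    rng = np.random.default_rng(0)
    T, d = 5, 8
    h = rng.normal(size=(T, d))
    s = rng.normal(size=(d,))
    W_s = rng.normal(size=(d, d))
    U_s = rng.normal(size=(d, d))
    b = np.zeros(d)

    h2 = selective_gate(h, s, W_s, U_s, b)
    ```

    Because the gate is a sigmoid, the gated representation can only attenuate each coordinate of the original hidden states, never amplify it; that is what "controlling the information flow" amounts to in this sketch.
    
    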

    Tunable Intrinsic Plasmons due to Band Inversion in Topological Materials

    Band inversion has led to rich physical effects in both topological insulators and topological semimetals. It has been found that an inverted band structure with Mexican-hat dispersion can enhance the interband correlation, leading to a strong intrinsic plasmon excitation. Its frequency ranges from several meV to tens of meV and can be effectively tuned by external fields. The electron-hole asymmetric term splits the peak of the plasmon excitation into double peaks. The fate and properties of this plasmon excitation can also act as a probe to characterize topological phases, even in lightly doped systems. We numerically demonstrate the impact of band inversion on plasmon excitations in magnetically doped thin films of three-dimensional strong topological insulators, V- or Cr-doped (Bi,Sb)2Te3, which support quantum anomalous Hall states. Our work thus sheds new light on the potential applications of topological materials in plasmonics.
    Comment: 6 pages, 5 figures; accepted in PR

    Cold-water corals and hydrochemistry - is there a unifying link?

    Physical and chemical parameters were measured in five different regions of the Northeast Atlantic with known occurrences of cold-water coral reefs and mounds, and in the Mediterranean, where these corals form living carpets over existing morphologies. In this study we analyzed 282 bottom-water samples for delta13C-DIC, delta18O, and DIC. The hydrochemical data reveal characteristic patterns and differences between cold-water coral sites: the Irish and Norwegian sites host living coral communities with ongoing reef and mound growth, while the localities in the Mediterranean, in the Gulf of Cadiz, and off Mauritania show only patchy coral growth on mound-like reliefs and various substrates. The analysis of delta13C/delta18O reveals distinct clusters for the different regions and the respective bottom water masses bathing the sites. The relation between delta18O and DIC, and especially between delta13C-DIC and DIC, shows that DIC is a parameter with high sensitivity to the mixing of bottom water masses: it varies distinctly between sites with living reefs/mounds and sites with restricted patchy growth or dead corals. The results suggest that DIC and delta13C-DIC can provide additional insights into the mixing of bottom water masses. Prolific cold-water coral growth forming giant biogenic structures plots into a narrow geochemical window characterized by delta13C-DIC between 0.45 and 0.79 per mille, associated with a water mass of density sigma-theta = 27.5 +/- 0.15 kg m^-3.
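    The "narrow geochemical window" at the end of the abstract is a simple pair of interval conditions, which can be written out directly. The two thresholds (delta13C-DIC between 0.45 and 0.79 per mille, sigma-theta of 27.5 +/- 0.15 kg m^-3) come from the abstract; the function name and the sample values are hypothetical.

    ```python
    def in_coral_growth_window(delta13C_DIC, sigma_theta):
        """Check whether a bottom-water sample falls in the narrow
        geochemical window reported for prolific cold-water coral growth:
        delta13C-DIC in [0.45, 0.79] per mille, and a density anomaly
        sigma-theta within 27.5 +/- 0.15 kg m^-3."""
        return (0.45 <= delta13C_DIC <= 0.79) and (27.35 <= sigma_theta <= 27.65)

    # Hypothetical sample values, for illustration only:
    ok = in_coral_growth_window(0.60, 27.55)       # inside both intervals
    too_light = in_coral_growth_window(0.20, 27.50)  # delta13C-DIC too low
    ```

    A check like this is how one would screen the 282 bottom-water samples against the window; it is not part of the study's own methodology.
    
    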

    Faithful to the Original: Fact Aware Neural Abstractive Summarization

    Unlike extractive summarization, abstractive summarization has to fuse different parts of the source text, which makes it prone to generating fake facts. Our preliminary study reveals that nearly 30% of the outputs from a state-of-the-art neural summarization system suffer from this problem. While previous abstractive summarization approaches usually focus on improving informativeness, we argue that faithfulness is also a vital prerequisite for a practical abstractive summarization system. To avoid generating fake facts in a summary, we leverage open information extraction and dependency parsing technologies to extract actual fact descriptions from the source text. We then propose a dual-attention sequence-to-sequence framework to force the generation to be conditioned on both the source text and the extracted fact descriptions. Experiments on the Gigaword benchmark dataset demonstrate that our model can reduce fake summaries by 80%. Notably, the fact descriptions also bring a significant improvement in informativeness, since they often condense the meaning of the source text.
    Comment: 8 pages, 3 figures, AAAI 201
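    The dual-attention idea, conditioning generation on the source text and on the extracted fact descriptions at once, can be sketched as two attention passes whose context vectors are blended. This is a simplified illustration, not the paper's model: dot-product attention, a scalar blending weight gamma, and the function names are all assumptions made here.

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def dual_attention_context(dec_state, src_states, fact_states, gamma=0.5):
        """Blend attention over the source text with attention over the
        extracted fact descriptions.

        dec_state   : (d,)     current decoder hidden state
        src_states  : (Ts, d)  encoder states of the source sentence
        fact_states : (Tf, d)  encoder states of the fact descriptions
        gamma       : scalar weight mixing the two contexts (assumed here)
        """
        a_src = softmax(src_states @ dec_state)    # (Ts,) attention weights
        a_fact = softmax(fact_states @ dec_state)  # (Tf,)
        c_src = a_src @ src_states                 # (d,) source context
        c_fact = a_fact @ fact_states              # (d,) fact context
        return gamma * c_src + (1.0 - gamma) * c_fact

    # Hypothetical shapes and random values, for illustration only.
    rng = np.random.default_rng(1)
    dec = rng.normal(size=(4,))
    src = rng.normal(size=(6, 4))
    facts = rng.normal(size=(3, 4))
    ctx = dual_attention_context(dec, src, facts)
    ```

    With gamma = 1 the decoder attends only to the source; with gamma = 0, only to the fact descriptions. Forcing every decoding step through both attention channels is what constrains the output toward the extracted facts.
    
    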