
    X-ray outbursts of low-mass X-ray binary transients observed in the RXTE era

    We have performed a statistical study of the properties of 110 bright X-ray outbursts in 36 low-mass X-ray binary transients (LMXBTs) seen with the All-Sky Monitor (2--12 keV) on board the Rossi X-ray Timing Explorer (RXTE) in 1996--2011. We measured a number of outburst properties, including peak X-ray luminosity, rate of change of luminosity on a daily timescale, e-folding rise and decay timescales, outburst duration, and total radiated energy. We found that the average properties of black hole LMXBTs, such as peak X-ray luminosity, rise and decay timescales, outburst duration, and total radiated energy, are at least two times larger than those of neutron star LMXBTs, implying that measurements of these properties may provide preliminary clues to the nature of the compact object in a newly discovered LMXBT. We also found that the outburst peak X-ray luminosity is correlated with the rate of change of X-ray luminosity in both the rise and the decay phases, consistent with our previous studies. Positive correlations between total radiated energy and peak X-ray luminosity, and between total radiated energy and the e-folding rise or decay timescale, are also found in the outbursts. These correlations suggest that the mass stored in the disk before an outburst is the primary initial condition that sets the outburst properties seen later. We also found that the outbursts of two transient stellar-mass ULXs in M31 roughly follow the correlations, which indicates that the same outburst mechanism operates in the brighter outbursts of these two sources in M31 that reached the Eddington luminosity.
    Comment: Accepted to Ap
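
    The e-folding rise and decay timescales mentioned above are conventionally obtained by fitting an exponential to the corresponding segment of the light curve. As a minimal illustrative sketch (not the authors' pipeline), the code below fits flux(t) = A exp(t/tau) to a rising segment with scipy.optimize.curve_fit; the function name and the synthetic daily-sampled data are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def efold_rise_timescale(t, flux):
        """Fit flux(t) = A * exp(t / tau) to a rising light-curve segment
        and return the e-folding timescale tau (same units as t).
        For a decay segment one would instead fit A * exp(-t / tau)."""
        model = lambda t, A, tau: A * np.exp(t / tau)
        # Initial guess: amplitude of the first point, tau ~ span of the segment
        p0 = (max(flux[0], 1e-3), t[-1] - t[0])
        (A, tau), _ = curve_fit(model, t, flux, p0=p0)
        return tau

    # Illustrative usage with synthetic daily data generated with tau = 10 d
    t = np.linspace(0.0, 30.0, 31)
    flux = 0.5 * np.exp(t / 10.0)
    print(efold_rise_timescale(t, flux))  # ~10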

    Cross-domain Semantic Parsing via Paraphrasing

    Existing studies on semantic parsing mainly focus on the in-domain setting. We formulate cross-domain semantic parsing as a domain adaptation problem: train a semantic parser on some source domains and then adapt it to the target domain. Due to the diversity of logical forms across domains, this problem presents unique and intriguing challenges. By converting logical forms into canonical utterances in natural language, we reduce semantic parsing to paraphrasing, and develop an attentive sequence-to-sequence paraphrase model that is general and flexible enough to adapt to different domains. We identify two problems of pre-trained word embeddings, small micro variance and large macro variance, that hinder their direct use in neural networks, and propose standardization techniques as a remedy. On the popular Overnight dataset, which contains eight domains, we show that both cross-domain training and standardized pre-trained word embeddings bring significant improvements.
    Comment: 12 pages, 2 figures, accepted by EMNLP201
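
    One plausible reading of the standardization remedy is per-dimension z-scoring of the pre-trained embedding matrix over the vocabulary, so that each dimension has zero mean and unit variance before the vectors enter the network. The sketch below illustrates that reading only; the function name, the eps term, and the random stand-in for GloVe/word2vec vectors are assumptions, not the paper's exact procedure.

    import numpy as np

    def standardize_embeddings(E, eps=1e-8):
        """Per-dimension standardization of a pre-trained embedding matrix
        E of shape (vocab_size, dim): subtract the mean and divide by the
        standard deviation computed over the vocabulary."""
        mu = E.mean(axis=0, keepdims=True)
        sigma = E.std(axis=0, keepdims=True)
        return (E - mu) / (sigma + eps)

    # Illustrative usage with a random stand-in embedding matrix
    E = np.random.RandomState(0).normal(loc=0.1, scale=0.01, size=(5000, 300))
    E_std = standardize_embeddings(E)
    print(E_std.mean(), E_std.std())  # ~0, ~1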