
    On the mass distribution of neutron stars

    The distribution of masses for neutron stars is analyzed using Bayesian statistical inference, evaluating the likelihood of proposed Gaussian peaks by using fifty-four measured points obtained in a variety of systems. The results strongly suggest the existence of a bimodal distribution of the masses, with the first peak around $1.37\,M_{\odot}$ and a much wider second peak at $1.73\,M_{\odot}$. The results support earlier views related to the different evolutionary histories of the members of the first two peaks, which produces a natural separation (even though no attempt to "label" the systems has been made here), and argue against the single-mass-scale viewpoint. The bimodal distribution can also accommodate the recent findings of $\sim M_{\odot}$ masses quite naturally. Finally, we explore the existence of a subgroup around $1.25\,M_{\odot}$, finding weak, if any, evidence for it. This recently claimed low-mass subgroup, possibly related to O-Mg-Ne core-collapse events, has a monotonically decreasing likelihood and does not stand out clearly from the rest of the sample. Comment: 11 pp., 3 figures, submitted to MNRAS Letters
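
    A minimal sketch of the kind of likelihood evaluation described above, assuming a two-component Gaussian mixture and a hypothetical array of measured masses (the actual fifty-four measurements, the priors, and the sampler used by the authors are not reproduced here):

        import numpy as np
        from scipy.stats import norm

        # Hypothetical stand-in for the measured neutron-star masses (in M_sun).
        masses = np.array([1.25, 1.34, 1.38, 1.41, 1.44, 1.57, 1.67, 1.74, 1.90, 1.97])

        def log_likelihood(theta, data):
            """Log-likelihood of a two-Gaussian (bimodal) mass distribution.

            theta = (w, mu1, sigma1, mu2, sigma2): weight of the first peak,
            and the centre/width of each Gaussian component.
            """
            w, mu1, s1, mu2, s2 = theta
            if not (0.0 < w < 1.0 and s1 > 0.0 and s2 > 0.0):
                return -np.inf
            pdf = w * norm.pdf(data, mu1, s1) + (1.0 - w) * norm.pdf(data, mu2, s2)
            return np.sum(np.log(pdf))

        # Example evaluation near the peaks quoted in the abstract.
        print(log_likelihood((0.6, 1.37, 0.05, 1.73, 0.2), masses))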

    Efficient simulated tempering with approximated weights: Applications to first-order phase transitions

    Simulated tempering (ST) has attracted a great deal of attention in recent years, due to its capability to allow systems with complex dynamics to escape from regions separated by large entropic barriers. However, its performance depends strongly on basic ingredients, such as the choice of the set of temperatures and their associated weights. Since the weight evaluations are not trivial tasks, an alternative, approximate approach was proposed by Park and Pande (Phys. Rev. E {\bf 76}, 016703 (2007)) to circumvent this difficulty. Here we present a detailed study of this procedure, comparing its performance with exact (free-energy) weights and other methods, and examining its dependence on the total number of replicas $R$ and on the temperature set. The ideas above are analyzed in four distinct lattice models presenting strong first-order phase transitions, hence constituting ideal examples in which the performance of the algorithm is fundamental. In all cases, our results reveal that approximated weights work properly in the regime of larger $R$. On the other hand, for sufficiently small $R$ the performance is reduced and the systems do not cross the free-energy barriers properly. Finally, for estimating reliable temperature sets, we consider a simple protocol proposed in Comp. Phys. Comm. {\bf 128}, 2046 (2014). Comment: Published online in Comp. Phys. Comm. (2015)
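
    A minimal, self-contained sketch of the approximated-weight recipe described above, here applied to a small 2D q = 10 Potts lattice as an illustrative stand-in (the lattice models, sizes, and temperature sets studied in the paper are not reproduced): short trial canonical runs estimate the mean energies, the weights follow from a trapezoidal free-energy estimate, and temperature moves use the standard ST acceptance rule.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy model: 2D q-state Potts model (q = 10 shows a strong first-order transition).
        L, q = 16, 10
        spins = rng.integers(q, size=(L, L))

        def energy(s):
            """Potts energy E = -sum over nearest-neighbour bonds of delta(s_i, s_j)."""
            return -(np.sum(s == np.roll(s, 1, axis=0)) + np.sum(s == np.roll(s, 1, axis=1)))

        def metropolis_sweep(s, beta):
            """One lattice sweep of single-spin Metropolis updates at inverse temperature beta."""
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                new = rng.integers(q)
                nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
                dE = sum(n == s[i, j] for n in nbrs) - sum(n == new for n in nbrs)
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] = new
            return s

        # Temperature set bracketing the transition region (illustrative choice).
        betas = np.linspace(1.35, 1.45, 8)

        # Short canonical runs give rough mean energies <E>_beta ...
        mean_E = []
        for b in betas:
            s = spins.copy()
            Es = [energy(metropolis_sweep(s, b)) for _ in range(50)]
            mean_E.append(np.mean(Es[25:]))          # discard half as burn-in

        # ... and the approximated weights follow from a trapezoidal estimate:
        # g_{n+1} - g_n ~ (beta_{n+1} - beta_n) * (<E>_n + <E>_{n+1}) / 2
        g = np.zeros_like(betas)
        for n in range(len(betas) - 1):
            g[n + 1] = g[n] + (betas[n + 1] - betas[n]) * (mean_E[n] + mean_E[n + 1]) / 2.0

        # Simulated tempering proper: alternate spin sweeps with temperature moves.
        k = 0                                        # index of the current temperature
        for step in range(200):
            spins = metropolis_sweep(spins, betas[k])
            new_k = k + rng.choice([-1, 1])
            if 0 <= new_k < len(betas):
                E = energy(spins)
                # Accept the temperature change with the usual ST probability.
                log_acc = -(betas[new_k] - betas[k]) * E + (g[new_k] - g[k])
                if np.log(rng.random()) < log_acc:
                    k = new_k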

    The masses of neutron stars

    We present in this article an overview of the problem of neutron star masses. After a brief appraisal of the methods employed to determine the masses of neutron stars in binary systems, the existing sample of measured masses is presented, with a highlight on some very well-determined cases. We discuss the analyses made to uncover the underlying distribution and a few robust results that stand out from them. The issues related to some particular groups of neutron stars originating from different channels of stellar evolution are shown. Our conclusion is that last century's paradigm of a single $1.4\,M_{\odot}$ mass scale is too simple: a bimodal or even more complex distribution is actually present. It is confirmed that some neutron stars have masses of $\sim 2\,M_{\odot}$, and, while there is still no firm conclusion on the maximum and minimum values produced in nature, the field has entered a mature stage in which all these and related questions can soon be given an answer. Comment: 12 pp., 3 figures, Chapter of the forthcoming Handbook of Supernovae, edited by Athem W. Alsabti and Paul Murdin

    Bayesian analysis of CCDM Models

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein field equations, leads to a negative creation pressure, which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models of matter creation using statistical tools, in light of SN Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These approaches allow one to compare models considering both goodness of fit and the number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/$\Lambda$CDM model; however, neither of these, nor the $\Gamma = 3\alpha H_0$ model, can be discarded by the current analysis. The three other scenarios are discarded either for poor fitting or for an excess of free parameters. Comment: 16 pages, 6 figures, 6 tables. Corrected some text and language in new version
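
    A minimal sketch of the information-criterion part of such a comparison, with hypothetical best-fit values and parameter counts standing in for the actual fits to the SN Ia data (the Bayesian Evidence step, which requires integrating the likelihood over the priors, is omitted):

        import numpy as np

        def aic(chi2_min, k):
            """Akaike Information Criterion for a best-fit chi^2 with k free parameters."""
            return chi2_min + 2 * k

        def bic(chi2_min, k, n):
            """Bayesian Information Criterion; n is the number of data points."""
            return chi2_min + k * np.log(n)

        n_sn = 580  # hypothetical SN Ia sample size
        # (model name, hypothetical best-fit chi^2, number of free parameters)
        models = [("LCDM", 562.2, 2), ("JO", 561.8, 2), ("Gamma=3*alpha*H0", 563.5, 2)]

        base = min(aic(c, k) for _, c, k in models)
        for name, chi2_min, k in models:
            print(f"{name:>20s}  dAIC = {aic(chi2_min, k) - base:5.2f}  "
                  f"BIC = {bic(chi2_min, k, n_sn):7.2f}")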

    The role of macroeconomic variables in sovereign risk

    We use a dynamic term structure model with default and observable factors to study the interaction between macro variables and the Brazilian sovereign yield curve. We also calculate the default probabilities implied from the estimated model and the impact of macro shocks on those probabilities. Our results indicate that the VIX is the most important macro factor affecting short-term bonds and default probabilities, while the American short-term rate is the most important factor affecting the long-term default probabilities. Regarding the domestic variables, only the slope of the local yield curve presents significant explanatory power for the sovereign rates and default probabilities.
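
    For intuition on the default-probability step, a simplified reduced-form illustration (not the authors' estimated term structure model) of how an intensity implied by a sovereign spread translates into default probabilities; the spread, recovery rate, and horizons below are hypothetical:

        import numpy as np

        # Hypothetical inputs: a flat sovereign spread (decimal) and an assumed recovery rate.
        spread = 0.025      # 250 basis points over the risk-free curve
        recovery = 0.40

        # Under a constant-intensity approximation, the risk-neutral default
        # intensity is roughly spread / (1 - recovery).
        lam = spread / (1.0 - recovery)

        for t in (1, 3, 5, 10):  # horizons in years
            p_default = 1.0 - np.exp(-lam * t)
            print(f"P(default within {t:2d}y) ~ {p_default:.1%}")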

    Thermodynamic constraints on matter creation models

    Entropy is a fundamental concept in thermodynamics, and it can be used to study models in the context of Creation of Cold Dark Matter (CCDM). From conditions on the first-order ($\dot{S}\geq 0$) and second-order ($\ddot{S}<0$) time derivatives of the total entropy (throughout the present work, dots indicate time derivatives and dashes indicate derivatives with respect to the scale factor), from the initial de Sitter expansion through the radiation and matter eras until the final de Sitter expansion, it is possible to estimate the allowed intervals of the model parameters. The total entropy ($S_t$) is calculated as the sum of the entropy of each era ($S_{\gamma}$ and $S_m$) plus the entropy of the event horizon ($S_h$). This term derives from the Holographic Principle, which suggests that all information is contained on the observable horizon. The main feature of this method for these models is that thermodynamic equilibrium is reached in a final de Sitter era. The total entropy of the universe is thus calculated with three terms: the apparent-horizon entropy ($S_h$), the entropy of matter ($S_m$) and the entropy of radiation ($S_{\gamma}$). This analysis allows one to estimate the parameter intervals of CCDM models. Comment: 16 pages, 11 figures. Replaced in order to match accepted version
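
    As an illustration of the quantities involved, here is a sketch of the standard horizon-entropy term and the two conditions quoted above, written for a spatially flat FRW background (the prefactor follows the usual Bekenstein-Hawking convention; the specific CCDM expressions for $S_m$ and $S_{\gamma}$ used in the paper are not reproduced):

        \begin{aligned}
          S_h &= \frac{k_B c^3}{4\,\hbar G}\,A,
          \qquad A = 4\pi \tilde{r}_h^{\,2},
          \qquad \tilde{r}_h = \frac{c}{H}
          \quad\Longrightarrow\quad S_h \propto H^{-2},\\
          S_t &= S_h + S_m + S_{\gamma},
          \qquad \dot{S}_t \geq 0 \ \text{(generalized second law)},
          \qquad \ddot{S}_t < 0 \ \text{(approach to equilibrium in the final de Sitter era)}.
        \end{aligned}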

    Narrative thinking in the construction of signs in the learning of algebraic concepts by 8th-grade students of elementary school (ensino fundamental)

    Language, whether oral or written, is one of the ways we express our thoughts, but how can we do so when the thought is mathematical? Much research involving language is being carried out, which demonstrates the current concern with this theme. With the theoretical support of Bakhtin, Vygotsky, Bruner, Vergnaud, among others, I will also venture into this universe and examine how language can influence the construction of mathematical signs among 8th-grade students at a municipal school in Juiz de Fora (MG). The research will be carried out with the students in three stages: in assembly, in teams, and individually.