
    Decaying Dark Matter in Supersymmetric Model and Cosmic-Ray Observations

    We study cosmic rays in a decaying dark matter scenario, assuming that the dark matter is the lightest superparticle and that it decays through an R-parity-violating operator. We calculate the fluxes of cosmic rays from the decay of the dark matter, and those from standard astrophysical phenomena, within the same propagation model using the GALPROP package. We reevaluate the preferred parameters characterizing standard astrophysical cosmic-ray sources, taking into account the effects of dark matter decay. We show that, if energetic leptons are produced by the decay of the dark matter, the fluxes of cosmic-ray positrons and electrons can be in good agreement with both the PAMELA and Fermi-LAT data over a wide parameter region. We also discuss that, when a sizable number of hadrons is produced by the decay as well, the mass of the dark matter is constrained to be less than 200-300 GeV in order to avoid the overproduction of antiprotons. We also show that the cosmic gamma-ray flux can be consistent with the results of the Fermi-LAT observations if the mass of the dark matter is smaller than roughly 4 TeV. Comment: 24 pages, 5 figures

    A formally verified compiler back-end

    This article describes the development and formal verification (proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.
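    The correctness guarantee described above is conventionally phrased as a semantic-preservation theorem. Schematically (the notation below is a generic rendering of this style of theorem, not a formula quoted from the article), it reads:

```latex
% Semantic preservation, schematically: if compilation of source program S
% succeeds and yields target code C, and S is safe (has no undefined
% behaviour), then every observable behaviour B of the compiled code C is a
% behaviour permitted to the source program S.
\mathit{Comp}(S) = \mathsf{OK}(C)
  \;\wedge\; S \text{ safe}
  \;\Longrightarrow\;
  \forall B,\; C \Downarrow B \implies S \Downarrow B
```

Under this statement, any safety property established for all behaviours of S automatically carries over to C, which is exactly the certification argument the abstract invokes.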

    Cosmic Ray Anomalies from the MSSM?

    The recent positron excess in cosmic rays (CR) observed by the PAMELA satellite may be a signal of dark matter (DM) annihilation. When these measurements are combined with those from Fermi on the total (e^+ + e^-) flux and from PAMELA itself on the p̄/p ratio, these and other results are difficult to reconcile with traditional models of DM, including the conventional mSUGRA version of Supersymmetry, even if boosts as large as 10^{3-4} are allowed. In this paper, we combine the results of a previously obtained scan over a more general 19-parameter subspace of the MSSM with a corresponding scan over the astrophysical parameters that describe the propagation of CR. We then ascertain whether or not a good fit to this CR data can be obtained with relatively small boost factors while simultaneously satisfying the additional constraints arising from gamma-ray data. We find that a specific subclass of MSSM models, where the LSP is mostly pure bino and annihilates almost exclusively into τ pairs, comes very close to satisfying these requirements. The lightest stau in this set of models is found to be relatively close in mass to the LSP and is in some cases the NLSP. These models lead to a significant improvement in the overall fit to the data, by an amount Δχ² ~ 1 per degree of freedom, in comparison to the best fit without Supersymmetry, while employing boosts of ~100. The implications of these models for future experiments are discussed. Comment: 57 pages, 31 figures, references added

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions. Comment: 25 pages, 12 figures, 6 tables
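    The precision complaint above is easy to make concrete. The sketch below computes the standard two-year impact factor and attaches a crude uncertainty estimate; the journal numbers are hypothetical illustrations, and the Poisson interval is a heuristic of ours, not the article's method.

```python
import math

def impact_factor(citations, citable_items):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the citable items from those years."""
    return citations / citable_items

def poisson_ci(citations, citable_items, z=1.96):
    """Rough 95% interval treating the citation count as Poisson-distributed
    (standard error sqrt(citations) / items). A back-of-envelope heuristic."""
    jif = citations / citable_items
    se = math.sqrt(citations) / citable_items
    return (jif - z * se, jif + z * se)

# Hypothetical journal: 1200 citations to 400 citable items.
jif = impact_factor(1200, 400)
lo, hi = poisson_ci(1200, 400)
print(f"JIF = {jif:.3f} (95% CI roughly {lo:.2f} to {hi:.2f})")
```

For these numbers the displayed value is 3.000, while even this crude interval spans roughly 2.83 to 3.17: the third decimal carries no information, which is the article's point about false precision and missing confidence intervals.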

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta. Comment: 10 pages plus author list (21 pages total), 8 figures, 1 table, final version published in European Physical Journal

    Algorithmic iteration for computational intelligence

    Machine awareness is a disputed research topic, in some circles considered a crucial step in realising Artificial General Intelligence. Understanding what it is, under which conditions such a feature could arise, and how it can be controlled is still a matter of speculation. A more concrete object of theoretical analysis is algorithmic iteration for computational intelligence, understood as the theoretical and practical ability of algorithms to design other algorithms for actions aimed at solving well-specified tasks. This ability is already shown by current AIs, and understanding its limits is an essential step in qualifying claims about machine awareness and Super-AI. We propose a formal translation of algorithmic iteration into a fragment of modal logic, formulate principles of transparency and faithfulness across human and machine intelligence, and consider the relevance to theoretical research on (Super-)AI as well as the practical import of our results.

    An empirical investigation of the influence of collaboration in Finance on article impact

    We investigate the impact of collaborative research in the academic Finance literature to find out whether, and to what extent, collaboration leads to higher-impact articles (6,667 articles across 2001-2007 extracted from the Web of Science). Using the top 5% as ranked by the 4-year citation counts following publication, we also pursue related secondary research questions: the relationship between article impact and author impact; between collaboration and the average author impact of an article; and the nature of geographic collaboration. Key findings indicate: collaboration does lead to articles of higher impact, but there is no significant marginal value for collaboration beyond three authors; high-impact articles are not monopolized by high-impact authors; collaboration and the average author impact of high-impact articles are positively associated, with collaborative articles having a higher mean author impact than single-author articles; and collaboration among the authors of high-impact articles is mostly cross-institutional.

    Tracking of dietary intakes in early childhood: the Melbourne InFANT program

    Background/Objectives: The objectives of the present study were to describe food and nutrient intakes in children aged 9 and 18 months, and to assess tracking of intakes between these two ages. Subjects/Methods: Participants were 177 children of first-time mothers from the control arm of the Melbourne Infant Feeding Activity and Nutrition Trial (InFANT) Program. Dietary intake was collected at 9 and 18 months using three 24-h diet recalls. Tracking was assessed for food and nutrient intakes using logistic regression analysis and estimating partial correlation coefficients, respectively. Results: Although overall nutrient intakes estimated in this study did not indicate a particular risk of nutrient deficiency, our findings suggest that consumption of energy-dense, nutrient-poor foods occurred as early as 9 months of age, with some of these foods tracking highly over the weaning period. Intakes of healthier foods such as fruits, vegetables, dairy products, eggs, fish and water were also relatively stable over this transition from infancy to toddlerhood, along with moderate tracking for riboflavin, iodine, fibre, calcium and iron. Tracking was low but close to ρ = 0.3 for zinc, magnesium and potassium intakes. Conclusions: The tracking of energy-dense, nutrient-poor foods has important implications for public health, given the development of early eating behaviours is likely to be modifiable. At this stage of life, dietary intakes are largely influenced by the foods parents provide, parental feeding practices and modelling. This study supports the importance of promoting healthy dietary trajectories from infancy.

    Effects of interspecific gene flow on the phenotypic variance–covariance matrix in Lake Victoria Cichlids

    Quantitative genetics theory predicts adaptive evolution to be constrained along evolutionary lines of least resistance. In theory, hybridization and subsequent interspecific gene flow may, however, rapidly change the evolutionary constraints of a population and eventually change its evolutionary potential, but empirical evidence is still scarce. Using closely related species pairs of Lake Victoria cichlids sampled from four different islands with different levels of interspecific gene flow, we tested for potential effects of introgressive hybridization on phenotypic evolution in wild populations. We found that these effects differed among our study species. Constraints, measured as the eccentricity of phenotypic variance–covariance matrices, declined significantly with increasing gene flow in the less abundant species for matrices that have a diverged line of least resistance. In contrast, we found no such decline for the more abundant species. Overall, our results suggest that hybridization can change the underlying phenotypic variance–covariance matrix, potentially increasing the adaptive potential of such populations.

    Measurement of the top quark mass using the matrix element technique in dilepton final states

    We present a measurement of the top quark mass in pp¯ collisions at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron collider. The data were collected by the D0 experiment and correspond to an integrated luminosity of 9.7 fb^-1. The matrix element technique is applied to tt¯ events in the final state containing leptons (electrons or muons) with high transverse momenta and at least two jets. The calibration of the jet energy scale determined in the lepton+jets final state of tt¯ decays is applied to the jet energies. This correction provides a substantial reduction in systematic uncertainties. We obtain a top quark mass of m_t = 173.93 ± 1.84 GeV.