
    Constraints on small-scale cosmological perturbations from gamma-ray searches for dark matter

    Events like inflation or phase transitions can produce large density perturbations on very small scales in the early Universe. Probes of small scales are therefore useful for, e.g., discriminating between inflationary models. Until recently, the only such constraint came from the non-observation of primordial black holes (PBHs), associated with the largest perturbations. Moderate-amplitude perturbations can collapse shortly after matter-radiation equality to form ultracompact minihalos (UCMHs) of dark matter, in far greater abundance than PBHs. If dark matter self-annihilates, UCMHs become excellent targets for indirect detection. Here we discuss the gamma-ray fluxes expected from UCMHs, the prospects of observing them with gamma-ray telescopes, and limits upon the primordial power spectrum derived from their non-observation by the Fermi Large Area Telescope.
    Comment: 4 pages, 3 figures. To appear in J Phys Conf Series (Proceedings of TAUP 2011, Munich).
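    For orientation, the kind of expression such limits build on is the standard prompt gamma-ray flux from a self-annihilating (self-conjugate) dark-matter clump; the sketch below uses generic textbook notation rather than anything specific to this proceedings, and the particle mass m_chi, cross section <sigma v>, and photon yield N_gamma are placeholders.

        \[
          \Phi_\gamma \simeq \frac{\langle\sigma v\rangle}{8\pi\, m_\chi^2}\,
          N_\gamma(E > E_{\rm th})
          \int_{\Delta\Omega}\!\int_{\rm l.o.s.} \rho^2(r)\,\mathrm{d}l\,\mathrm{d}\Omega
        \]
        % <sigma v>: annihilation cross section; m_chi: dark-matter mass;
        % N_gamma: photons per annihilation above the instrument threshold.
        % For a point-like UCMH at distance d the line-of-sight integral reduces
        % to d^{-2} \int \rho^2(r)\,\mathrm{d}V, so the flux grows with the square
        % of the minihalo's internal density and falls as distance squared.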

    VAPI: Vectorization of Algorithm for Performance Improvement

    This study presents the vectorization of metaheuristic algorithms as the first stage of a vectorized optimization implementation. Vectorization is a technique for converting an algorithm that operates on a single value at a time into one that operates on a collection of values at a time, so that it executes rapidly. The technique also replaces multiple iterations with a single operation, which improves the algorithm's speed and makes it simpler and easier to implement. Applying vectorization improves a program's performance: it requires less time, runs long-running test functions faster, executes test functions that cannot be handled by non-vectorized algorithms, and reduces iterations and time complexity. Converting an algorithm to operate on several values at once, thereby enhancing its speed and efficiency, is thus a remedy for long running times and complicated algorithms. The objective of this study is to apply the vectorization technique to one of the metaheuristic algorithms and compare the results of the vectorized algorithm with its non-vectorized counterpart.
    Comment: 21 pages
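    As a concrete illustration of the technique described above (not code from the paper), here is a minimal NumPy sketch, assuming the sphere test function f(x) = sum(x_i^2) as the benchmark: the loop version touches one value at a time, while the vectorized version evaluates the whole candidate population in a single array operation.

        import numpy as np
        import time

        def sphere_loop(population):
            # Non-vectorized: one candidate and one dimension at a time.
            fitness = []
            for candidate in population:
                total = 0.0
                for x in candidate:
                    total += x * x
                fitness.append(total)
            return fitness

        def sphere_vectorized(population):
            # Vectorized: the whole population is evaluated in one NumPy call.
            return np.sum(population ** 2, axis=1)

        # Hypothetical population: 10,000 candidates in 30 dimensions.
        population = np.random.uniform(-5.0, 5.0, size=(10_000, 30))

        t0 = time.perf_counter()
        slow = sphere_loop(population)
        t1 = time.perf_counter()
        fast = sphere_vectorized(population)
        t2 = time.perf_counter()

        print(f"loop: {t1 - t0:.4f}s  vectorized: {t2 - t1:.4f}s")
        assert np.allclose(slow, fast)  # same results, different execution model

    On typical hardware the vectorized evaluation is substantially faster while returning identical fitness values, which is the kind of speed comparison the study performs on a full metaheuristic.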

    SUSY Breaking and Moduli Stabilization from Fluxes in Gauged 6D Supergravity

    We construct the 4D N=1 supergravity which describes the low-energy limit of 6D supergravity compactified on a sphere with a monopole background a la Salam and Sezgin. This provides a simple setting sharing the main properties of realistic string compactifications, such as flat 4D spacetime, chiral fermions and N=1 supersymmetry, as well as Fayet-Iliopoulos terms induced by the Green-Schwarz mechanism. The matter content of the resulting theory is a supersymmetric SO(3)xU(1) gauge model with two chiral multiplets, S and T. The expectation value of T is fixed by the classical potential, and S describes a flat direction to all orders in perturbation theory. We consider possible perturbative corrections to the Kahler potential in inverse powers of Re S and Re T, and find that under certain circumstances, and when taken together with low-energy gaugino condensation, these can lift the degeneracy of the flat direction for Re S. The resulting vacuum breaks supersymmetry at moderately low energies in comparison with the compactification scale, with positive cosmological constant. It is argued that the 6D model might itself be obtained from string compactifications, giving rise to realistic string compactifications on non-Ricci-flat manifolds. Possible phenomenological and cosmological applications are briefly discussed.
    Comment: 32 pages, 2 figures. Uses JHEP3.cls. References fixed and updated, some minor typos fixed. Corrected minor error concerning Kaluza-Klein scales. Results remain unchanged.

    The role of context fusion on accuracy, beyond-accuracy, and fairness of point-of-interest recommendation systems

    Point-of-interest (POI) recommendation is an essential service in location-based social networks (LBSNs), benefiting both users, who gain the chance to explore new locations, and businesses, which can discover new potential customers. These systems learn the preferences of users and their mobility patterns to generate relevant POI recommendations. Previous studies have shown that incorporating contextual information such as geographical, temporal, social, and categorical context substantially improves the quality of POI recommendations. However, fewer works have studied in depth the multi-aspect benefits of context fusion for POI recommendation, in particular for beyond-accuracy, fairness, and interpretability of recommendations. In this work, we propose a linear regression-based fusion of POI contexts that effectively finds the best combination of contexts for each (i) user, or (ii) group of users, from their historical interactions. The results of large-scale experiments on two popular datasets, Gowalla and Yelp, reveal several interesting findings. First, the proposed approach does not suffer a significant loss in accuracy, nor in unfairness due to popularity bias, compared with classical collaborative baselines, and yet improves the beyond-accuracy quality of recommendation compared with existing context-aware (CA) approaches that use heuristic context fusion; for instance, the proposed approach improves accuracy and beyond-accuracy over the best baseline model by 25% and 30%, respectively. Second, our proposed approach is interpretable, allowing us to explain to the user why she has been recommended specific POIs, based on the context weights learned from her past check-ins; for example, if you are in Rome and our method recommends a historical place like the 'Colosseum', it can also explain why this item is recommended to you based on your personal preference for context (e.g., you were recommended to visit the 'Colosseum' because in the past you visited historical places). Third, by analyzing the fairness of recommendation with respect to users (based on their activity levels) and items (based on the popularity of items), we found that a model which recommends fairly on one dataset can recommend unfairly on another. Overall, our study suggests that appropriate context fusion is an essential element of an accurate, fair, and transparent POI recommendation system. We highlight that while we have tested the efficacy of our context fusion method on two popular CA recommendation models in the POI domain, namely GeoSoCa and LORE, our system can be flexibly utilized to extend the capability of other CA algorithms.
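    A minimal sketch of the linear fusion idea using scikit-learn; this is not the authors' code, and the context score matrix and relevance labels below are random placeholders standing in for the per-context scores (e.g. GeoSoCa/LORE components) and historical check-ins.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_pairs = 1_000
        # One row per (user, POI) pair: geo, temporal, social, categorical scores.
        context_scores = rng.random((n_pairs, 4))
        # 1 if the user checked in at the POI, else 0 (hypothetical labels).
        observed_relevance = rng.integers(0, 2, n_pairs)

        # Fit one linear model per user (or per user group) on past interactions;
        # the learned coefficients are the fusion weights used at ranking time.
        fusion = LinearRegression().fit(context_scores, observed_relevance)
        print("learned context weights:", dict(zip(
            ["geographical", "temporal", "social", "categorical"], fusion.coef_)))

        # Fused relevance score for a new candidate POI.
        candidate = rng.random((1, 4))
        print("fused relevance:", fusion.predict(candidate)[0])

    The learned per-user (or per-group) coefficients serve both as the context weights that combine the scores and as the basis for the explanations described above, since they expose which context dominates a given recommendation.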

    The star formation history of BCGs to z = 1.8 from the SpARCS/SWIRE survey : evidence for significant in situ star formation at high redshift

    We present the results of a MIPS 24 μm study of the brightest cluster galaxies (BCGs) of 535 high-redshift galaxy clusters. The clusters are drawn from the Spitzer Adaptation of the Red-Sequence Cluster Survey, which effectively provides a sample selected on total stellar mass over 0.2 < z < 1.8. The fraction of BCGs with high infrared luminosity (log L_IR/L_⊙ > 12) increases rapidly with redshift. Above z ∼ 1, an average of ∼20% of the sample have 24 μm inferred infrared luminosities of L_IR > 10^12 L_⊙, while the fraction below z ∼ 1 exhibiting such luminosities is <1%. The Spitzer-IRAC colors indicate that the bulk of the 24 μm detected population is predominantly powered by star formation, with only 7/125 galaxies lying within the color region inhabited by active galactic nuclei (AGNs). Simple arguments limit the star formation activity to several hundred million years, and this may therefore be indicative of the timescale for AGN feedback to halt the star formation. Below redshift z ∼ 1, there is not enough star formation to significantly contribute to the overall stellar mass of the BCG population, and therefore BCG growth is likely dominated by dry mergers. Above z ∼ 1, however, the inferred star formation would double the stellar mass of the BCGs and is comparable to the mass assembly predicted by simulations through dry mergers. We cannot yet constrain the process driving the star formation for the overall sample, though a single object studied in detail is consistent with a gas-rich merger.
    Peer reviewed

    A Profile Likelihood Analysis of the Constrained MSSM with Genetic Algorithms

    The Constrained Minimal Supersymmetric Standard Model (CMSSM) is one of the simplest and most widely studied supersymmetric extensions to the standard model of particle physics. Nevertheless, current data do not sufficiently constrain the model parameters in a way completely independent of priors, statistical measures and scanning techniques. We present a new technique for scanning supersymmetric parameter spaces, optimised for frequentist profile likelihood analyses and based on Genetic Algorithms. We apply this technique to the CMSSM, taking into account existing collider and cosmological data in our global fit. We compare our method to the MultiNest algorithm, an efficient Bayesian technique, paying particular attention to the best-fit points and implications for particle masses at the LHC and dark matter searches. Our global best-fit point lies in the focus point region. We find many high-likelihood points in both the stau co-annihilation and focus point regions, including a previously neglected section of the co-annihilation region at large m_0. We show that there are many high-likelihood points in the CMSSM parameter space commonly missed by existing scanning techniques, especially at high masses. This has a significant influence on the derived confidence regions for parameters and observables, and can dramatically change the entire statistical inference of such scans.
    Comment: 47 pages, 8 figures; Fig. 8, Table 7 and more discussions added to Sec. 3.4.2 in response to referee's comments; accepted for publication in JHEP.
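    A minimal sketch of a genetic-algorithm scan of the kind described (selection, crossover, mutation over a bounded parameter box). This is not the paper's pipeline: the four parameters are stand-ins for m_0, m_1/2, A_0 and tan(beta), and the Gaussian "log-likelihood" is a toy placeholder for the full global fit to collider and cosmological data.

        import numpy as np

        rng = np.random.default_rng(42)
        # Hypothetical prior box: [m_0, m_1/2, A_0, tan(beta)].
        bounds = np.array([[0, 4000], [0, 2000], [-3000, 3000], [2, 60]], dtype=float)

        def log_likelihood(theta):
            # Toy unimodal likelihood centred on an arbitrary point (illustration only).
            centre = np.array([2000.0, 300.0, 0.0, 30.0])
            width = 0.1 * (bounds[:, 1] - bounds[:, 0])
            return -0.5 * np.sum(((theta - centre) / width) ** 2)

        population = rng.uniform(bounds[:, 0], bounds[:, 1], size=(100, 4))
        for generation in range(200):
            fitness = np.array([log_likelihood(p) for p in population])
            # Selection: keep the fitter half of the population.
            survivors = population[np.argsort(fitness)[-50:]]
            # Crossover: average randomly chosen pairs of survivors.
            parents = survivors[rng.integers(0, 50, size=(50, 2))]
            children = parents.mean(axis=1)
            # Mutation: small Gaussian kicks, clipped back into the prior box.
            children += rng.normal(0, 0.02 * (bounds[:, 1] - bounds[:, 0]), children.shape)
            children = np.clip(children, bounds[:, 0], bounds[:, 1])
            population = np.vstack([survivors, children])

        best = population[np.argmax([log_likelihood(p) for p in population])]
        print("best-fit point found:", best)

    In the actual analysis the fitness would be the profile likelihood built from the collider and cosmological observables mentioned above, and the highest-likelihood points found across generations define the frequentist confidence regions.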