Neutrinos and Primordial Nucleosynthesis
The importance of Big Bang Nucleosynthesis (BBN) as a unique tool for
studying neutrino properties is discussed, and the recent steps towards a
self-consistent and robust handling of the weak reaction decoupling from the
thermal bath, as well as of the neutrino reheating following e+e- annihilation, are summarized. We also emphasize the important role of the Cosmic
Microwave Background (CMB) anisotropy in providing an accurate and independent
determination of the baryon density parameter omega_b. BBN is presently a powerful parameter-free theory that can test the standard scenario of neutrino decoupling in the early Universe. Moreover, it can constrain new
physics in the neutrino sector. The perspectives for improvements in the next
years are outlined.
Comment: Talk given by G. Mangano at NOW2004, Conca Specchiulla, Otranto, Italy, September 2004. To appear in the Proceedings of the Workshop.
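As a back-of-the-envelope illustration of the neutrino reheating discussed above (a sketch of the textbook instantaneous-decoupling limit only, not the detailed treatment summarized in the talk): entropy conservation during e+e- annihilation gives T_nu/T_gamma = (4/11)^(1/3), i.e. N_eff = 3 exactly, and the residual heating of neutrinos raises this to roughly 3.04-3.05.

    # Instantaneous-decoupling estimate; non-instantaneous decoupling and QED
    # corrections (the effects the abstract refers to) shift N_eff to ~3.04-3.05.
    ratio = (4.0 / 11.0) ** (1.0 / 3.0)                # T_nu / T_gamma after e+e- annihilation
    rho_per_flavour = (7.0 / 8.0) * ratio**4           # neutrino/photon energy density per flavour
    print("T_nu/T_gamma =", round(ratio, 4))
    print("rho_nu/rho_gamma per flavour =", round(rho_per_flavour, 4))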
Neutrino decay as a possible interpretation to the MiniBooNE observation with unparticle scenario
In a new measurement of neutrino oscillations, the MiniBooNE Collaboration observes an excess of electron-like events at low energy, a phenomenon that may demand an explanation beyond the oscillation picture. We propose that a heavier neutrino decaying into a lighter one via the transition process \nu_h \to \nu_l + X, where X denotes any light products, could be a natural mechanism. The theoretical model we employ here is the unparticle scenario established by Georgi. We have studied two particular modes, \nu_\mu \to \nu_e + \Un and one other channel; unfortunately, the rate obtained from the computation is too small to explain the observation. Moreover, our results are consistent with the cosmological constraint on the neutrino lifetime and with the theoretical estimates made by other groups. We therefore conclude that, even though neutrino decay seems plausible in this case, it cannot be the source of the low-energy peak observed by the MiniBooNE Collaboration, and other mechanisms must be responsible for the phenomenon.
Comment: 14 pages, conclusions are changed; published version for EPJ
PArthENoPE: Public Algorithm Evaluating the Nucleosynthesis of Primordial Elements
We describe a program for computing the abundances of light elements produced
during Big Bang Nucleosynthesis which is publicly available at
http://parthenope.na.infn.it/. Starting from nuclear statistical equilibrium conditions, the program solves the set of coupled ordinary differential equations, follows the departure from chemical equilibrium of the nuclear species, and determines their asymptotic abundances as functions of several input cosmological parameters, such as the baryon density, the effective number of neutrinos, the value of the cosmological constant, and the neutrino chemical potential. The program requires commercial NAG library routines.
Comment: 18 pages, 2 figures. Version accepted by Comp. Phys. Comm. The code (and an updated manual) is publicly available at http://parthenope.na.infn.it
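As an orientation to the kind of calculation the program performs, the sketch below integrates only the very first step of a BBN network, the weak freeze-out of the neutron fraction, using standard textbook approximations (a Bernstein-type fit to the weak rates and the radiation-dominated time-temperature relation). It is a toy model in Python, not the PArthENoPE code itself, which follows the full nuclear network in Fortran with NAG routines.

    import numpy as np
    from scipy.integrate import solve_ivp

    Q     = 1.293    # neutron-proton mass difference (MeV)
    tau   = 880.2    # neutron lifetime (s)
    gstar = 10.75    # relativistic degrees of freedom before e+e- annihilation

    def lam_np(T):
        # approximate n -> p weak conversion rate (Bernstein-type fit), T in MeV
        z = Q / T
        return 255.0 / tau * (12.0 + 6.0 * z + z * z) / z**5

    def dXn_dT(T, Xn):
        # evolve the neutron fraction with temperature instead of time,
        # using t ~ 2.4/sqrt(g*) T^-2 s  =>  dt/dT = -4.8/sqrt(g*) T^-3
        lnp = lam_np(T)
        lpn = lnp * np.exp(-Q / T)                    # detailed balance for p -> n
        dXn_dt = -lnp * Xn + lpn * (1.0 - Xn)
        dt_dT = -4.8 / np.sqrt(gstar) / T**3
        return dXn_dt * dt_dT

    T_start, T_end = 10.0, 0.1                        # MeV
    Xn0 = [1.0 / (1.0 + np.exp(Q / T_start))]         # nuclear statistical equilibrium start
    sol = solve_ivp(dXn_dT, (T_start, T_end), Xn0, method="Radau", rtol=1e-8, atol=1e-12)
    print("neutron fraction at T = 0.1 MeV:", sol.y[0, -1])   # ~0.15 (free-neutron decay neglected)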
Constraining the cosmic radiation density due to lepton number with Big Bang Nucleosynthesis
The cosmic energy density in the form of radiation before and during Big Bang
Nucleosynthesis (BBN) is typically parameterized in terms of the effective
number of neutrinos N_eff. In the absence of extra degrees of freedom, this quantity depends upon the chemical potentials and the temperature characterizing the three active neutrino distributions, as well as on their possible non-thermal features. In the present analysis we determine the upper bounds
that BBN places on N_eff from primordial neutrino--antineutrino asymmetries,
with a careful treatment of the dynamics of neutrino oscillations. We consider
quite a wide range for the total lepton number in the neutrino sector, eta_nu = eta_{nu_e} + eta_{nu_mu} + eta_{nu_tau}, and for the initial electron neutrino asymmetry eta_{nu_e}^in, solving the corresponding kinetic equations which govern the dynamics of the neutrino (antineutrino) distributions in phase space due to collisions, pair processes, and flavor oscillations. New bounds on both the
total lepton number in the neutrino sector and the nu_e -bar{nu}_e asymmetry at
the onset of BBN are obtained fully exploiting the time evolution of neutrino
distributions, as well as the most recent determinations of primordial 2H/H
density ratio and 4He mass fraction. Note that, with the baryon density fixed to the value measured by WMAP, the 2H/H abundance plays a relevant role in constraining the allowed regions in the eta_nu - eta_{nu_e}^in plane. These bounds fix the
maximum contribution of neutrinos with primordial asymmetries to N_eff as a
function of the mixing parameter theta_13, and point out the upper bound N_eff
< 3.4. Comparing these results with the forthcoming measurement of N_eff by the
Planck satellite will likely provide insight on the nature of the radiation
content of the universe.
Comment: 17 pages, 9 figures, version to be published in JCAP
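For orientation, the textbook relation behind the parameterization used above: a relic neutrino flavour with degeneracy parameter xi = mu/T contributes Delta N_eff = (30/7)(xi/pi)^2 + (15/7)(xi/pi)^4 of extra radiation. The sketch below only evaluates this standard formula for fully thermal distributions; the bound quoted in the abstract comes from the full kinetic evolution with flavour oscillations, which this does not reproduce.

    import math

    def delta_neff(xi):
        # extra radiation from one flavour with degeneracy parameter xi = mu/T
        x = xi / math.pi
        return 30.0 / 7.0 * x**2 + 15.0 / 7.0 * x**4

    for xi in (0.1, 0.3, 0.5):
        # example: the same degeneracy shared by all three flavours after oscillations
        print(f"xi = {xi:.1f}  ->  Delta N_eff = {3.0 * delta_neff(xi):.3f}")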
Dynamical Dark Energy model parameters with or without massive neutrinos
We use WMAP5 and other cosmological data to constrain model parameters in
quintessence cosmologies, focusing also on their shift when we allow for
non-vanishing neutrino masses. The Ratra-Peebles (RP) and SUGRA potentials are
used here as examples of a slowly or rapidly varying equation-of-state parameter w(a). Both
potentials depend on an energy scale \Lambda. Here we confirm the results of
previous analyses with WMAP3 data on the upper limits on \Lambda, which turn
out to be rather small (down to ~10^{-9} in RP cosmologies and ~10^{-5} for
SUGRA). Our constraints on \Lambda are not heavily affected by the inclusion of
neutrino mass as a free parameter. On the contrary, when the neutrino mass
degree of freedom is opened, significant shifts in the best-fit values of other
parameters occur.
Comment: 9 pages, 3 figures, submitted to JCAP
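For reference, the two potentials named above in their usual forms (Ratra-Peebles and Brax-Martin SUGRA), with the energy scale Lambda and the exponent alpha being the quantities constrained. The snippet below simply writes the potentials down in reduced Planck units as an illustration, not as the analysis pipeline used in the paper.

    import numpy as np

    def V_rp(phi, Lam, alpha):
        # Ratra-Peebles inverse power law: V = Lambda^(4+alpha) / phi^alpha
        return Lam**(4 + alpha) / phi**alpha

    def V_sugra(phi, Lam, alpha):
        # SUGRA (Brax-Martin) correction: same power law times exp(phi^2 / 2),
        # with kappa^2 = 8 pi G = 1 in these units
        return V_rp(phi, Lam, alpha) * np.exp(0.5 * phi**2)

    phi = np.linspace(0.2, 2.0, 4)                    # illustrative field values
    print("RP   :", V_rp(phi, 1.0e-3, 2.0))
    print("SUGRA:", V_sugra(phi, 1.0e-3, 2.0))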
Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo
Monte Carlo techniques have been used to evaluate the statistical and
systematic uncertainties in the helium abundances derived from extragalactic
H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte
Carlo (MCMC) methods to efficiently explore the parameter space and determine
the helium abundance, the physical parameters, and the uncertainties derived
from observations of metal poor nebulae. Experiments with synthetic data show
that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases arising from unphysical regions of parameter space. The MCMC analysis allows a detailed exploration of
degeneracies, and, in particular, a false minimum that occurs at large values
of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in
a very conservative manner, produces negligible bias and effectively eliminates
the false minima occurring at large optical depth. We perform a frequentist
analysis on data from several "high quality" systems. Likelihood plots
illustrate degeneracies, asymmetries, and limits of the determination. In
agreement with previous work, we find relatively large systematic errors,
limiting the precision of the primordial helium abundance for currently
available spectra.
Comment: 25 pages, 11 figures
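The kind of sampler referred to above can be illustrated with a few lines of Metropolis-Hastings. The toy likelihood below, a Gaussian in a helium abundance y and an electron temperature T_e with made-up "data", is only a placeholder for the real emission-line model of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_like(theta):
        y, Te = theta
        # placeholder "data": y = 0.250 +/- 0.010, T_e = 15000 +/- 500 K (illustrative only)
        return -0.5 * (((y - 0.250) / 0.010)**2 + ((Te - 15000.0) / 500.0)**2)

    theta = np.array([0.24, 14000.0])        # starting point
    step  = np.array([0.005, 250.0])         # Gaussian proposal widths
    logp  = log_like(theta)
    chain = []
    for _ in range(20000):
        prop = theta + step * rng.standard_normal(2)
        lp = log_like(prop)
        if np.log(rng.random()) < lp - logp:  # Metropolis acceptance rule
            theta, logp = prop, lp
        chain.append(theta.copy())

    chain = np.array(chain[2000:])           # drop burn-in
    print("posterior mean y  :", chain[:, 0].mean())
    print("posterior mean T_e:", chain[:, 1].mean())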
Creation of the CMB spectrum: precise analytic solutions for the blackbody photosphere
The blackbody spectrum of the CMB was created in the blackbody photosphere at
redshifts z>2x10^6. At these early times, the Universe was dense and hot enough
that complete thermal equilibrium between baryonic matter (electrons and ions)
and photons could be established. Any perturbation away from the blackbody
spectrum was suppressed exponentially. New physics, for example annihilation
and decay of dark matter, can add energy and photons to the CMB at redshifts z>10^5 and result in a Bose-Einstein spectrum with a non-zero chemical potential (mu). Precise evolution of the CMB spectrum around the critical redshift of z~2x10^6 is required in order to calculate the mu-type spectral distortion
and constrain the underlying new physics. Although numerical calculation of
important processes involved (double Compton process, comptonization and
bremsstrahlung) is not difficult, analytic solutions are much faster and easier
to calculate and provide valuable physical insights. We provide precise (better
than 1%) analytic solutions for the decay of mu, created at an earlier
epoch, including all three processes, double Compton, Compton scattering on
thermal electrons and bremsstrahlung in the limit of small distortions. This is
a significant improvement over the existing solutions with accuracy ~10% or
worse. We also give a census of important sources of energy injection into the CMB in standard cosmology. In particular, calculations of distortions from
electron-positron annihilation and primordial nucleosynthesis illustrate in a
dramatic way the strength of the equilibrium restoring processes in the early
Universe. Finally, we point out the triple degeneracy in standard cosmology, i.e., the mu and y distortions from adiabatic cooling of baryons and electrons, Silk damping, and annihilation of thermally produced WIMP dark matter are of similar order of magnitude (~ 10^{-8}-10^{-10}).
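Two of the standard facts used above can be written down directly: a chemical potential mu turns the blackbody occupation 1/(e^x - 1) into the Bose-Einstein form 1/(e^(x+mu) - 1), and energy released after thermalisation becomes inefficient (but well before recombination) produces mu of order 1.4 times the fractional energy injection. The snippet below, with purely illustrative numbers, sketches these relations and not the full evolution solved in the paper.

    import numpy as np

    def occupation(x, mu=0.0):
        # photon occupation number; x = h*nu / (k*T), mu = dimensionless chemical potential
        return 1.0 / (np.exp(x + mu) - 1.0)

    mu = 1.0e-5                      # a small, illustrative distortion
    for x in (0.5, 1.0, 5.0):
        n0, n1 = occupation(x), occupation(x, mu)
        print(f"x = {x}: fractional change in occupation = {(n1 - n0) / n0:.2e}")

    # order-of-magnitude chemical potential from a fractional energy injection
    delta_E_over_E = 1.0e-8          # illustrative value in the range quoted above
    print("mu ~", 1.4 * delta_E_over_E)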
Sterile neutrinos with eV masses in cosmology -- how disfavoured exactly?
We study cosmological models that contain sterile neutrinos with eV-range
masses as suggested by reactor and short-baseline oscillation data. We confront
these models with both precision cosmological data (probing the CMB decoupling
epoch) and light-element abundances (probing the BBN epoch). In the minimal
LambdaCDM model, such sterile neutrinos are strongly disfavoured by current
data because they contribute too much hot dark matter. However, if the cosmological framework is extended to include additional relativistic degrees of freedom beyond the three standard neutrinos and the putative sterile neutrinos, then the hot dark matter constraint on the sterile states is considerably relaxed. A further improvement is achieved by allowing a dark
energy equation of state parameter w<-1. While BBN strongly disfavours extra
radiation beyond the assumed eV-mass sterile neutrino, this constraint can be
circumvented by a small nu_e degeneracy. Any model containing eV-mass sterile
neutrinos implies also strong modifications of other cosmological parameters.
Notably, the inferred cold dark matter density can shift up by 20 to 75%
relative to the standard LambdaCDM value.
Comment: 14 pages, 6 figures, v2: minor changes, matches version accepted for publication in JCAP
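The "too much hot dark matter" statement has a simple back-of-the-envelope version: a fully thermalised neutrino-like species of mass m contributes Omega h^2 of roughly m / 94 eV, so an eV-scale sterile state adds of order 0.01 to the matter budget. The sketch below evaluates only this standard estimate, not the full cosmological fit of the paper.

    def omega_h2(mass_ev, thermalisation=1.0):
        # standard relic-neutrino relation Omega h^2 ~ m / 94 eV, scaled by the
        # degree of thermalisation of the sterile state (1.0 = fully thermalised)
        return thermalisation * mass_ev / 94.0

    for m in (0.5, 1.0, 2.0):
        print(f"m_s = {m:.1f} eV  ->  Omega_s h^2 ~ {omega_h2(m):.4f}")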
Galactic-Centre Gamma Rays in CMSSM Dark Matter Scenarios
We study the production of gamma rays via LSP annihilations in the core of
the Galaxy as a possible experimental signature of the constrained minimal
supersymmetric extension of the Standard Model (CMSSM), in which
supersymmetry-breaking parameters are assumed to be universal at the GUT scale,
assuming also that the LSP is the lightest neutralino chi. The part of the
CMSSM parameter space that is compatible with the measured astrophysical
density of cold dark matter is known to include a stau_1 - chi coannihilation
strip, a focus-point strip where chi has an enhanced Higgsino component, and a
funnel at large tanb where the annihilation rate is enhanced by the poles of
nearby heavy MSSM Higgs bosons, A/H. We calculate the total annihilation rates,
the fractions of annihilations into different Standard Model final states and
the resulting fluxes of gamma rays for CMSSM scenarios along these strips. We
observe that typical annihilation rates are much smaller in the coannihilation
strip for tanb = 10 than along the focus-point strip or for tanb = 55, and that
the annihilation branching ratios differ greatly between the different dark
matter strips. Whereas the current Fermi-LAT data are not sensitive to any of
the CMSSM scenarios studied, and the calculated gamma-ray fluxes are probably
unobservably low along the coannihilation strip for tanb = 10, we find that
substantial portions of the focus-point strips and rapid-annihilation funnel regions could come under pressure from several more years of Fermi-LAT data, if
understanding of the astrophysical background and/or systematic uncertainties
can be improved in parallel.
Comment: 33 pages, 12 figures, comments and references added, version to appear in JCAP
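The quantity being computed along the strips is, schematically, the standard annihilation flux dPhi/dE = <sigma v> / (8 pi m_chi^2) x dN/dE x J, with J the line-of-sight integral of the squared dark-matter density towards the Galactic Centre. The numbers below are illustrative placeholders, not values taken from the paper.

    import math

    def dphi_dE(sigma_v, m_chi, dN_dE, J):
        # sigma_v in cm^3 s^-1, m_chi in GeV, dN_dE in photons/GeV, J in GeV^2 cm^-5
        return sigma_v / (8.0 * math.pi * m_chi**2) * dN_dE * J

    sigma_v = 3.0e-26     # "thermal" annihilation cross section, cm^3/s
    m_chi   = 500.0       # illustrative neutralino mass, GeV
    dN_dE   = 0.1         # illustrative photon spectrum value at some energy, GeV^-1
    J       = 1.0e23      # assumed J-factor for the chosen region around the Galactic Centre

    print(f"dPhi/dE ~ {dphi_dE(sigma_v, m_chi, dN_dE, J):.2e} photons cm^-2 s^-1 GeV^-1")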
