
    Neutral hydrogen in galaxy halos at the peak of the cosmic star formation history

    We use high-resolution cosmological zoom-in simulations from the FIRE project to make predictions for the covering fractions of neutral hydrogen around galaxies at z=2-4. These simulations resolve the interstellar medium of galaxies and explicitly implement a comprehensive set of stellar feedback mechanisms. Our simulation sample consists of 16 main halos covering the mass range M_h~10^9-6x10^12 Msun at z=2, including 12 halos in the mass range M_h~10^11-10^12 Msun corresponding to Lyman break galaxies (LBGs). We process our simulations with a ray-tracing method to compute the ionization state of the gas. Galactic winds increase the HI covering fractions in galaxy halos by direct ejection of cool gas from galaxies and through interactions with gas inflowing from the intergalactic medium. Our simulations predict HI covering fractions for Lyman limit systems (LLSs) consistent with measurements around z~2-2.5 LBGs; these covering fractions are a factor ~2 higher than our previous calculations without galactic winds. The fractions of HI absorbers arising in inflows and in outflows are on average ~50% but exhibit significant time variability, ranging from ~10% to ~90%. For our most massive halos, we find a factor ~3 deficit in the LLS covering fraction relative to what is measured around quasars at z~2, suggesting that the presence of a quasar may affect the properties of halo gas on ~100 kpc scales. The predicted covering fractions, which decrease with time, peak at M_h~10^11-10^12 Msun, near the peak of the star formation efficiency in dark matter halos. In our simulations, star formation and galactic outflows are highly time dependent; HI covering fractions are also time variable, but less so because they represent averages over large areas. Comment: 20 pages, including 11 figures. MNRAS, in press.
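    For readers unfamiliar with the quantity being predicted, the sketch below shows how an HI covering fraction is typically computed from a projected column density map: the fraction of sightlines within a chosen impact parameter whose column exceeds the Lyman-limit threshold N_HI > 10^17.2 cm^-2. This is an illustrative reconstruction, not the paper's pipeline; the map, pixel scale, and radius are made-up inputs.

```python
# Minimal sketch: covering fraction from a projected HI column density map.
# The LLS threshold N_HI > 10^17.2 cm^-2 is standard; everything else here
# (map, centring, pixel scale) is a toy assumption.
import numpy as np

def covering_fraction(N_HI_map, kpc_per_pixel, R_kpc, N_thresh=10**17.2):
    """Fraction of pixels inside projected radius R_kpc (centred on the
    halo) whose column density exceeds N_thresh."""
    ny, nx = N_HI_map.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx / 2, y - ny / 2) * kpc_per_pixel
    inside = r < R_kpc
    return np.mean(N_HI_map[inside] > N_thresh)

# Toy example: a random lognormal column-density map with 1 kpc pixels.
rng = np.random.default_rng(1)
N_map = 10 ** rng.normal(16.5, 1.0, size=(200, 200))
print(covering_fraction(N_map, kpc_per_pixel=1.0, R_kpc=90.0))
```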

    Nonparametric estimation of flood quantiles using the kernel method

    Determining the flood discharge for a given return period requires estimating the distribution of annual floods. The use of nonparametric distributions, as an alternative to fitted statistical distributions, is examined in this work. The main challenge in kernel estimation is the computation of the parameter that determines the degree of smoothing of the nonparametric density. We compared several methods and retained the plug-in method and the least-squares cross-validation method as the most promising. Several interesting conclusions were drawn from this study. Among others, for the estimation of flood quantiles, it appears preferable to consider estimators based directly on the distribution function rather than on the density function. A comparison of the plug-in method with the fitting of three statistical distributions led to the conclusion that the kernel method is an attractive alternative to traditional parametric methods.

    Traditional flood frequency analysis involves fitting a statistical distribution to observed annual peak flows. The choice of statistical distribution is crucial, since it can have a significant impact on design flow estimates. Unfortunately, it is often difficult to determine in an objective way which distribution is the most appropriate. To avoid the inherent arbitrariness associated with the choice of distribution in parametric frequency analysis, one can employ a method based on nonparametric density estimation. Although potentially subject to a larger standard error of quantile estimates, the use of nonparametric densities eliminates the need to select a particular distribution and the potential bias associated with a wrong choice.

    The kernel method is a conceptually simple approach, similar in nature to a smoothed histogram. The critical parameter in kernel estimation is the smoothing parameter, which determines the degree of smoothing. Methods for estimating the smoothing parameter have already been compared in a number of statistical papers. The novelty of our work is its particular emphasis on quantile estimation, especially the estimation of quantiles outside the range of observed data. The flood estimation problem is unique in this sense and has been the motivating factor for this study.

    Seven methods for estimating the smoothing parameter, all based on goodness-of-fit measures, are compared in the paper: the least-squares cross-validation method, the maximum likelihood cross-validation method, Adamowski's (1985) method, a plug-in method developed by Altman and Leger (1995) and modified by the authors (Faucher et al., 2001), Breiman's goodness-of-fit criterion method (Breiman, 1977), the variable-kernel maximum likelihood method, and the variable-kernel least-squares cross-validation method. The estimation methods can be classified according to whether they are based on fixed or variable kernels, and whether they assess the goodness-of-fit of the density function or of the cumulative distribution function.

    The quality of the different estimation methods was explored in a Monte Carlo study. One hundred (100) samples of sizes 10, 20, 50, and 100 were simulated from an LP3 distribution. The nonparametric estimation methods were then applied to each of the simulated samples, and quantiles with return periods of 10, 20, 50, 100, 200, and 1000 years were estimated. Bias and root-mean-square error of the quantile estimates were the key figures used to compare methods. The results of the study can be summarized as follows:

    1. Comparison of kernels. The literature reports that the kernel choice is relatively unimportant compared to the choice of the smoothing parameter. To determine whether this assertion also holds for the estimation of large quantiles outside the range of the data, we compared six kernel candidates. We found no major differences between the biweight, Normal, Epanechnikov, and EV1 kernels; however, the rectangular and Cauchy kernels should be avoided.

    2. Comparison of sample sizes. The quality of estimates, whether parametric or nonparametric, deteriorates as sample size decreases. To examine the degree of sensitivity to sample size, we compared estimates of the 200-year event obtained by assuming a GEV distribution and a nonparametric density estimated by maximum likelihood cross-validation. The main conclusion is that the root-mean-square error of the parametric model (GEV) is more sensitive to sample size than that of the nonparametric model.

    3. Comparison of estimators of the smoothing parameter. Among the methods considered in the study, the plug-in method developed by Altman and Leger (1995) and modified by the authors (Faucher et al., 2001) performed best, along with the least-squares cross-validation method, which had a similar performance. Adamowski's method had to be excluded because it consistently failed to converge. The methods based on variable kernels generally did not perform as well as the fixed-kernel methods.

    4. Comparison of density-based and cumulative-distribution-based methods. The only cumulative-distribution-based method retained in the comparison study was the plug-in method; Adamowski's method is also based on the cumulative distribution function but was rejected for the reason mentioned above. Although the plug-in method did well in the comparison, it is not clear whether this can be attributed to its being based on estimation of the cumulative distribution function. One could nevertheless hypothesize that when the objective is to estimate quantiles, a method that emphasizes the cumulative distribution function rather than the density should have certain advantages.

    5. Comparison of parametric and nonparametric methods. Nonparametric methods were compared with conventional parametric methods. The LP3, two-parameter lognormal, and GEV distributions were used to fit the simulated samples. The nonparametric methods were found to perform quite similarly to the parametric methods. This is a significant result: the data were generated from an LP3 distribution, so one would intuitively expect the LP3 model to be superior, which was not the case. In actual applications, flood distributions are often irregular, and in such cases nonparametric methods would likely be superior to parametric methods.
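    As a concrete illustration of the approach the abstract describes, the sketch below estimates flood quantiles from a kernel CDF with a Gaussian kernel, choosing the bandwidth by least-squares cross-validation. It is a minimal reconstruction for illustration, not the authors' code; the kernel choice, the bandwidth search grid, and the synthetic lognormal data are assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def lscv_score(h, x):
    """Least-squares cross-validation score for a Gaussian kernel density."""
    n = len(x)
    d = x[:, None] - x[None, :]                       # pairwise differences
    # Closed form for the integral of fhat^2 with Gaussian kernels:
    int_f2 = norm.pdf(d, scale=np.sqrt(2.0) * h).sum() / n**2
    # Leave-one-out density estimate at each observation:
    k = norm.pdf(d, scale=h)
    loo = (k.sum(axis=1) - norm.pdf(0.0, scale=h)) / (n - 1)
    return int_f2 - 2.0 * loo.mean()

def kernel_quantile(x, T, h):
    """Return-period-T quantile from the kernel CDF estimate."""
    p = 1.0 - 1.0 / T
    f = lambda q: norm.cdf((q - x) / h).mean() - p
    return brentq(f, x.min() - 10 * h, x.max() + 10 * h)

# Synthetic "annual peak flows" (assumed lognormal, for illustration only).
rng = np.random.default_rng(42)
flows = np.sort(rng.lognormal(mean=6.0, sigma=0.5, size=50))

# Crude grid search for the LSCV bandwidth.
grid = np.linspace(0.05, 2.0, 200) * flows.std(ddof=1) * len(flows) ** -0.2
h_opt = grid[np.argmin([lscv_score(h, flows) for h in grid])]
print(f"h = {h_opt:.1f}, Q100 = {kernel_quantile(flows, 100, h_opt):.0f}")
```

    Note that the quantile is read off the kernel estimate of the cumulative distribution function rather than the density, mirroring the paper's finding that distribution-based estimators are preferable for quantile estimation.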

    Cosmic ray feedback in the FIRE simulations: constraining cosmic ray propagation with GeV gamma ray emission

    We present the implementation and the first results of cosmic ray (CR) feedback in the Feedback In Realistic Environments (FIRE) simulations. We investigate CR feedback in non-cosmological simulations of dwarf, sub-L* starburst, and L* galaxies with different propagation models, including advection, isotropic and anisotropic diffusion, and streaming along field lines with different transport coefficients. We simulate CR diffusion and streaming simultaneously in galaxies with high resolution, using a two-moment method. We forward-model and compare to observations of γ-ray emission from nearby and starburst galaxies. We reproduce the γ-ray observations of dwarf and L* galaxies with a constant isotropic diffusion coefficient κ ~ 3x10^29 cm^2 s^-1. Advection-only and streaming-only models produce order-of-magnitude too large γ-ray luminosities in dwarf and L* galaxies. We show that in models that match the γ-ray observations, most CRs escape low-gas-density galaxies (e.g. dwarfs) before suffering significant collisional losses, while starburst galaxies are CR proton calorimeters. While adiabatic losses can be significant, they occur only after CRs escape the galaxies, so they are only of secondary importance for γ-ray emissivities. Models where CRs are "trapped" in the star-forming disk have lower star formation efficiency, but these models are ruled out by γ-ray observations. For models with constant κ that match the γ-ray observations, CRs form extended halos with scale heights of several kpc to several tens of kpc. Comment: 31 pages, 26 figures, accepted for publication in MNRAS
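    The escape-versus-losses argument in the abstract can be made concrete with a back-of-the-envelope comparison of timescales. The sketch below is an order-of-magnitude illustration, not the paper's calculation; the slab geometry, scale height, gas density, and the pion-loss scaling t_pp ~ 5x10^7 yr / (n / cm^-3) are assumptions.

```python
# Compare the diffusive escape time t_esc ~ H^2 / kappa with the hadronic
# (pion-production) loss time for CR protons.
YR_S = 3.156e7          # seconds per year
KPC_CM = 3.086e21       # centimetres per kpc

def escape_time_yr(H_kpc, kappa_cm2_s):
    """Diffusive escape time from a slab of half-thickness H_kpc."""
    return (H_kpc * KPC_CM) ** 2 / kappa_cm2_s / YR_S

def pp_loss_time_yr(n_cm3):
    """Approximate pion-production loss time for CR protons."""
    return 5e7 / n_cm3

# Dwarf-like conditions (assumed): ~1 kpc scale height, n ~ 0.1 cm^-3,
# and the diffusion coefficient favoured in the abstract.
kappa = 3e29
t_esc = escape_time_yr(1.0, kappa)
t_pp = pp_loss_time_yr(0.1)
print(f"t_esc ~ {t_esc:.1e} yr, t_pp ~ {t_pp:.1e} yr")
# t_esc << t_pp here, so most CRs escape before colliding -- consistent
# with the low gamma-ray luminosities of dwarfs described above.
```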

    Fermi Gamma-ray Haze via Dark Matter and Millisecond Pulsars

    We study possible astrophysical and dark matter (DM) explanations for the Fermi gamma-ray haze in the Milky Way halo. As representatives of various DM models, we consider DM particles annihilating into W+W-, b-bbar, and e+e-. In the first two cases, the prompt gamma-ray emission from DM annihilations is significant or even dominant at E > 10 GeV, while inverse Compton scattering (ICS) from annihilating DM products is insignificant. For the e+e- annihilation mode, we require a boost factor of order 100 to obtain a significant contribution to the gamma-ray haze from ICS photons. Possible astrophysical sources of high-energy particles at high latitudes include type Ia supernovae (SNe) and millisecond pulsars (MSPs). Based on our current understanding of type Ia SN rates, they do not contribute significantly to the gamma-ray flux in the halo of the Milky Way. As the MSP population in the stellar halo of the Milky Way is not well constrained, MSPs may be a viable source of gamma-rays at high latitudes, provided that there are ~20,000-60,000 MSPs in the Milky Way stellar halo. In this case, pulsed gamma-ray emission from MSPs can contribute to gamma-rays around a few GeV, while the ICS photons from MSP electrons and positrons may be significant at all energies in the gamma-ray haze. The plausibility of such a population of MSPs is discussed. Consistency with the Wilkinson Microwave Anisotropy Probe (WMAP) microwave haze requires that either a significant fraction of MSP spin-down energy is converted into e+e- flux or the DM annihilates predominantly into leptons with a boost factor of order 100. Comment: 18 pages, 1 table, 5 figures; v2: references and a few discussions added; v3: minor changes
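    For reference, the boost factor enters the prompt annihilation signal linearly. The following is the standard parameterization of the DM annihilation flux for a self-conjugate particle, included for orientation rather than quoted from the paper:

```latex
% Standard prompt annihilation flux (self-conjugate DM), showing where the
% boost factor B enters linearly:
\[
  \frac{d\Phi_\gamma}{dE}
  \;=\;
  \frac{B \,\langle \sigma v \rangle}{8\pi\, m_\chi^2}\,
  \frac{dN_\gamma}{dE}
  \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho_\chi^2(r)\, dl .
\]
% A boost factor B ~ 100, as invoked for the e+e- channel, therefore
% rescales the predicted flux by two orders of magnitude.
```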

    Field-free two-direction alignment alternation of linear molecules by elliptic laser pulses

    We show that a linear molecule subjected to a short, specific elliptically polarized laser field yields postpulse revivals exhibiting alignment alternately located along the orthogonal axis and the major axis of the ellipse. The effect is demonstrated experimentally by measuring the optical Kerr effect along two different axes. The conditions ensuring an optimal field-free alternation of high alignments along both directions are derived. Comment: 5 pages, 4 color figures
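    For orientation, the standard textbook quantities behind this result (not specific to the paper) are the axis-resolved alignment measure and the rotational revival period:

```latex
% Field-free alignment of a linear molecule along laboratory axis i is
% quantified by
\[
  A_i(t) \;=\; \langle \cos^2\theta_i \rangle(t), \qquad i \in \{x, y, z\},
\]
% and postpulse revivals recur with the rotational revival period
\[
  T_{\rm rev} \;=\; \frac{1}{2 B c},
\]
% where B is the rotational constant in cm^{-1} and c the speed of light
% (e.g. ~8.4 ps for N2). "Alternation" means A along the ellipse's major
% axis and A along the orthogonal axis peak at different times within each
% revival.
```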

    Dwarf Galaxy Mass Estimators vs. Cosmological Simulations

    We use a suite of high-resolution cosmological dwarf galaxy simulations to test the accuracy of commonly used mass estimators from Walker et al. (2009) and Wolf et al. (2010), both of which depend on the observed line-of-sight velocity dispersion and the 2D half-light radius of the galaxy, R_e. The simulations are part of the Feedback in Realistic Environments (FIRE) project and include twelve systems with stellar masses spanning 10^5 - 10^7 Msun that have structural and kinematic properties similar to those of observed dispersion-supported dwarfs. Both estimators are found to be quite accurate: M_Wolf/M_true = 0.98^{+0.19}_{-0.12} and M_Walker/M_true = 1.07^{+0.21}_{-0.15}, with errors reflecting the 68% range over all simulations. The excellent performance of these estimators is remarkable given that they each assume spherical symmetry, a supposition that is broken in our simulated galaxies. Though our dwarfs have negligible rotation support, their 3D stellar distributions are flattened, with short-to-long axis ratios c/a ~ 0.4-0.7. The accuracy of the estimators shows no trend with asphericity. Our simulated galaxies have sphericalized 3D stellar profiles that follow a nearly universal form, transitioning from a core at small radius to a steep fall-off proportional to r^-4.2 at large r; in projection they are well fit by Sérsic profiles. We find that the most important empirical quantity affecting mass estimator accuracy is R_e. Determining R_e by an analytic fit to the surface density profile produces a better estimated mass than if the half-light radius is determined via direct summation. Comment: Submitted to MNRAS. 11 pages, 12 figures, comments welcome
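    The two estimators have simple closed forms; the sketch below implements them as they are commonly quoted, with the caveat that radius conventions differ between papers, so the prefactors should be checked against the originals. The Fornax-like numbers in the example are illustrative only.

```python
# Dispersion-based dynamical mass estimators as commonly quoted
# (Walker et al. 2009; Wolf et al. 2010). Prefactors are assumptions to
# verify against the original papers.
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def m_walker(sigma_los, Re_kpc):
    """Walker et al. (2009): mass within the projected half-light radius,
    M(R_e) = 2.5 sigma^2 R_e / G."""
    return 2.5 * sigma_los**2 * Re_kpc / G

def m_wolf(sigma_los, Re_kpc):
    """Wolf et al. (2010): mass within the 3D half-light radius
    r_1/2 ~ (4/3) R_e, i.e. M_1/2 ~ 4 sigma^2 R_e / G."""
    return 4.0 * sigma_los**2 * Re_kpc / G

# Example: a Fornax-like dwarf with sigma ~ 10 km/s and R_e ~ 0.7 kpc.
print(f"Walker: {m_walker(10.0, 0.7):.2e} Msun")
print(f"Wolf:   {m_wolf(10.0, 0.7):.2e} Msun")
```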

    Stability of Relativistic Matter with Magnetic Fields for Nuclear Charges up to the Critical Value

    We give a proof of stability of relativistic matter with magnetic fields all the way up to the critical value of the nuclear charge, Zα = 2/π. Comment: LaTeX2e, 12 pages
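    For context (a standard background fact, not the paper's argument): the value 2/π is already critical for the one-body relativistic Coulomb problem.

```latex
% The Chandrasekhar (Herbst) operator for a single relativistic electron
% in the field of a nucleus of charge Z,
\[
  H \;=\; \sqrt{-\Delta} \;-\; \frac{Z\alpha}{|x|},
\]
% is bounded from below (indeed nonnegative) if and only if
\[
  Z\alpha \;\le\; \frac{2}{\pi},
\]
% which is why Z\alpha = 2/\pi is the natural critical nuclear charge for
% stability questions of relativistic matter.
```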

    What Can Information Encapsulation Tell Us About Emotional Rationality?

    What can features of cognitive architecture, e.g. the information encapsulation of certain emotion-processing systems, tell us about emotional rationality? de Sousa proposes the following hypothesis: “the role of emotions is to supply the insufficiency of reason by imitating the encapsulation of perceptual modes” (de Sousa 1987: 195). Very roughly, emotion processing can sometimes occur in a way that is insensitive to what an agent already knows, and such processing can assist reasoning by restricting the response-options she considers. This paper aims to provide an exposition and assessment of de Sousa’s hypothesis. I argue that information encapsulation is not essential to emotion-driven reasoning, as emotions can determine the relevance of response-options even without being encapsulated. However, I argue that encapsulation can still play a role in assisting reasoning by restricting response-options more efficiently, and in a way that ensures the options emotions deem relevant are not overridden by what the agent knows. I end by briefly explaining why this very feature also helps explain how emotions can, on occasion, hinder reasoning.

    Strongly Time-Variable Ultra-Violet Metal Line Emission from the Circum-Galactic Medium of High-Redshift Galaxies

    We use cosmological simulations from the Feedback In Realistic Environments (FIRE) project, which implement a comprehensive set of stellar feedback processes, to study ultra-violet (UV) metal line emission from the circum-galactic medium of high-redshift (z=2-4) galaxies. Our simulations cover the halo mass range Mh ~ 2x10^11 - 8.5x10^12 Msun at z=2, representative of Lyman break galaxies. Of the transitions we analyze, the low-ionization C III (977 Å) and Si III (1207 Å) emission lines are the most luminous, with C IV (1548 Å) and Si IV (1394 Å) also showing interesting spatially extended structures. The more massive halos are on average more UV-luminous. The UV metal line emission from galactic halos in our simulations arises primarily from collisionally ionized gas and is strongly time variable, with peak-to-trough variations of up to ~2 dex. The peaks of UV metal line luminosity correspond closely to massive and energetic mass outflow events, which follow bursts of star formation and inject sufficient energy into galactic halos to power the metal line emission. The strong time variability implies that even some relatively low-mass halos may be detectable. Conversely, flux-limited samples will be biased toward halos whose central galaxy has recently experienced a strong burst of star formation. Spatially extended UV metal line emission around high-redshift galaxies should be detectable by current and upcoming integral field spectrographs such as the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope and the Keck Cosmic Web Imager (KCWI). Comment: 16 pages, 8 figures, accepted for publication in MNRAS
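    To connect predicted line luminosities to detectability with instruments like MUSE or KCWI, the sketch below converts a halo's line luminosity into a mean observed surface brightness. It is an illustrative estimate, not from the paper; the luminosity, redshift, emitting radius, and the choice of Planck15 cosmology are all assumptions.

```python
# Mean surface brightness of spatially extended line emission, assuming
# the luminosity is spread uniformly over a projected circular region.
import numpy as np
from astropy import units as u
from astropy.cosmology import Planck15

def surface_brightness(L_line, z, radius):
    """Mean surface brightness (erg/s/cm^2/arcsec^2) for line luminosity
    L_line spread over a projected circle of the given proper radius."""
    d_L = Planck15.luminosity_distance(z).to(u.cm)
    flux = L_line / (4 * np.pi * d_L**2)
    # Convert the proper radius to an angular size, then to a solid angle.
    kpc_per_arcsec = Planck15.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    area = np.pi * (radius / kpc_per_arcsec) ** 2
    return (flux / area).to(u.erg / u.s / u.cm**2 / u.arcsec**2)

# Hypothetical numbers: a 10^42 erg/s C III halo of 50 kpc radius at z = 2.5.
print(surface_brightness(1e42 * u.erg / u.s, 2.5, 50 * u.kpc))
```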