
    Molecular alignment and filamentation: comparison between weak and strong field models

    The impact of nonadiabatic laser-induced molecular alignment on filamentation is numerically studied. Weak- and strong-field models of impulsive molecular alignment are compared in the context of nonlinear pulse propagation. It is shown that the widely used weak-field model describing the refractive index modification induced by impulsive molecular alignment accurately reproduces the propagation dynamics, provided that only a single pulse is involved in the experiment. By contrast, it fails to reproduce the nonlinear propagation experienced by an intense laser pulse traveling in the wake of a second strong laser pulse. The discrepancy depends on the relative delay between the two pulses and is maximal for delays corresponding to half the rotational period of the molecule.
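    For context, the weak-field description referred to above is usually written in terms of the expectation value of cos^2(theta); a common form of the alignment-induced index change (our notation, Gaussian units, not quoted from the paper) is:

```latex
% Weak-field refractive index change from impulsive molecular alignment:
% N: molecular number density, \Delta\alpha: polarizability anisotropy, n_0: linear index.
\Delta n_{\rm align}(t) \;\simeq\; \frac{2\pi N\,\Delta\alpha}{n_0}
\left( \langle\cos^2\theta\rangle(t) - \tfrac{1}{3} \right)
```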

    Feedback-regulated star formation in molecular clouds and galactic discs

    We present a two-zone theory for feedback-regulated star formation in galactic discs, consistently connecting the galaxy-averaged star formation law with star formation proceeding in giant molecular clouds (GMCs). Our focus is on galaxies with gas surface density Sigma_g>~100 Msun pc^-2. In our theory, the galactic disc consists of Toomre-mass GMCs embedded in a volume-filling ISM. Radiation pressure on dust disperses GMCs, and most supernovae explode in the volume-filling medium. A galaxy-averaged star formation law is derived by balancing the momentum input from supernova feedback with the gravitational weight of the disc gas. This star formation law is in good agreement with observations when a CO conversion factor that depends continuously on Sigma_g is used. We argue that the galaxy-averaged star formation efficiency per free-fall time, epsilon_ff^gal, is only a weak function of the efficiency with which GMCs convert their gas into stars. This is possible because the rate-limiting step for star formation is the rate at which GMCs form: for a large star formation efficiency in GMCs, the Toomre Q parameter settles slightly above unity, so that the GMC formation rate is consistent with the galaxy-averaged star formation law. We contrast our results with other theories of turbulence-regulated star formation and discuss predictions of our model. Using a compilation of data from the literature, we show that the galaxy-averaged star formation efficiency per free-fall time is non-universal and increases with increasing gas fraction, as predicted by our model. We also predict that the fraction of the disc gas mass in bound GMCs decreases with increasing GMC star formation efficiency. This is qualitatively consistent with the smooth molecular gas distribution inferred in local ultra-luminous infrared galaxies and the small mass fraction in giant clumps in high-redshift galaxies. Comment: 23 pages, 10 figures. To appear in MNRAS.
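    As a schematic version of the momentum-balance argument summarized above (order-unity factors omitted, notation ours rather than the paper's), equating the momentum injection rate per unit area from supernovae with the gravitational weight of a gas-dominated disc gives:

```latex
% (P_*/m_*): momentum injected per unit mass of stars formed,
% \Sigma_g: gas surface density, \dot{\Sigma}_*: star formation rate per unit area.
\left(\frac{P_*}{m_*}\right)\dot{\Sigma}_* \sim \pi G\,\Sigma_g^2
\quad\Longrightarrow\quad
\dot{\Sigma}_* \sim \frac{\pi G\,\Sigma_g^2}{P_*/m_*},
\qquad
\epsilon_{\rm ff}^{\rm gal} \equiv \frac{\dot{\Sigma}_*\,t_{\rm ff}}{\Sigma_g}.
```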

    Revisiting interferences for measuring and optimizing optical nonlinearities

    A method based on optical interferences for measuring optical nonlinearities is presented. In a proof-of-principle experiment, the technique is applied to determining the intensity dependence of the photoionization process. It is shown that it can also be used to control and optimize the nonlinear process itself at constant input energy. The presented strategy leads to enhancements that can reach several orders of magnitude for highly nonlinear processes. Comment: 6 pages, 5 figures.
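    To illustrate the underlying principle only (this generic relation is not the specific protocol of the paper): for a process whose yield scales as a power of the intensity, interference-induced intensity fringes are sharpened by the nonlinearity, and the effective order can be extracted from a logarithmic derivative:

```latex
% Generic power-law yield S \propto I^N driven by an interference-modulated intensity:
S(\varphi) \;\propto\; \bigl[I_0\,(1 + V\cos\varphi)\bigr]^{N},
\qquad
N = \frac{\mathrm{d}\ln S}{\mathrm{d}\ln I}.
```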

    Resonantly enhanced filamentation in gases

    In this Letter, a low-loss Kerr-driven optical filament in krypton gas is experimentally reported in the ultraviolet. The experimental findings are supported by ab initio quantum calculations describing the atomic optical response. A higher-order Kerr effect induced by three-photon resonant transitions is identified as the underlying physical mechanism responsible for the intensity stabilization during the filamentation process, while ionization plays only a minor role. This result goes beyond the commonly accepted paradigm of filamentation, in which ionization is a necessary condition for clamping the filament intensity. At resonance, it is also experimentally demonstrated that the filament length is greatly extended because of a strong decrease of the optical losses.
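    For reference, the conventional clamping picture that this result challenges balances Kerr self-focusing against plasma defocusing; a standard order-of-magnitude form of that balance (our notation) is:

```latex
% Conventional intensity clamping: Kerr focusing compensated by plasma defocusing
% (\rho: free-electron density, \rho_c: critical plasma density, n_0: linear index).
n_2 I \;\approx\; \frac{\rho(I)}{2 n_0 \rho_c}.
```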

    Harmonic generation and filamentation: when secondary radiations have primary consequences

    In this Letter, it is experimentally and theoretically shown that the weak odd harmonics generated during the propagation of an ultrashort, ultra-intense infrared pulse unexpectedly modify the nonlinear properties of the medium and lead to a strong modification of the propagation dynamics. This result contrasts with the predictions of all current state-of-the-art propagation models, in which secondary radiations, such as the third harmonic, are expected to have a negligible effect on the propagation of the fundamental pulse. By analysing full three-dimensional ab initio quantum calculations describing the microscopic atomic optical response, we have identified a fundamental mechanism resulting from interferences between a direct ionization channel and a channel involving a single ultraviolet photon. This mechanism is responsible for large refractive index modifications associated with significant variations of the ionization rate. This work paves the way to a full physical understanding of the filamentation mechanism and could lead to unexplored phenomena, such as coherent control of filamentation by harmonic seeding. Comment: 7 pages, 5 figures.
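    Schematically (amplitudes introduced here purely for illustration), interference between the direct ionization channel and the channel absorbing one ultraviolet (third-harmonic) photon makes the total ionization rate sensitive to the relative phase of the harmonic field:

```latex
% Two-path ionization: direct IR channel plus a channel involving one 3\omega photon.
W_{\rm tot} \;\propto\; \bigl|A_{\rm IR} + A_{3\omega}\,e^{i\Delta\phi}\bigr|^{2}
= |A_{\rm IR}|^{2} + |A_{3\omega}|^{2} + 2\,|A_{\rm IR}||A_{3\omega}|\cos\Delta\phi .
```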

    Neutral hydrogen in galaxy halos at the peak of the cosmic star formation history

    We use high-resolution cosmological zoom-in simulations from the FIRE project to make predictions for the covering fractions of neutral hydrogen around galaxies at z=2-4. These simulations resolve the interstellar medium of galaxies and explicitly implement a comprehensive set of stellar feedback mechanisms. Our simulation sample consists of 16 main halos covering the mass range M_h~10^9-6x10^12 Msun at z=2, including 12 halos in the mass range M_h~10^11-10^12 Msun corresponding to Lyman break galaxies (LBGs). We process our simulations with a ray tracing method to compute the ionization state of the gas. Galactic winds increase the HI covering fractions in galaxy halos by direct ejection of cool gas from galaxies and through interactions with gas inflowing from the intergalactic medium. Our simulations predict HI covering fractions for Lyman limit systems (LLSs) consistent with measurements around z~2-2.5 LBGs; these covering fractions are a factor ~2 higher than our previous calculations without galactic winds. The fractions of HI absorbers arising in inflows and in outflows are on average ~50% but exhibit significant time variability, ranging from ~10% to ~90%. For our most massive halos, we find a factor ~3 deficit in the LLS covering fraction relative to what is measured around quasars at z~2, suggesting that the presence of a quasar may affect the properties of halo gas on ~100 kpc scales. The predicted covering fractions, which decrease with time, peak at M_h~10^11-10^12 Msun, near the peak of the star formation efficiency in dark matter halos. In our simulations, star formation and galactic outflows are highly time dependent; HI covering fractions are also time variable but less so because they represent averages over large areas. Comment: 20 pages, including 11 figures. MNRAS, in press.
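    As a minimal sketch of how a covering fraction like those quoted above can be computed from a map of HI column densities (function and variable names below are ours, not the FIRE pipeline's; the Lyman-limit-system threshold N_HI >= 10^17.2 cm^-2 is the conventional one):

```python
import numpy as np

# Lyman limit systems are conventionally defined by N_HI >= 10^17.2 cm^-2.
LLS_THRESHOLD = 10**17.2  # cm^-2

def covering_fraction(n_hi_map, pixel_radii, r_max, threshold=LLS_THRESHOLD):
    """Fraction of sightlines within projected radius r_max whose HI column
    density exceeds `threshold`.

    n_hi_map    : 2D array of HI column densities [cm^-2], one per sightline
    pixel_radii : 2D array of projected radii of each sightline [same units as r_max]
    r_max       : aperture radius (e.g. the halo virial radius)
    """
    inside = pixel_radii <= r_max
    if not np.any(inside):
        return np.nan
    return np.mean(n_hi_map[inside] >= threshold)

# Toy usage with a random map (illustration of the bookkeeping only, not simulation data):
rng = np.random.default_rng(0)
npix = 256
x = np.linspace(-150.0, 150.0, npix)            # projected coordinates in kpc
radii = np.hypot(*np.meshgrid(x, x))            # projected radius of each pixel
log_nhi = rng.normal(loc=16.5, scale=1.2, size=(npix, npix))
print(covering_fraction(10**log_nhi, radii, r_max=100.0))
```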

    Nonparametric estimation of flood quantiles using the kernel method

    Determining the flood discharge for a given return period requires estimating the distribution of annual floods. The use of nonparametric distributions, as an alternative to fitted statistical distributions, is examined in this work. The main challenge in kernel estimation lies in computing the parameter that controls the degree of smoothing of the nonparametric density. We compared several methods and retained the plug-in method and the least-squares cross-validation method as the most promising. Several interesting conclusions were drawn from this study. Among others, for the estimation of flood quantiles it appears preferable to use estimators based directly on the distribution function rather than on the density function. A comparison of the plug-in method with the fitting of three statistical distributions led to the conclusion that the kernel method is an attractive alternative to traditional parametric methods.
    Traditional flood frequency analysis involves fitting a statistical distribution to observed annual peak flows. The choice of statistical distribution is crucial, since it can have a significant impact on design flow estimates. Unfortunately, it is often difficult to determine in an objective way which distribution is the most appropriate. To avoid the inherent arbitrariness associated with the choice of distribution in parametric frequency analysis, one can employ a method based on nonparametric density estimation. Although potentially subject to a larger standard error of quantile estimates, the use of nonparametric densities eliminates the need to select a particular distribution and the potential bias associated with a wrong choice. The kernel method is a conceptually simple approach, similar in nature to a smoothed histogram. The critical parameter in kernel estimation is the smoothing parameter that determines the degree of smoothing. Methods for estimating the smoothing parameter have already been compared in a number of statistical papers. The novelty of our work is the particular emphasis on quantile estimation, in particular the estimation of quantiles outside the range of observed data. The flood estimation problem is unique in this sense and has been the motivating factor for this study. Seven methods for estimating the smoothing parameter are compared in the paper. All methods are based on some goodness-of-fit measure. More specifically, we considered the least-squares cross-validation method, the maximum likelihood cross-validation method, Adamowski's (1985) method, a plug-in method developed by Altman and Leger (1995) and modified by the authors (Faucher et al., 2001), Breiman's goodness-of-fit criterion method (Breiman, 1977), the variable-kernel maximum likelihood method, and the variable-kernel least-squares cross-validation method. The estimation methods can be classified according to whether they are based on fixed or variable kernels, and whether they are based on the goodness-of-fit of the density function or of the cumulative distribution function. The quality of the different estimation methods was explored in a Monte Carlo study. One hundred (100) samples of sizes 10, 20, 50, and 100 were simulated from an LP3 distribution. The nonparametric estimation methods were then applied to each simulated sample, and quantiles with return periods of 10, 20, 50, 100, 200, and 1000 years were estimated. Bias and root-mean-square error of the quantile estimates were the key figures used to compare methods. The results of the study can be summarized as follows:
    1. Comparison of kernels. The literature reports that the kernel choice is relatively unimportant compared to the choice of the smoothing parameter. To determine whether this assertion also holds for the estimation of large quantiles outside the range of data, we compared six different kernel candidates. We found no major differences between the biweight, the Normal, the Epanechnikov, and the EV1 kernels. However, the rectangular and the Cauchy kernels should be avoided.
    2. Comparison of sample sizes. The quality of estimates, whether parametric or nonparametric, deteriorates as sample size decreases. To examine the degree of sensitivity to sample size, we compared estimates of the 200-year event obtained by assuming a GEV distribution and a nonparametric density estimated by maximum likelihood cross-validation. The main conclusion is that the root-mean-square error for the parametric model (GEV) is more sensitive to sample size than for the nonparametric model.
    3. Comparison of estimators of the smoothing parameter. Among the methods considered in the study, the plug-in method developed by Altman and Leger (1995) and modified by the authors (Faucher et al., 2001) performed best, along with the least-squares cross-validation method, which had a similar performance. Adamowski's method had to be excluded because it consistently failed to converge. The methods based on variable kernels generally did not perform as well as the fixed-kernel methods.
    4. Comparison of density-based and cumulative-distribution-based methods. The only cumulative-distribution-based method considered in the comparison study was the plug-in method. Adamowski's method is also based on the cumulative distribution function but was rejected for the reasons mentioned above. Although the plug-in method did well in the comparison, it is not clear whether this can be attributed to the fact that it is based on estimation of the cumulative distribution function. One could hypothesize, however, that when the objective is to estimate quantiles, a method that emphasizes the cumulative distribution function rather than the density should have certain advantages.
    5. Comparison of parametric and nonparametric methods. Nonparametric methods were compared with conventional parametric methods. The LP3, the 2-parameter lognormal, and the GEV distributions were used to fit the simulated samples. It was found that the nonparametric methods perform quite similarly to the parametric methods. This is a significant result: the data were generated from an LP3 distribution, so one would intuitively expect the LP3 model to be superior, which was not the case. In actual applications, flood distributions are often irregular, and in such cases nonparametric methods would likely be superior to parametric methods.
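    A minimal sketch of the CDF-based kernel quantile estimation discussed above, using a Gaussian kernel and, for simplicity, Silverman's rule-of-thumb bandwidth in place of the plug-in and cross-validation selectors compared in the paper (all names and defaults below are our own choices):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kernel_cdf(x, sample, h):
    """Gaussian-kernel estimate of the CDF at x (a smoothed empirical CDF)."""
    return np.mean(norm.cdf((x - sample) / h))

def flood_quantile(sample, return_period, h=None):
    """Estimate the T-year flood quantile by inverting the kernel CDF.

    The non-exceedance probability of the T-year event is p = 1 - 1/T.
    """
    sample = np.asarray(sample, dtype=float)
    if h is None:
        # Silverman's rule of thumb: a simple stand-in for the plug-in /
        # cross-validation bandwidth selectors compared in the paper.
        h = 1.06 * sample.std(ddof=1) * sample.size ** (-1 / 5)
    p = 1.0 - 1.0 / return_period
    lo = sample.min() - 10 * h   # bracketing interval for the root search
    hi = sample.max() + 10 * h
    return brentq(lambda x: kernel_cdf(x, sample, h) - p, lo, hi)

# Toy usage with synthetic annual peak flows (m^3/s), illustration only:
rng = np.random.default_rng(1)
peaks = rng.gumbel(loc=500.0, scale=150.0, size=50)
print(flood_quantile(peaks, return_period=100))
```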

    Higher-order Kerr terms allow ionization-free filamentation in gases

    We show that higher-order nonlinear indices (n_4, n_6, n_8, n_10) provide the main defocusing contribution to the self-channeling of ultrashort laser pulses in air and argon at 800 nm, in contrast with the previously accepted picture of filamentation in which the plasma was considered the dominant defocusing process. Taking these terms into account allows us to reproduce the experimentally observed intensities and plasma densities in self-guided filaments. Comment: 11 pages, 6 figures (11 panels).
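    For reference, the higher-order Kerr picture invoked here expands the intensity-dependent refractive index as a power series; intensity stabilization then corresponds to the nonlinear index change saturating and turning defocusing without plasma (schematic form, coefficient values not quoted from the paper):

```latex
% Intensity-dependent refractive index including higher-order Kerr terms:
n(I) = n_0 + n_2 I + n_4 I^2 + n_6 I^3 + n_8 I^4 + n_{10} I^5 ,
% clamping occurring roughly where the nonlinear index change crosses zero:
\Delta n(I_{\rm clamp}) = n(I_{\rm clamp}) - n_0 \;\approx\; 0 .
```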

    Subcycle engineering of laser filamentation in gas by harmonic seeding

    Manipulating at will the propagation dynamics of high-power laser pulses is a long-standing dream whose accomplishment would enable control over a plethora of fascinating physical phenomena emerging from laser-matter interaction. The present work represents a significant step towards such control by manipulating the nonlinear optical properties of the gas medium at the quantum level. This is accomplished by shaping, at the subcycle level, the intense laser pulse undergoing filamentation with a relatively weak (about 1%) third-harmonic field. The control results from quantum interferences between a single-color and a two-color (mixing the fundamental frequency with its third harmonic) ionization channel. This mechanism, which depends on the relative phase between the two electric fields, is responsible for large refractive index modifications associated with a significant enhancement or suppression of the ionization rate. As a first application, we demonstrate the production and control of an axially modulated plasma channel that could be used for quasi-phase-matched laser wakefield acceleration. Comment: 7 pages, 4 figures.
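    Schematically, the subcycle shaping described above amounts to adding a weak, phase-controlled third harmonic to the fundamental field (notation ours); through the channel interference sketched after the previous entry, the ionization rate, and hence the plasma density and refractive index along the filament, become functions of the relative phase phi:

```latex
% Two-color driving field: fundamental plus a weak third harmonic with relative phase \phi.
E(t) = E_{\omega}\cos(\omega t) + E_{3\omega}\cos\!\bigl(3\omega t + \phi\bigr),
\qquad E_{3\omega} \ll E_{\omega}.
```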