D26. Producer strategies and conditions for increasing the use of processed fonio: project no. 015403 FONIO. Improving the quality and competitiveness of the fonio value chain in West Africa
Modeling biomass flows at the farm level: a discussion support tool for farmers.
Many simulation models used to assess the impact of mixed farming systems have a level of complexity that makes them unsuitable for teaching farmers about the impacts of their practices.
DOI: 10.1051/agro/2009047
Ising thin films with modulations and surface defects
Properties of magnetic films are studied in the framework of Ising models. In
particular, we discuss critical phenomena of ferromagnetic Ising films with
straight lines of magnetic adatoms and straight steps on the surface as well as
phase diagrams of the axial next-nearest neighbour Ising (ANNNI) model for thin
films exhibiting various spatially modulated phases.
Comment: 6 pages, 4 figures included
Finite-size scaling in thin Fe/Ir(100) layers
The critical temperature of thin Fe layers on Ir(100) is measured through
Mössbauer spectroscopy as a function of the layer thickness. From a
phenomenological finite-size scaling analysis, we find an effective shift
exponent lambda = 3.15 +/- 0.15, which is twice as large as the value expected
from the conventional finite-size scaling prediction lambda=1/nu, where nu is
the correlation length critical exponent. Taking corrections to finite-size
scaling into account, we derive the effective shift exponent
lambda = (1 + 2 Delta_1)/nu, where Delta_1 describes the leading corrections to
scaling. For the 3D Heisenberg universality class, this leads to lambda = 3.0
+/- 0.1, in agreement with the experimental data. Earlier data by Ambrose and
Chien on the effective shift exponent in CoO films are also explained.
Comment: LaTeX, 4 pages, with 2 figures, to appear in Phys. Rev. Lett.
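The corrected shift-exponent formula above can be checked arithmetically. A minimal sketch, assuming standard literature values for the 3D Heisenberg exponents (nu and the correction-to-scaling exponent omega are not given in the abstract):

```python
# Sketch: effective shift exponent lambda = (1 + 2*Delta_1)/nu for the
# 3D Heisenberg universality class. The exponent values below are
# standard literature estimates (an assumption, not from the abstract).
nu = 0.7112            # correlation-length critical exponent
omega = 0.78           # leading correction-to-scaling exponent
Delta_1 = omega * nu   # Wegner correction exponent, Delta_1 = omega * nu

lam_conventional = 1.0 / nu                  # naive prediction lambda = 1/nu
lam_corrected = (1.0 + 2.0 * Delta_1) / nu   # corrected prediction

print(f"conventional lambda = {lam_conventional:.2f}")
print(f"corrected    lambda = {lam_corrected:.2f}")
```

The corrected value lands near 3.0, roughly twice the conventional prediction, consistent with the quoted experimental value of 3.15 +/- 0.15.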
Bollène-2002 experiment: radar quantitative precipitation estimation in the Cévennes–Vivarais region, France
The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for conducting radar rainfall estimations in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l’Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built and several algorithms were specifically produced as part of this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. The assessment criteria for five intense and long-lasting Mediterranean rain events have proven that good quantitative precipitation estimates can be obtained from radar data alone within 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. 
Radar performance was shown to depend on the type of rainfall, with better results obtained with deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step), as opposed to shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains basically unchanged. Because the Z–R relationships have not been optimized in this study, results are attributed to an improved processing of spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
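The Nash coefficients quoted above are Nash–Sutcliffe efficiencies. A minimal sketch of the criterion as used for point radar–rain gauge comparisons (the rainfall values below are hypothetical, for illustration only):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1 is a perfect match; 0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Hypothetical event-step rainfall totals (mm): gauge vs radar estimate.
gauge = [12.0, 35.0, 8.0, 50.0, 22.0]
radar = [10.5, 33.0, 9.5, 47.0, 24.0]
print(f"NSE = {nash_sutcliffe(gauge, radar):.2f}")
```

An efficiency near 0.90, as reported for deep convective systems, indicates that the radar estimate explains about 90% of the variance in the gauge record.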
Mechanical behavior of recrystallized Zircaloy-4 under monotonic loading at room temperature: Tests and simplified anisotropic modeling
Mechanical behavior of recrystallized Zircaloy-4 was studied at room temperature in the rolling-transverse plane of a thin sheet. Uniaxial constant elongation rate tests (CERTs) were performed along with creep tests, over a wide range of strain rates. Based on a simplified formulation, different sets of parameters for an anisotropic viscoplastic model were found to fit the stress–strain curves. Notched specimen tensile tests were carried out with a digital image correlation (DIC) technique in order to determine the strain field evolution. From these measurements and the determination of Lankford coefficients, the most consistent model was selected and simulated data were successfully compared with the experimental observations.
Bayesian Parameter Estimation for Latent Markov Random Fields and Social Networks
Undirected graphical models are widely used in statistics, physics and
machine vision. However Bayesian parameter estimation for undirected models is
extremely challenging, since evaluation of the posterior typically involves the
calculation of an intractable normalising constant. This problem has received
much attention, but very little of this has focussed on the important practical
case where the data consists of noisy or incomplete observations of the
underlying hidden structure. This paper specifically addresses this problem,
comparing two alternative methodologies. In the first of these approaches
particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently
explore the parameter space, combined with the exchange algorithm (Murray et
al., 2006) for avoiding the calculation of the intractable normalising constant
(a proof showing that this combination targets the correct distribution is
found in a supplementary appendix online). This approach is compared with
approximate Bayesian computation (Pritchard et al., 1999). Applications to
estimating the parameters of Ising models and exponential random graphs from
noisy data are presented. Each algorithm used in the paper targets an
approximation to the true posterior due to the use of MCMC to simulate from the
latent graphical model, in lieu of being able to do this exactly in general.
The supplementary appendix also describes the nature of the resulting
approximation.
Comment: 26 pages, 2 figures, accepted in Journal of Computational and Graphical Statistics (http://www.amstat.org/publications/jcgs.cfm)
A population Monte Carlo scheme with transformed weights and its application to stochastic kinetic models
This paper addresses the problem of Monte Carlo approximation of posterior
probability distributions. In particular, we have considered a recently
proposed technique known as population Monte Carlo (PMC), which is based on an
iterative importance sampling approach. An important drawback of this
methodology is the degeneracy of the importance weights when the dimension of
either the observations or the variables of interest is high. To alleviate this
difficulty, we propose a novel method that performs a nonlinear transformation
on the importance weights. This operation reduces the weight variation,
thereby avoiding degeneracy and increasing the efficiency of the importance
sampling scheme, especially when drawing from proposal functions that are
poorly adapted to the true posterior.
For the sake of illustration, we have applied the proposed algorithm to the
estimation of the parameters of a Gaussian mixture model. This is a very simple
problem that enables us to clearly show and discuss the main features of the
proposed technique. As a practical application, we have also considered the
popular (and challenging) problem of estimating the rate parameters of
stochastic kinetic models (SKM). SKMs are highly multivariate systems that
model molecular interactions in biological and chemical problems. We introduce
a particularization of the proposed algorithm to SKMs and present numerical
results.
Comment: 35 pages, 8 figures
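The weight-transformation idea can be sketched in one dimension. The example below runs a few PMC iterations against a toy Gaussian target, starting from a deliberately poorly adapted proposal, and clips the largest log-weights as the nonlinear transformation (clipping is one plausible instance of the idea; the exact transformation, target, and tuning values here are assumptions, not the paper's):

```python
import math
import random

random.seed(0)

def log_target(x):
    # toy target: unit-variance Gaussian centred at 3 (unnormalised)
    return -0.5 * (x - 3.0) ** 2

M = 500                # samples per PMC iteration
mu, sigma = 0.0, 3.0   # deliberately poorly adapted initial proposal

for it in range(5):
    samples = [random.gauss(mu, sigma) for _ in range(M)]
    # standard importance log-weights: log target - log proposal
    logw = [log_target(x)
            - (-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma))
            for x in samples]
    # nonlinear transformation: clip log-weights at the T-th largest value,
    # flattening the extremes to fight weight degeneracy
    T = 50
    clip_at = sorted(logw, reverse=True)[T - 1]
    logw = [min(lw, clip_at) for lw in logw]
    # normalise the transformed weights
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    tot = sum(w)
    w = [x / tot for x in w]
    # PMC adaptation step: refit the proposal from the weighted sample
    mu = sum(wi * xi for wi, xi in zip(w, samples))
    sigma = math.sqrt(sum(wi * (xi - mu) ** 2
                          for wi, xi in zip(w, samples))) + 1e-6

print(f"adapted proposal: mean {mu:.2f}, std {sigma:.2f}")
```

After a few iterations the proposal migrates toward the target; without the clipping step, a badly placed initial proposal can leave one or two weights dominating the update.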
Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abundant in
machine learning. Some examples include regression models with norm constraints
(e.g., Lasso), probit, many copula models, and latent Dirichlet allocation
(LDA). Bayesian inference involving probability distributions confined to
constrained domains could be quite challenging for commonly used sampling
algorithms. In this paper, we propose a novel augmentation technique that
handles a wide range of constraints by mapping the constrained domain to a
sphere in the augmented space. By moving freely on the surface of this sphere,
sampling algorithms handle constraints implicitly and generate proposals that
remain within boundaries when mapped back to the original space. Our proposed
method, called Spherical Augmentation, provides a mathematically natural and
computationally efficient framework for sampling from constrained probability
distributions. We show the advantages of our method over state-of-the-art
sampling algorithms, such as exact Hamiltonian Monte Carlo, using several
examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian
bridge regression, reconstruction of quantized stationary Gaussian process, and
LDA for topic modeling.
Comment: 41 pages, 13 figures
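The core idea, moving freely on a sphere so that constraints are handled implicitly, can be sketched in one dimension. Below, a variable constrained to [-1, 1] is mapped onto a circle via x = sin(theta); a plain random-walk sampler in theta is unconstrained, yet every state maps back inside the boundaries. This is a minimal hand-rolled illustration of the augmentation idea, not the paper's Hamiltonian samplers, and the truncated-Gaussian target and step size are assumptions:

```python
import math
import random

random.seed(42)

# Target: standard Gaussian truncated to [-1, 1] (unnormalised density).
def target(x):
    return math.exp(-0.5 * x * x)

# Spherical augmentation in 1D: set x = sin(theta). A random walk in theta
# is unconstrained (it simply wraps around the circle), and every proposal
# maps back into [-1, 1] automatically. The change of variables
# contributes a |dx/dtheta| = |cos(theta)| Jacobian factor.
def density_theta(theta):
    return target(math.sin(theta)) * abs(math.cos(theta))

theta, samples = 0.0, []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 0.8)   # free move on the circle
    if random.random() < density_theta(prop) / density_theta(theta):
        theta = prop
    samples.append(math.sin(theta))          # map back to original space

print(f"all samples in bounds: {all(-1 <= s <= 1 for s in samples)}")
print(f"sample mean ~ {sum(samples) / len(samples):.2f}")
```

No proposal is ever rejected for leaving the constrained domain, because the domain's boundary does not exist in the augmented space; the sampler only ever rejects on the density ratio.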
A Search for Selectrons and Squarks at HERA
Data from electron-proton collisions at a center-of-mass energy of 300 GeV
are used for a search for selectrons and squarks within the framework of the
minimal supersymmetric model. The decays of selectrons and squarks into the
lightest supersymmetric particle lead to final states with an electron and
hadrons accompanied by large missing energy and transverse momentum. No signal
is found and new bounds on the existence of these particles are derived. At 95%
confidence level the excluded region extends to 65 GeV for selectron and squark
masses, and to 40 GeV for the mass of the lightest supersymmetric particle.
Comment: 13 pages, LaTeX, 6 figures
