Morphic words and equidistributed sequences
The problem we consider is the following: given an infinite word u on an ordered alphabet, construct the sequence (nu_n), equidistributed on [0, 1] and such that nu_m < nu_n if and only if sigma^m(u) < sigma^n(u), where sigma is the shift operation, erasing the first symbol of a word. The sequence (nu_n) exists and is unique for every word with well-defined positive uniform frequencies of every factor, or, in dynamical terms, for every element of a uniquely ergodic subshift. In this paper we describe the construction of (nu_n) for the case when the subshift of u is generated by a morphism of a special kind; then we overcome some technical difficulties to extend the result to all binary morphisms. The sequence (nu_n) in this case is also constructed with a morphism.
At last, we introduce a software tool which, given a binary morphism phi, computes the morphism on extended intervals and the first elements of the equidistributed sequences associated with the fixed points of phi.
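Such a sequence can be approximated numerically: order the shifts of a long finite prefix of the word lexicographically and assign each its normalized rank in [0, 1). The sketch below is only a hedged approximation on truncated shifts, not the morphism-based construction of the paper; the choice of the Fibonacci word and all function names are illustrative.

```python
# Sketch: approximate the equidistributed sequence associated with an
# infinite word by ranking its shifts lexicographically on a finite prefix.
# Caveat: truncation can misorder shifts that agree on a very long prefix,
# so this is an approximation, not the paper's exact morphic construction.

def fibonacci_word(n_iter=15):
    """Prefix of the Fibonacci word, fixed point of 0 -> 01, 1 -> 0."""
    w = "0"
    for _ in range(n_iter):
        w = w.replace("0", "0x").replace("1", "0").replace("x", "1")
    return w

def equidistributed_ranks(w, n):
    """Normalized lexicographic ranks of the first n shifts of w:
    shift i precedes shift j lexicographically iff rank_i < rank_j."""
    shifts = [w[i:] for i in range(n)]          # truncated sigma^i(w)
    order = sorted(range(n), key=lambda i: shifts[i])
    ranks = [0.0] * n
    for pos, i in enumerate(order):
        ranks[i] = pos / n
    return ranks

ranks = equidistributed_ranks(fibonacci_word(), 100)
```

Because the Fibonacci word is aperiodic, all 100 truncated shifts are distinct strings and the ranks are distinct values in [0, 1).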
Assessment of the susceptibility of roads to flooding based on geographical information – test in a flash flood prone area (the Gard region, France)
In flash flood prone areas, roads are often the first assets affected by inundations, which makes rescue operations difficult and represents a major threat to lives: almost half of the victims are car passengers trapped by floods. In the past years, the road management services of the Gard region (France) have compiled an extensive inventory of the known road submersions that occurred during the last 40 years. This inventory provided a unique opportunity to analyse the causes of road flooding in an area frequently affected by severe flash floods. It will be used to develop a road submersion susceptibility rating method, representing the first element of a road warning system.
This paper presents the results of the analysis of this data set. A companion paper will show how the proposed road susceptibility rating method can be combined with distributed rainfall-runoff simulations to provide accurate road submersion risk maps.
The very low correlation between the various possible explanatory factors and the susceptibility to flooding, measured by the number of past observed submersions, implied the use of particular statistical analysis methods based on the general principles of discriminant analysis.
The analysis led to the definition of four susceptibility classes for river-crossing road sections. Validation tests confirmed that this classification is robust, at least in the considered area. One major outcome of the analysis is that the susceptibility to flooding is linked to the location of the road sections rather than to the size of the river-crossing structure (bridge or culvert).
Finite-size scaling in thin Fe/Ir(100) layers
The critical temperature of thin Fe layers on Ir(100) is measured through
Mössbauer spectroscopy as a function of the layer thickness. From a
phenomenological finite-size scaling analysis, we find an effective shift
exponent lambda = 3.15 +/- 0.15, which is twice as large as the value expected
from the conventional finite-size scaling prediction lambda=1/nu, where nu is
the correlation length critical exponent. Taking corrections to finite-size
scaling into account, we derive the effective shift exponent
lambda = (1 + 2 Delta_1)/nu, where Delta_1 describes the leading corrections to
scaling. For the 3D Heisenberg universality class, this leads to lambda = 3.0
+/- 0.1, in agreement with the experimental data. Earlier data by Ambrose and
Chien on the effective shift exponent in CoO films are also explained. (Comment: LaTeX, 4 pages, with 2 figures, to appear in Phys. Rev. Lett.)
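The phenomenological analysis amounts to fitting the finite-size shift Tc(inf) - Tc(L) = a * L^(-lambda). A minimal sketch of extracting an effective shift exponent by linear regression in log-log coordinates; the Tc values and parameters below are synthetic, illustrative numbers, not the Fe/Ir measurements.

```python
import math

# Sketch: recover an effective shift exponent lambda from
# Tc(inf) - Tc(L) = a * L**(-lambda) via linear regression on logs.
# Synthetic, noiseless data; tc_inf, a, lam are illustrative values.

def fit_shift_exponent(thicknesses, tc_values, tc_inf):
    """Slope of log(Tc_inf - Tc(L)) vs log(L) equals -lambda."""
    xs = [math.log(L) for L in thicknesses]
    ys = [math.log(tc_inf - tc) for tc in tc_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return -slope

tc_inf, a, lam = 1040.0, 500.0, 3.0        # illustrative parameters
Ls = [3, 4, 5, 6, 8, 10]
tcs = [tc_inf - a * L ** (-lam) for L in Ls]
lam_est = fit_shift_exponent(Ls, tcs, tc_inf)
```

On noiseless synthetic data the regression recovers lambda exactly; with real data, the scatter of the points in log-log coordinates indicates whether a single effective exponent is adequate.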
Size effect on magnetism of Fe thin films in Fe/Ir superlattices
In ferromagnetic thin films, the Curie temperature variation with the
thickness is always considered as continuous when the thickness is varied from
n to n+1 atomic planes. We show that this is not the case for Fe in Fe/Ir
superlattices. For an integer number of atomic planes, a unique magnetic
transition is observed by susceptibility measurements, whereas two magnetic
transitions are observed for fractional numbers of planes. This behavior is
attributed to successive transitions of areas with n and n+1 atomic planes,
for which the Curie temperatures are not the same. Indeed, the magnetic correlation length
is presumably shorter than the average size of the terraces. Monte Carlo
simulations are performed to support this explanation. (Comment: LaTeX file with RevTeX, 5 pages, 5 eps figures, to appear in Phys. Rev. Lett.)
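As a toy illustration of the kind of Monte Carlo simulation mentioned, here is a minimal single-layer 2D Ising Metropolis sweep. This is a generic sketch, not the authors' terrace model; lattice size, temperatures, and sweep count are illustrative.

```python
import math, random

# Minimal Metropolis Monte Carlo for a 2D Ising layer: a generic stand-in
# for film simulations, not the terrace model of the paper.

def metropolis_ising(L=16, T=2.0, sweeps=200, seed=0):
    """|magnetization| per spin after Metropolis sweeps on an LxL lattice."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]                 # start fully ordered
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
              s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb                       # energy cost of flipping
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return abs(sum(map(sum, s))) / (L * L)

m_ordered = metropolis_ising(T=2.0)     # below the 2D Ising Tc (about 2.27)
m_disordered = metropolis_ising(T=5.0)  # well above Tc
```

Below Tc the magnetization stays large; well above Tc it decays toward the small finite-size fluctuation level.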
Time series prediction via aggregation: an oracle bound including numerical cost
We address the problem of forecasting a time series satisfying the Causal
Bernoulli Shift model, using a parametric set of predictors. The aggregation
technique provides a predictor with well established and quite satisfying
theoretical properties expressed by an oracle inequality for the prediction
risk. The numerical computation of the aggregated predictor usually relies on a
Markov chain Monte Carlo method whose convergence should be evaluated. In
particular, it is crucial to bound the number of simulations needed to achieve
a numerical precision of the same order as the prediction risk. In this
direction we present a fairly general result which can be seen as an oracle
inequality including the numerical cost of the predictor computation. The
numerical cost appears by letting the oracle inequality depend on the number of
simulations required in the Monte Carlo approximation. Some numerical
experiments are then carried out to support our findings.
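Aggregated predictors of this kind are typically exponentially weighted averages over the predictor set. A minimal sketch; the learning rate eta, the toy series, and the three predictors are illustrative assumptions, not the paper's setting.

```python
import math

# Sketch of exponentially weighted aggregation over a finite set of
# predictors: each predictor is weighted by exp(-eta * cumulative loss).

def aggregate_forecast(history, predictors, eta=1.0):
    """One-step-ahead forecast: exponentially weighted average of the
    predictors, weighted by their past cumulative squared loss."""
    losses = [0.0] * len(predictors)
    for t in range(1, len(history)):
        past = history[:t]
        for k, f in enumerate(predictors):
            losses[k] += (f(past) - history[t]) ** 2
    w = [math.exp(-eta * l) for l in losses]
    z = sum(w)
    w = [x / z for x in w]
    preds = [f(history) for f in predictors]
    return sum(wi * p for wi, p in zip(w, preds)), w

series = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
predictors = [
    lambda past: past[-1],        # "repeat last value"
    lambda past: 1.0 - past[-1],  # "alternate": perfect on this series
    lambda past: 0.5,             # constant guess
]
forecast, weights = aggregate_forecast(series, predictors)
```

On the alternating toy series the "alternate" predictor accumulates zero loss and dominates the weights, so the aggregated forecast is close to its prediction.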
Electron-hadron shower discrimination in a liquid argon time projection chamber
By exploiting structural differences between electromagnetic and hadronic showers in a multivariate analysis, we present an efficient electron-hadron discrimination algorithm for liquid argon time projection chambers, validated using Geant4 simulated data.
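As a hedged illustration of a multivariate shower discriminant, here is a Fisher linear discriminant on two synthetic two-dimensional feature samples; the feature names and data are invented stand-ins, not the paper's variables or its Geant4 simulation.

```python
import random

# Sketch: Fisher linear discriminant on two synthetic "shower feature"
# samples (e.g. lateral spread vs. charge density); purely illustrative.

def fisher_direction(a, b):
    """Fisher direction for two 2-D samples: Sw^{-1} (mean_b - mean_a)."""
    def mean(xs):
        n = len(xs)
        return [sum(p[0] for p in xs) / n, sum(p[1] for p in xs) / n]
    ma, mb = mean(a), mean(b)
    s = [[0.0, 0.0], [0.0, 0.0]]          # pooled within-class scatter
    for xs, m in ((a, ma), (b, mb)):
        for p in xs:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [mb[0] - ma[0], mb[1] - ma[1]]
    return [(s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]

rng = random.Random(1)
em  = [(rng.gauss(1.0, 0.3), rng.gauss(2.0, 0.3)) for _ in range(200)]
had = [(rng.gauss(2.0, 0.3), rng.gauss(1.0, 0.3)) for _ in range(200)]
w = fisher_direction(em, had)
score = lambda p: w[0] * p[0] + w[1] * p[1]
cut = (score((1.0, 2.0)) + score((2.0, 1.0))) / 2  # midpoint of class means
acc = (sum(score(p) < cut for p in em) +
       sum(score(p) >= cut for p in had)) / 400
```

With well-separated feature clouds, cutting on the projected score already classifies almost all events; a full multivariate analysis adds more variables and a more flexible classifier.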
Sequential quasi-Monte Carlo: Introduction for Non-Experts, Dimension Reduction, Application to Partly Observed Diffusion Processes
SMC (Sequential Monte Carlo) is a class of Monte Carlo algorithms for
filtering and related sequential problems. Gerber and Chopin (2015) introduced
SQMC (Sequential quasi-Monte Carlo), a QMC version of SMC. This paper has two
objectives: (a) to introduce Sequential Monte Carlo to the QMC community, whose
members are usually less familiar with state-space models and particle
filtering; (b) to extend SQMC to the filtering of continuous-time state-space
models, where the latent process is a diffusion. A recurring point in the paper
will be the notion of dimension reduction, that is how to implement SQMC in
such a way that it provides good performance despite the high dimension of the
problem. (Comment: To be published in the proceedings of MCQMC 201)
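For readers new to SMC, a minimal bootstrap particle filter for a 1-D linear-Gaussian state-space model shows the propagate/weight/resample cycle whose random inputs SQMC replaces with quasi-random sequences. The model and all parameters are illustrative choices, not from the paper.

```python
import math, random

# Sketch of plain SMC: a bootstrap particle filter for the model
#   x_t = rho * x_{t-1} + N(0, sx^2),   y_t = x_t + N(0, sy^2).

def bootstrap_filter(obs, n_particles=500, rho=0.9, sx=1.0, sy=0.5, seed=0):
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, sx) for _ in range(n_particles)]
    means = []
    for y in obs:
        # propagate through the state transition
        parts = [rho * p + rng.gauss(0.0, sx) for p in parts]
        # weight by the Gaussian observation density y | x ~ N(x, sy^2)
        w = [math.exp(-0.5 * ((y - p) / sy) ** 2) for p in parts]
        z = sum(w)
        w = [wi / z for wi in w]
        means.append(sum(wi * p for wi, p in zip(w, parts)))
        # multinomial resampling keeps the particle set from degenerating
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

# synthetic data from the same model (illustrative parameters)
gen = random.Random(42)
x, xs, obs = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + gen.gauss(0.0, 1.0)
    xs.append(x)
    obs.append(x + gen.gauss(0.0, 0.5))
filtered = bootstrap_filter(obs)
```

The filtered means track the hidden states more closely than the raw observations do, because each update combines the transition prior with the observation likelihood.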
A population Monte Carlo scheme with transformed weights and its application to stochastic kinetic models
This paper addresses the problem of Monte Carlo approximation of posterior
probability distributions. In particular, we have considered a recently
proposed technique known as population Monte Carlo (PMC), which is based on an
iterative importance sampling approach. An important drawback of this
methodology is the degeneracy of the importance weights when the dimension of
either the observations or the variables of interest is high. To alleviate this
difficulty, we propose a novel method that performs a nonlinear transformation
on the importance weights. This operation reduces the weight variation, hence
it avoids their degeneracy and increases the efficiency of the importance
sampling scheme, especially when drawing from proposal functions that are
poorly adapted to the true posterior.
For the sake of illustration, we have applied the proposed algorithm to the
estimation of the parameters of a Gaussian mixture model. This is a very simple
problem that enables us to clearly show and discuss the main features of the
proposed technique. As a practical application, we have also considered the
popular (and challenging) problem of estimating the rate parameters of
stochastic kinetic models (SKM). SKMs are highly multivariate systems that
model molecular interactions in biological and chemical problems. We introduce
a particularization of the proposed algorithm to SKMs and present numerical
results. (Comment: 35 pages, 8 figures)
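To see why transforming importance weights helps, the sketch below clips the largest weights (one simple nonlinear transformation, chosen here for illustration and not necessarily the scheme proposed in the paper) and compares effective sample sizes under a badly matched proposal.

```python
import math, random

# Sketch: importance sampling with target N(mu, 1) and mismatched proposal
# N(0, 1). Clipping the largest weights is one nonlinear weight transform
# that trades a little bias for much lower weight variance.

def importance_weights(samples, mu=3.0):
    """Unnormalized weights N(mu,1)/N(0,1) at each sample: exp(mu*x - mu^2/2)."""
    return [math.exp(x * mu - 0.5 * mu * mu) for x in samples]

def clip_weights(w, m):
    """Clip weights above the m-th largest down to that value."""
    cap = sorted(w, reverse=True)[m - 1]
    return [min(x, cap) for x in w]

def ess(w):
    """Effective sample size of the normalized weights."""
    z = sum(w)
    return 1.0 / sum((x / z) ** 2 for x in w)

rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(5000)]
w_raw = importance_weights(xs)
w_clip = clip_weights(w_raw, 50)
```

With the proposal three standard deviations away from the target, the raw weights are dominated by a handful of samples (tiny effective sample size); clipping spreads the mass over many more samples.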
Importance Sampling for Objective Function Estimations in Neural Detector Training Driven by Genetic Algorithms
To train Neural Networks (NNs) in a supervised way, estimations of an objective function must be carried out. The value of this function decreases as the training progresses, so the number of test observations necessary for an accurate estimation has to be increased. Consequently, the computational cost of training becomes unaffordable when very low objective-function values must be estimated, and the use of Importance Sampling (IS) techniques becomes convenient. Three different objective functions are studied, for each of which an IS-based estimator is proposed: the mean-square error, the cross-entropy error and the misclassification error criteria. The values of these functions are estimated by IS techniques, and the results are used to train NNs by the application of Genetic Algorithms. Results for binary detection in Gaussian noise are provided. These results show the evolution of the parameters during training and the performance of the proposed detectors in terms of error probability and Receiver Operating Characteristic curves. The results obtained justify the convenience of using IS in the training.
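The role of IS in estimating very small error probabilities can be seen in a minimal example: estimating P(X > 4) for X ~ N(0,1), far too rare for plain Monte Carlo at modest sample sizes, by sampling from a proposal shifted onto the tail. The threshold and sample size are illustrative, not the paper's detector setting.

```python
import math, random

# Sketch: importance sampling for a tiny tail probability P(X > t), X ~ N(0,1),
# using proposal N(t, 1) and the likelihood ratio N(0,1)/N(t,1).

def is_tail_estimate(threshold=4.0, n=10000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)          # proposal centered on the tail
        if x > threshold:
            # likelihood ratio at x: exp(-t*x + t^2/2)
            total += math.exp(-threshold * x + 0.5 * threshold ** 2)
    return total / n

p_is = is_tail_estimate()
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact value, about 3.2e-5
```

A naive Monte Carlo estimate with 10,000 samples would almost always return 0 for this probability; the IS estimate is accurate to within a few percent with the same budget.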
Global parameter identification of stochastic reaction networks from single trajectories
We consider the problem of inferring the unknown parameters of a stochastic
biochemical network model from a single measured time-course of the
concentration of some of the involved species. Such measurements are available,
e.g., from live-cell fluorescence microscopy in image-based systems biology. In
addition, fluctuation time-courses from, e.g., fluorescence correlation
spectroscopy provide additional information about the system dynamics that can
be used to more robustly infer parameters than when considering only mean
concentrations. Estimating model parameters from a single experimental
trajectory enables single-cell measurements and quantification of cell-to-cell
variability. We propose a novel combination of an adaptive Monte Carlo sampler,
called Gaussian Adaptation, and efficient exact stochastic simulation
algorithms that allows parameter identification from single stochastic
trajectories. We benchmark the proposed method on a linear and a non-linear
reaction network at steady state and during transient phases. In addition, we
demonstrate that the present method also provides an ellipsoidal volume
estimate of the viable part of parameter space and is able to estimate the
physical volume of the compartment in which the observed reactions take place. (Comment: Article in print as a book chapter in Springer's "Advances in Systems Biology".)
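The "efficient exact stochastic simulation algorithms" referred to are of the Gillespie SSA family. A minimal sketch for a birth-death network, 0 -> X at rate k1 and X -> 0 at rate k2*X; the rates and horizon are illustrative, not the paper's benchmark networks.

```python
import random

# Sketch of Gillespie's exact stochastic simulation algorithm (SSA) for the
# birth-death network  0 -> X (rate k1),  X -> 0 (rate k2 * X).
# Stationary mean is k1/k2 (here 100); parameters are illustrative.

def gillespie_birth_death(k1=10.0, k2=0.1, x0=0, t_end=100.0, seed=0):
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k1, k2 * x                 # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)            # exponential time to next event
        if t >= t_end:
            break
        if rng.random() < a1 / a0:
            x += 1                          # birth
        else:
            x -= 1                          # death
        path.append((t, x))
    return path

path = gillespie_birth_death()
final_counts = [x for _, x in path[-200:]]  # late-time copy numbers
```

An inference scheme like the one described wraps such a simulator: candidate rate parameters are scored by how well simulated trajectories match the measured one.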
