Second-order critical lines of spin-S Ising models in a splitting field with Grassmann techniques
We propose a method to study the second-order critical lines of classical
spin-S Ising models on two-dimensional lattices in a crystal or splitting
field, using an exact expression for the bare mass of the underlying field
theory. Introducing a set of anticommuting variables to represent the partition
function, we derive an exact and compact expression for the bare mass of the
model, including all local multi-fermion interactions. For spin-S extensions of
the Ising and Blume-Capel models, we extract the free-energy singularities in the
low-momentum limit corresponding to a vanishing bare mass. The loci of these
singularities define the critical lines as a function of the spin S, in good
agreement with previous numerical estimates. This scheme appears general
enough to be applied to a variety of classical Hamiltonians.
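For concreteness, the splitting field referred to above is, in the standard Blume-Capel convention, a single-ion term D added to the nearest-neighbour Ising coupling; the sketch below uses the usual textbook signs and normalisation, which need not coincide with the paper's.

```latex
% Standard spin-S Ising model in a crystal (splitting) field D, of which the
% spin-1 case is the Blume-Capel model; textbook conventions, possibly
% differing from the paper's normalisation.
\begin{equation}
  H \;=\; -J \sum_{\langle i,j\rangle} S_i S_j \;+\; D \sum_i S_i^2,
  \qquad S_i \in \{-S,\, -S+1,\, \dots,\, S\}.
\end{equation}
% The second-order critical lines are curves in the (T, D) plane along which
% the bare fermionic mass of the underlying field theory vanishes.
```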
Time series segmentation with shifting-means hidden Markov models
We present a new family of hidden Markov models and apply them to the segmentation of hydrological and environmental time series. The proposed hidden Markov models have a discrete state space, and their structure is inspired by the shifting-means models introduced by Chernoff and Zacks and by Salas and Boes. An estimation method inspired by the EM algorithm is proposed, and we show that it can accurately identify multiple change-points in a time series. We also show that the solution obtained with this algorithm can serve as a starting point for a Markov chain Monte Carlo Bayesian estimation method, thus reducing the computing time needed for the Markov chain to converge to a stationary distribution.
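As a rough illustration of the shifting-means structure referred to above (not the authors' model specification), the sketch below generates a series whose latent mean occasionally jumps by a random amount and is observed through Gaussian noise; the parameter names (shift probability p, shift scale tau, noise scale sigma) are illustrative assumptions.

```python
import numpy as np

def simulate_shifting_means(n, p=0.05, tau=2.0, sigma=1.0, seed=0):
    """Toy shifting-means series: at each step the latent level jumps with
    probability p (jump size ~ N(0, tau^2)); the observation adds
    N(0, sigma^2) noise.  Parameter names are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    level, levels, obs = 0.0, [], []
    for _ in range(n):
        if rng.random() < p:               # a change-point occurs here
            level += rng.normal(0.0, tau)  # the mean shifts
        levels.append(level)
        obs.append(level + rng.normal(0.0, sigma))
    return np.array(levels), np.array(obs)

levels, series = simulate_shifting_means(500)
change_points = np.flatnonzero(np.diff(levels)) + 1
print(f"{change_points.size} change-points, first few at t = {change_points[:5]}")
```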
Scale without Conformal Invariance at Three Loops
We carry out a three-loop computation that establishes the existence of scale
without conformal invariance in dimensional regularization with the MS scheme
in d=4-epsilon spacetime dimensions. We also comment on the effects of scheme
changes in theories with many couplings, as well as in theories that live on
non-conformal scale-invariant renormalization group trajectories. Stability
properties of such trajectories are analyzed, revealing both attractive and
repulsive directions in a specific example. We explain how our results are in
accord with those of Jack & Osborn on a c-theorem in d=4 (and d=4-epsilon)
dimensions. Finally, we point out that limit cycles with turning points are
unlike limit cycles with continuous scale invariance. Comment: 21 pages, 3 figures, Erratum added.
A fuzzy approach for determining the region of influence of a hydrometric station
The notion of partial membership of a hydrometric station in a hydrologic region is modeled by a membership function obtained by applying the concepts of fuzzy analysis. Hydrometric stations are represented in planes whose axes are hydrologic and/or physiographic attributes. Hydrologic regions are treated as fuzzy subsets. A coherence-based clustering method (Iphigénie) establishes equivalence classes for the fuzzy relation "there is no incoherence between the elements of the same class"; these equivalence classes represent the fuzzy regions, and the membership function in this case is crisp. By contrast, the second method, of the fuzzy mobile-centers type (ISODATA), assigns each station a degree of membership in a fuzzy region within the interval [0,1], reflecting the degree to which the station belongs to a given group (the number of groups being chosen heuristically beforehand). For the case studied (the Tunisian hydrometric network, annual maximum flood flows), however, the fuzzy character of the stations turns out not to be very pronounced. Based on the clusters obtained with Iphigénie and the fuzzy regions obtained with ISODATA, a regional estimation of the 100-year return period maximum flood flows is carried out. It is then compared with the regional estimate obtained with the region-of-influence method and with the estimate using only at-site data, under the hypothesis that the parent populations follow two-parameter Gamma and three-parameter Pareto distributions.

The concept of partial membership of a hydrometric station in a hydrologic region is modeled using fuzzy sets theory. Hydrometric stations are represented in spaces of hydrologic attributes (coefficient of variation: CV, coefficient of skewness: CS, and their counterparts based on L-moments: L-CV and L-CS) and/or physiographic attributes (watershed surface: S, specific flow: Qs = Qmean/S, and a shape index: Ic). Two fuzzy clustering methods are considered.

First, a clustering method by coherence (Iphigénie) is considered. It is based on the principle of transitivity: if two pairs of stations (A,B) and (B,C) are known to be "close" to one another, then it is incoherent to state that A is "far" from C. Using a Euclidean distance, all pairs of stations are sorted from the closest to the farthest. The pairs of stations starting and ending this list are then removed and classified respectively as "close" and "far". The process is continued until an incoherence is detected. Clusters of stations are then determined from the graph of "close" stations. A disadvantage of Iphigénie is that crisp (non-fuzzy) membership functions are obtained.

A second clustering method is considered (ISODATA), which consists of minimizing the fuzziness of clusters as measured by an objective function, and which can assign a station any degree of membership between 0 and 1 to reflect its partial membership in a hydrologic region. It is a generalization of the classical method of mobile centers, in which crisp clusters minimizing entropy are obtained.
When using Iphigénie, the number of clusters is determined automatically by the method, but for ISODATA it must be determined beforehand.

An application of both clustering methods to the Tunisian hydrometric network (which consists of 39 stations, see Figure 1) is considered, with the objective of obtaining regional estimates of the flood frequency curves. Four planes are considered, based on a correlation study of the available variables (Table 1): P1: (Qs, CV), P2: (CS, CV), P3: (L-CS, L-CV), and P4: (S, Ic).

Figures 2, 3a, 4 and 5 show the clusters obtained using Iphigénie for planes P1 through P4. Since estimates of skewness (CS) are quite biased and variable for small sample sizes, the influence of sample size on the clusters obtained for P2 was examined. Figure 3b shows the clusters obtained when the network is restricted to the 20 stations for which at least 20 observations of maximum annual flood are available. Fewer clusters are obtained than in Figure 3, but the structure is the same: the additional clusters appearing in Figure 3 may be obtained by breaking up certain large clusters of Figure 3b. In Figure 3c, the sample size of each of the 39 stations of the network is plotted in the plane (CS, CV), to check whether extreme estimated values of CS and CV were caused by small samples. This does not seem to be the case, since many of the most extreme points correspond to long series.

ISODATA was also applied to the network. Based on entropy criteria (Table 2, Figures 6a and 6b), the number of clusters for ISODATA was set to 4. It turns out that the groups obtained using ISODATA are not very fuzzy. The fuzzy groups determined by ISODATA are generally conditioned by a single variable, as shown by Figures 7a-7d, which show the fuzzy clusters obtained for planes P1-P4 (only iso-membership lines of level 0.9 were plotted, to facilitate the analysis): for the hydrologic spaces (P2 and P3) it is skewness (CS and L-CS), and for the physiographic spaces (P1 and P4) it is surface (Qs and S). Regionalization of the 100-year return period flood is performed based on the homogeneous groups obtained (using an index-flood method), and compared with the well-known region of influence (ROI) approach, both under the hypothesis of a 2-parameter Gamma distribution and of a 3-parameter Pareto distribution. For the ROI approach, the threshold defining the size of the ROI of a station is taken to be the distance at which an incoherence first appeared when applying Iphigénie. The correlation of the regional estimate with the local estimate for space P1 is 0.91 for Iphigénie and 0.85 for both ISODATA and the ROI approach. The relative bias of the regional estimates of the 100-year flood based on P1 is plotted in Figure 9 (Gamma distribution) and Figure 10 (Pareto distribution). The three methods give similar results for a Gamma distribution, but Iphigénie estimates are less biased when a Pareto distribution is used. Thus Iphigénie appears superior, in this case, to ISODATA and ROI. Values of bias and standard error for all four planes are given for Iphigénie in Table 3.

Application of an index-flood regionalization approach at ungauged sites requires the estimation of the mean flow (also called the flood index) from physiographic attributes. A regression study shows that the best explanatory variables are the watershed surface S, the shape index Ic and the average slope of the river.
In Figure 8, the observed flood index is plotted against the flood index obtained by regression; the correlation coefficient is 0.93.

Iphigénie and ISODATA could also be used in conjunction with other regionalization methods. For example, when using the ROI approach, the ROI threshold must normally be set somewhat arbitrarily; it has been shown here that it can be obtained as a by-product of Iphigénie. ISODATA is most useful for pattern identification when the data are very fuzzy, unlike the example considered in this paper. But even in the case of the Tunisian network, its application gives indications as to which variables (skewness and surface) are most useful for clustering.
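The coherence-based clustering procedure described above lends itself to a short sketch. The version below is a plain reading of the description (sort pairs by Euclidean distance, declare the closest remaining pair "close" and the farthest "far", stop at the first incoherence with transitivity, read the clusters off the graph of "close" pairs); it is not the authors' implementation, and the synthetic two-group data are purely illustrative.

```python
import numpy as np
from itertools import combinations

def iphigenie_clusters(X):
    """Sketch of the coherence-based ('Iphigénie') clustering described above:
    pairs are sorted by Euclidean distance, the closest remaining pair is
    declared 'close' and the farthest 'far', and the process stops at the
    first incoherence with transitivity.  Clusters are the connected
    components of the 'close' graph.  A plain reading of the abstract, not
    the authors' implementation."""
    n = len(X)
    parent = list(range(n))

    def find(i):                          # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = sorted(combinations(range(n), 2),
                   key=lambda ij: np.linalg.norm(X[ij[0]] - X[ij[1]]))
    far_pairs = []
    lo, hi = 0, len(pairs) - 1
    while lo < hi:
        c, d = pairs[hi]                  # farthest remaining pair -> 'far'
        if find(c) == find(d):            # already transitively 'close'
            break                         # incoherence detected: stop
        far_pairs.append((c, d))
        a, b = pairs[lo]                  # closest remaining pair -> 'close'
        ra, rb = find(a), find(b)
        if any({find(u), find(v)} == {ra, rb} for u, v in far_pairs):
            break                         # linking them would contradict a 'far' pair
        parent[ra] = rb
        lo, hi = lo + 1, hi - 1

    labels = {}
    return np.array([labels.setdefault(find(i), len(labels)) for i in range(n)])

# Illustrative two-group data in a (Qs, CV)-like attribute plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(3.0, 0.3, (10, 2))])
print(iphigenie_clusters(X))              # two clusters recovered for this example
```

On data that separate as cleanly as this synthetic example, the procedure stops just before the first cross-group link and returns the two groups; the number of clusters falls out of the stopping rule rather than being fixed in advance, which is the property the abstract contrasts with ISODATA.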
The a-theorem and the Asymptotics of 4D Quantum Field Theory
We study the possible IR and UV asymptotics of 4D Lorentz invariant unitary
quantum field theory. Our main tool is a generalization of the
Komargodski-Schwimmer proof of the a-theorem. We use this to rule out a
large class of renormalization group flows that do not asymptote to conformal
field theories in the UV and IR. We show that if the IR (UV) asymptotics is
described by perturbation theory, all beta functions must vanish faster than
$1/|\ln\mu|$ as $\mu \to 0$ ($\mu \to \infty$). This implies that the
only possible asymptotics within perturbation theory is conformal field theory.
In particular, it rules out perturbative theories with scale but not conformal
invariance, which are equivalent to theories with renormalization group
pseudocycles. Our arguments hold even for theories with gravitational
anomalies. We also give a non-perturbative argument that excludes theories with
scale but not conformal invariance. This argument holds for theories in which
the stress-energy tensor is sufficiently nontrivial in a technical sense that
we make precise. Comment: 41 pages, 2 figures. v2: arguments clarified, some side comments
corrected, connection to previous work by Jack and Osborn described,
conclusions unaffected.
Strong Enhancement of Superconducting Correlation in a Two-Component Fermion Gas
We study high-density electron-hole (e-h) systems with the electron density
slightly larger than the hole density. We find a new superconducting phase, in
which the excess electrons form Cooper pairs moving in an e-h BCS phase. The
coexistence of the e-h and e-e orders is possible because e and h have opposite
charges, whereas analogous phases are impossible in the case of two fermion
species that have the same charge or are neutral. Most strikingly, the e-h
order enhances the superconducting e-e order parameter by more than one order
of magnitude compared with that given by the BCS formula, for the same value
of the effective e-e attractive potential \lambda^{ee}. This new phase should
be observable in an e-h system created by photoexcitation in doped
semiconductors at low temperatures. Comment: 5 pages including 5 PostScript figures.
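For reference, the comparison baseline mentioned above ("that given by the BCS formula") is presumably the standard weak-coupling gap estimate; the form below uses the usual textbook cutoff and coupling conventions, which need not match the paper's definitions.

```latex
% Standard weak-coupling BCS gap, quoted only as the comparison baseline;
% \omega_c is the pairing cutoff and \lambda^{ee} the dimensionless e-e
% attraction (textbook conventions, not necessarily the paper's).
\begin{equation}
  \Delta_{\mathrm{BCS}} \;\simeq\; 2\,\hbar\omega_c\,
  \exp\!\left(-\frac{1}{\lambda^{ee}}\right).
\end{equation}
```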
Observation of Magnetic Supercooling of the Transition to the Vortex State
We demonstrate that the transition from the high-field state to the vortex
state in a nanomagnetic disk shows the magnetic equivalent of supercooling.
This is evidence that this magnetic transition can be described in terms of a
modified Landau first-order phase transition. To accomplish this we have
measured the bulk magnetization of single magnetic disks using nanomechanical
torsional resonator torque magnetometry. This allows observation of single
vortex creation events without averaging over an array of disks or over
multiple runs. Comment: 11 pages preprint, 4 figures, accepted to New Journal of Physics.
A review of streamflow forecasting methods
In the field of streamflow forecasting, a wide variety of methods are available: stochastic and conceptual models, but also more novel approaches such as artificial neural networks, fuzzy rule-based models, the k-nearest neighbor method, fuzzy regression and regression splines. After a detailed review of these methods and of their recent applications, we propose a classification that highlights both the differences and the similarities between these approaches. They are then compared for the distinct problems of short-, medium- and long-term forecasting. Our recommendations also vary with the level of prior information. For example, when long stationary time series are available, we recommend the nonparametric k-nearest neighbor method for short- and medium-term forecasts. Conversely, for longer-term forecasting from a limited number of observations, we suggest a conceptual model coupled with a meteorological model based on the historical record. Although the emphasis is on streamflow forecasting, much of this review, in particular the part dealing with empirical models, is also relevant to the forecasting of other variables.

A large number of models are available for streamflow forecasting. In this paper we classify and compare nine types of models for short, medium and long-term flow forecasting, according to six criteria: 1. validity of underlying hypotheses, 2. difficulties encountered when building and calibrating the model, 3. difficulties in computing the forecasts, 4. uncertainty modeling, 5. information required by each type of model, and 6. parameter updating. We first distinguish between empirical and conceptual models, the difference being that conceptual models correspond to simplified representations of the watershed, while empirical models only try to capture the structural relationships between inputs to the watershed and outputs, such as streamflow. Amongst empirical models, we distinguish between stochastic models, i.e. models based on the theory of probability, and non-stochastic models. Three types of stochastic models are presented: statistical regression models, Box-Jenkins models, and the nonparametric k-nearest neighbor method. Statistical linear regression is only applicable for long-term forecasting (monthly flows, for example), since it requires independent and identically distributed observations. It is a simple forecasting method, and its hypotheses can be validated a posteriori if sufficient data are available. Box-Jenkins models include linear autoregressive models (AR), linear moving average models (MA), linear autoregressive moving average models (ARMA), periodic ARMA models (PARMA) and ARMA models with auxiliary inputs (ARMAX). They are more suitable for weekly or daily flow forecasting, since they allow for the explicit modeling of time dependence. Efficient methods are available for designing the model and updating the parameters as more data become available. For both statistical linear regression and Box-Jenkins models, the inputs must be uncorrelated and linearly related to the output. Furthermore, the process must be stationary. When it is suspected that the inputs are correlated or have a nonlinear effect on the output, the k-nearest neighbor method may be considered.
This data-based nonparametric approach simply consists in looking, among past observations of the process, for the k events which are most similar to the present situation. A forecast is then built from the flows that were observed for these k events. Obviously, this approach requires a large database and a stationary process. Furthermore, the time required to calibrate the model and compute the forecasts increases rapidly with the size of the database. A clear advantage of stochastic models is that forecast uncertainty may be quantified by constructing a confidence interval. Three types of non-stochastic empirical models are also discussed: artificial neural networks (ANN), fuzzy linear regression and multivariate adaptive regression splines (MARS). ANNs were originally designed as simple conceptual models of the brain. However, for forecasting purposes, these models can be thought of simply as a subset of nonlinear empirical models. In fact, the ANN model most commonly used in forecasting, a multi-layer feed-forward network, corresponds to a nonlinear autoregressive model (NAR). To capture the moving average components of a time series, it is necessary to use recurrent architectures. ANNs are difficult to design and calibrate, and the computation of forecasts is also complex. Fuzzy linear regression makes it possible to extract linear relationships from small data sets, with fewer hypotheses than statistical linear regression. It does not require the observations to be uncorrelated, nor does it require the error variance to be homogeneous. However, the model is very sensitive to outliers, and a posteriori validation of the hypothesis of linearity is not possible for small data sets. MARS models are based on the hypothesis that time series are chaotic rather than stochastic. The main advantage of the method is its ability to model non-stationary processes. The approach is nonparametric, and therefore requires a large data set.

Amongst conceptual models, we distinguish between physical models, hydraulic machines, and fuzzy rule-based systems. Most conceptual hydrologic models are hydraulic machines, in which the watershed is considered to behave like a network of reservoirs. Physical modeling of a watershed would imply using fundamental physical equations, such as the law of conservation of mass, at a small scale. Given the complexity of a watershed, this can be done in practice only for water routing. Consequently, only short-term flow forecasts can be obtained from a physical model, since the effects of precipitation, infiltration and evaporation must be negligible. Fuzzy rule-based systems make it possible to model the water cycle using fuzzy IF-THEN rules, such as "IF it rains a lot in a short period of time, THEN there will be a large flow increase following the concentration time". Each fuzzy quantifier is modeled using a fuzzy number to take into account the uncertainty surrounding it. When sufficient data are available, the fuzzy quantifiers can be constructed from the data. In general, conceptual models require more effort to develop than empirical models. However, for exceptional events, conceptual models can often provide more realistic forecasts, since empirical models are not well suited for extrapolation.

A fruitful approach is to combine conceptual and empirical models.
One way of doing this, called extended streamflow prediction (ESP), is to combine a stochastic model for generating meteorological scenarios with a conceptual model of the watershed.

Based on this review of flow forecasting models, we recommend for short-term forecasting (hourly and daily flows) the use of the k-nearest neighbor method, Box-Jenkins models, water routing models or hydraulic machines. For medium-term forecasting (weekly flows, for example), we recommend the k-nearest neighbor method and Box-Jenkins models, as well as fuzzy rule-based and ESP models. For long-term forecasting (monthly flows), we recommend statistical and fuzzy regression, Box-Jenkins, MARS and ESP models. It is important to choose a type of model that is appropriate for the problem at hand and for which the available information is sufficient. Since each type of model has its own advantages, it can be more efficient to combine different approaches when forecasting streamflow.
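The k-nearest neighbour forecasting idea summarized in the review can be sketched in a few lines: compare the most recent flows with every past window of the same length, keep the k most similar windows, and average the flows that followed them. The window length, k, and the use of a plain mean are illustrative choices, not recommendations from the review.

```python
import numpy as np

def knn_forecast(flows, k=5, window=3):
    """One-step-ahead k-nearest-neighbour forecast: compare the last `window`
    flows with every past window of the same length, keep the k most similar,
    and average the flows that followed them.  Window length, k and the plain
    mean are illustrative choices, not recommendations from the review."""
    flows = np.asarray(flows, dtype=float)
    query = flows[-window:]
    ends = np.arange(window, flows.size - 1)   # candidate windows with a known successor
    dists = np.array([np.linalg.norm(flows[e - window:e] - query) for e in ends])
    nearest = ends[np.argsort(dists)[:k]]
    return flows[nearest].mean()               # mean of the flows observed just after them

# Toy example: a noisy periodic flow series.
rng = np.random.default_rng(0)
t = np.arange(400)
series = 10 + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size)
print(f"forecast for t = {t[-1] + 1}: {knn_forecast(series):.2f}")
```

As the review notes, such a data-based scheme needs a long stationary record, and the search over past windows is what makes it increasingly expensive as the database grows.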
Emergence of skew distributions in controlled growth processes
Starting from a master equation, we derive the evolution equation for the
size distribution of elements in an evolving system, where each element can
grow, divide into two, and produce new elements. We then probe general
solutions of the evolution equation, obtaining such skew distributions as
power-law, log-normal, and Weibull distributions, depending on the growth,
division, and production processes. Specifically, repeated production of elements of
uniform size leads to power-law distributions, whereas production of elements
with sizes drawn from the current distribution, as well as no production of
new elements, results in log-normal distributions. Finally, division into two,
or binary fission, yields Weibull distributions. Numerical simulations are also
carried out, confirming the validity of the obtained solutions. Comment: 9 pages, 3 figures.
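A toy Monte Carlo version of the growth-with-production mechanism discussed above can illustrate the qualitative claim: multiplicative growth alone spreads sizes log-normally, while steadily injecting new unit-size elements fattens the tail towards a power law. The rates and the log-normal growth factor below are illustrative assumptions, not the paper's master equation.

```python
import numpy as np

def simulate(n_steps=2000, growth_sigma=0.05, production_rate=1.0, seed=0):
    """Toy Monte Carlo for the growth/production mechanism discussed above:
    every element grows multiplicatively at each step, while new elements of
    unit size are injected at a constant Poisson rate.  The rates and the
    log-normal growth factor are illustrative assumptions, not the paper's
    master equation."""
    rng = np.random.default_rng(seed)
    sizes = np.array([1.0])
    for _ in range(n_steps):
        sizes = sizes * rng.lognormal(0.0, growth_sigma, sizes.size)            # growth
        sizes = np.concatenate([sizes, np.ones(rng.poisson(production_rate))])  # production
    return sizes

sizes = simulate()
# With uniform-size production the size distribution develops a heavy,
# power-law-like tail; with production_rate=0 the purely multiplicative
# growth leaves a log-normal, in line with the abstract.
print(f"{sizes.size} elements; max/median size ratio = {sizes.max() / np.median(sizes):.1f}")
```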
