
    Measurement Invariance, Entropy, and Probability

We show that the natural scaling of measurement for a particular problem defines the most likely probability distribution of observations taken from that measurement scale. Our approach extends the method of maximum entropy to use measurement scale as a type of information constraint. We argue that a very common measurement scale is linear at small magnitudes grading into logarithmic at large magnitudes, leading to observations that often follow Student's probability distribution, which has a Gaussian shape for small fluctuations from the mean and a power law shape for large fluctuations from the mean. An inverse scaling often arises in which measures naturally grade from logarithmic to linear as one moves from small to large magnitudes, leading to observations that often follow a gamma probability distribution. A gamma distribution has a power law shape for small magnitudes and an exponential shape for large magnitudes. The two measurement scales are natural inverses connected by the Laplace integral transform. This inversion connects the two major scaling patterns commonly found in nature. We also show that superstatistics is a special case of an integral transform, and thus can be understood as a particular way in which to change the scale of measurement. Incorporating information about measurement scale into maximum entropy provides a general approach to the relations between measurement, information and probability.
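The tail behaviour described above is easy to check numerically. The sketch below uses unnormalized densities and illustrative parameter values (nu = 3, k = 0.5 are assumptions, not values from the abstract) to confirm the power-law tail of Student's distribution and the power-law-to-exponential crossover of the gamma distribution.

```python
import math

def t_density(x, nu):
    """Unnormalized Student-t density: (1 + x^2/nu)^(-(nu+1)/2)."""
    return (1.0 + x * x / nu) ** (-(nu + 1) / 2.0)

def gamma_density(x, k):
    """Unnormalized gamma density: x^(k-1) * exp(-x)."""
    return x ** (k - 1.0) * math.exp(-x)

# Student-t tail falls as a power law x^-(nu+1): doubling x in the far
# tail divides the density by roughly 2^(nu+1) (= 16 for nu = 3).
nu = 3.0
tail_ratio = t_density(50.0, nu) / t_density(100.0, nu)

# Gamma: power-law behaviour x^(k-1) at small magnitudes ...
k = 0.5
small_ratio = gamma_density(1e-4, k) / gamma_density(4e-4, k)
# ... and essentially exponential decay (factor e per unit x) at large ones.
large_ratio = gamma_density(30.0, k) / gamma_density(31.0, k)
```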

    General heatbath algorithm for pure lattice gauge theory

A heatbath algorithm is proposed for pure SU(N) lattice gauge theory based on the Manton action of the plaquette element for general gauge group N. Comparison is made to the Metropolis thermalization algorithm using both the Wilson and Manton actions. The heatbath algorithm is found to outperform the Metropolis algorithm in both execution speed and decorrelation rate. Results, mostly in D=3, for N=2 through 5 at several values of the inverse coupling are presented.
Comment: 9 pages, 10 figures, 1 table, major revision, final version, to appear in PR
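The heatbath-versus-Metropolis contrast in decorrelation rate can be illustrated on a target far simpler than SU(N) gauge theory. The sketch below uses a unit-Gaussian "action" (an assumption for illustration, not the Manton action): heatbath draws each new value directly from the local distribution, so successive samples decorrelate immediately, while small Metropolis steps leave a long autocorrelation.

```python
import math
import random
import statistics

random.seed(1)
N = 20000

# Metropolis chain targeting exp(-x^2/2) with a small local proposal step.
x, metro = 0.0, []
for _ in range(N):
    prop = x + random.uniform(-0.5, 0.5)
    # Accept with probability min(1, exp(-(S(prop) - S(x)))), S(x) = x^2/2.
    if random.random() < math.exp(0.5 * (x * x - prop * prop)):
        x = prop
    metro.append(x)

# "Heatbath": sample the new value directly from the target distribution.
heat = [random.gauss(0.0, 1.0) for _ in range(N)]

def lag1_autocorr(series):
    """Lag-1 autocorrelation, a crude proxy for the decorrelation rate."""
    m = statistics.fmean(series)
    num = sum((a - m) * (b - m) for a, b in zip(series, series[1:]))
    den = sum((a - m) ** 2 for a in series)
    return num / den
```

With these settings the Metropolis chain retains a lag-1 autocorrelation close to one, while the heatbath samples are statistically independent.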

    Differential cross section analysis in kaon photoproduction using associated Legendre polynomials

Angular distributions of differential cross sections from the latest CLAS data sets \cite{bradford}, for the reaction $\gamma + p \to K^{+} + \Lambda$, have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. \cite{fasano}, where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion in associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating their posterior probabilities. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
Comment: Talk given at APFB08, Depok, Indonesia, August, 19-23, 200
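A minimal sketch of the model-selection idea, on synthetic data and with the Bayesian information criterion as a cheap stand-in for the posterior model probabilities computed in the analysis (the truth coefficients and noise level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
cos_theta = np.linspace(-0.9, 0.9, 40)

# Synthetic "differential cross section": the truth needs only P0..P2.
true_coeffs = [1.0, 0.3, -0.2]
data = np.polynomial.legendre.legval(cos_theta, true_coeffs) \
    + rng.normal(0.0, 0.02, cos_theta.size)

def bic(max_order):
    """Least-squares Legendre fit up to max_order; BIC penalizes extra
    polynomials much like a posterior model probability would."""
    coeffs = np.polynomial.legendre.legfit(cos_theta, data, max_order)
    resid = data - np.polynomial.legendre.legval(cos_theta, coeffs)
    n, k = cos_theta.size, max_order + 1
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

bics = {order: bic(order) for order in range(6)}
best_order = min(bics, key=bics.get)
```

Models truncated below the true order are heavily disfavoured, while higher orders gain too little fit quality to pay their complexity penalty.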

    A Bayesian approach to the follow-up of candidate gravitational wave signals

Ground-based gravitational wave laser interferometers (LIGO, GEO-600, Virgo and TAMA-300) have now reached high sensitivity and duty cycle. We present a Bayesian evidence-based approach to the search for gravitational waves, in particular aimed at the follow-up of candidate events generated by the analysis pipeline. We introduce and demonstrate an efficient method to compute the evidence and odds ratio between different models, and illustrate this approach using the specific case of the gravitational wave signal generated during the inspiral phase of binary systems, modelled at the leading quadrupole Newtonian order, in synthetic noise. We show that the method is effective in detecting signals at the detection threshold and is robust against (some types of) instrumental artefacts. The computational efficiency of this method makes it scalable to the analysis of all the triggers generated by the analysis pipelines to search for coalescing binaries in surveys with ground-based interferometers, and to a whole variety of signal waveforms characterised by a larger number of parameters.
Comment: 9 page
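The evidence and odds-ratio computation can be sketched for a toy one-parameter signal model in Gaussian noise. The template, prior range, and injected amplitude below are illustrative assumptions, not the inspiral waveform or priors of the paper; the signal-model evidence is the likelihood marginalized over amplitude.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0
template = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))  # assumed known shape
data = 0.8 * template + rng.normal(0.0, sigma, template.size)

def log_likelihood(amp):
    """Gaussian-noise log likelihood for a signal amp * template."""
    resid = data - amp * template
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Signal-model evidence: marginalize the amplitude over a flat prior on
# [-2, 2] with a simple Riemann sum (log-sum-exp for stability).
amps = np.linspace(-2.0, 2.0, 401)
log_l = np.array([log_likelihood(a) for a in amps])
peak = log_l.max()
evidence = np.sum(np.exp(log_l - peak)) * (amps[1] - amps[0]) / 4.0
log_evidence_signal = np.log(evidence) + peak

# The noise-only model has no free parameters: its evidence is a likelihood.
log_evidence_noise = log_likelihood(0.0)
log_odds = log_evidence_signal - log_evidence_noise
```

A clearly positive log odds ratio favours the signal model despite the Occam penalty the marginalization applies to its free amplitude.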

    Results for the response function determination of the Compact Neutron Spectrometer

The Compact Neutron Spectrometer (CNS) is a Joint European Torus (JET) Enhancement Project, designed for fusion diagnostics in different plasma scenarios. The CNS is based on a liquid scintillator (BC501A) which allows good discrimination between neutron and gamma radiation. Neutron spectrometry with a BC501A spectrometer requires the use of a reliable, fully characterized detector. The determination of the response matrix was carried out at the Ion Accelerator Facility (PIAF) of the Physikalisch-Technische Bundesanstalt (PTB). This facility provides several monoenergetic beams (2.5, 8, 10, 12 and 14 MeV) and a 'white field' (Emax ~17 MeV), which allows for a full characterization of the spectrometer in the region of interest (from ~1.5 MeV to ~17 MeV). The energy of the incoming neutrons was determined by the time-of-flight (TOF) method, with a time resolution on the order of 1 ns. To check the response matrix, the measured pulse height spectra were unfolded with the code MAXED and the resulting energy distributions were compared with those obtained from TOF. The CNS project required modification of the PTB BC501A spectrometer design, to replace an analog data acquisition system (NIM modules) with a digital system developed by the 'Ente per le Nuove tecnologie, l'Energia e l'Ambiente' (ENEA). Results for the new digital system were evaluated using new software developed specifically for this project.
Comment: Proceedings of FNDA 201
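The TOF energy determination rests on simple kinematics: the neutron speed follows from the flight path and arrival time, and the kinetic energy from the speed. A non-relativistic sketch, adequate for MeV neutrons (the 12 m flight path is an assumed illustrative value, not the PTB geometry):

```python
# Neutron kinetic energy from time of flight, non-relativistic
# (fine well below the neutron rest energy).
NEUTRON_REST_MEV = 939.565  # neutron rest energy, MeV
C_M_PER_NS = 0.299792458    # speed of light, m/ns

def tof_energy_mev(flight_path_m, tof_ns):
    """Kinetic energy in MeV: E = (1/2) m v^2 = (1/2) (m c^2) beta^2."""
    beta = flight_path_m / (tof_ns * C_M_PER_NS)
    return 0.5 * NEUTRON_REST_MEV * beta ** 2

# A 2.45 MeV (D-D) neutron over an assumed 12 m path takes roughly 554 ns.
energy = tof_energy_mev(12.0, 554.3)
```

Since E scales as 1/t^2, a ~1 ns timing resolution translates into a relative energy resolution of about 2*dt/t, which is why sub-ns timing matters at the upper end of the energy range.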

    Bayesian feedback control of a two-atom spin-state in an atom-cavity system

We experimentally demonstrate real-time feedback control of the joint spin-state of two neutral caesium atoms inside a high-finesse optical cavity. The quantum states are discriminated by their different cavity transmission levels. A Bayesian update formalism is used to estimate state occupation probabilities as well as transition rates. We stabilize the balanced two-atom mixed state, which is deterministically inaccessible, via feedback control and find very good agreement with Monte Carlo simulations. On average, the feedback loop achieves near-optimal conditions, steering the system to the target state in a time only marginally exceeding that needed to retrieve information about its state.
Comment: 4 pages, 4 figure
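The Bayesian update formalism can be sketched as a two-state filter with Gaussian transmission likelihoods: each step first propagates the occupation probability with the transition-rate model, then reweights it by the likelihood of the newly observed transmission. All numbers below are illustrative assumptions, not the experimental values.

```python
import math
import random

random.seed(7)

# Two states with distinct mean cavity transmission levels (illustrative).
MEANS, SIGMA = (1.0, 3.0), 0.7
P_FLIP = 0.02  # assumed per-step transition probability between states

def bayes_step(p0, obs):
    """One filter update: predict with the rate model, then apply Bayes'
    rule with Gaussian likelihoods for the observed transmission."""
    p0 = p0 * (1.0 - P_FLIP) + (1.0 - p0) * P_FLIP            # predict
    l0 = math.exp(-0.5 * ((obs - MEANS[0]) / SIGMA) ** 2)
    l1 = math.exp(-0.5 * ((obs - MEANS[1]) / SIGMA) ** 2)
    return p0 * l0 / (p0 * l0 + (1.0 - p0) * l1)              # update

# Feed the filter a run of noisy transmissions emitted from state 0.
p, history = 0.5, []
for _ in range(50):
    p = bayes_step(p, random.gauss(MEANS[0], SIGMA))
    history.append(p)
```

In a feedback loop, this estimated occupation probability is what the controller would act on to steer the system toward the target state.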

    A Parameterization Invariant Approach to the Statistical Estimation of the CKM Phase α

In contrast to previous analyses, we demonstrate a Bayesian approach to the estimation of the CKM phase α that is invariant to parameterization. We also show that, in addition to computing the marginal posterior in a Bayesian manner, the distribution must also be interpreted from a subjective Bayesian viewpoint. Doing so gives a very natural interpretation to the distribution. We also comment on the effect of removing information about $\mathcal{B}^{00}$.
Comment: 14 pages, 3 figures, 1 table, minor revision; to appear in JHE

    The history of mass assembly of faint red galaxies in 28 galaxy clusters since z=1.3

We measure the relative evolution of the number of bright and faint (as faint as 0.05 L*) red galaxies in a sample of 28 clusters, of which 16 are at 0.50 <= z <= 1.27, all observed through a pair of filters bracketing the rest-frame 4000 Angstrom break. The abundance of faint red galaxies, relative to bright ones, is constant over the whole studied redshift range, 0 < z < 1.3, and rules out a differential evolution between bright and faint red galaxies as large as claimed in some past works. Faint red galaxies are largely assembled and in place at z = 1.3, and their deficit does not depend on cluster mass, parametrized by velocity dispersion or X-ray luminosity. Compared with previous analyses, ours samples a wider redshift range, minimizes systematics and pays more attention to statistical issues, while keeping a large number of clusters.
Comment: MNRAS, 386, 1045. Half a single sentence (in sec 4.4) change

    Fitting in a complex chi^2 landscape using an optimized hypersurface sampling

Fitting a data set with a parametrized model can be seen geometrically as finding the global minimum of the chi^2 hypersurface, which depends on a set of parameters {P_i}. This is usually done using the Levenberg-Marquardt algorithm. The main drawback of this algorithm is that, despite its fast convergence, it can get stuck if the parameters are not initialized close to the final solution. We propose a modification of the Metropolis algorithm that introduces a parameter step tuning to optimize the sampling of parameter space. The ability of the parameter tuning algorithm, together with simulated annealing, to find the global chi^2 hypersurface minimum, jumping across chi^2({P_i}) barriers when necessary, is demonstrated with synthetic functions and with real data.
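A minimal sketch of the combination described above: a Metropolis sampler whose step size is tuned from the acceptance rate, annealed on a toy chi^2 landscape with many local minima. The landscape, tuning rule, and cooling schedule are illustrative assumptions, not the paper's exact prescription.

```python
import math
import random

random.seed(3)

def chi2(params):
    """Toy chi^2 with a grid of local minima; the global minimum is 0 at (2, -1)."""
    x, y = params
    return ((x - 2.0) ** 2 + (y + 1.0) ** 2
            + 2.0 * (1.0 - math.cos(3.0 * (x - 2.0)))
            + 2.0 * (1.0 - math.cos(3.0 * (y + 1.0))))

point = [0.0, 0.0]
cost = chi2(point)
best, best_cost = list(point), cost
step, temperature = 1.0, 5.0

for sweep in range(200):
    accepted = 0
    for _ in range(50):
        trial = [point[0] + random.uniform(-step, step),
                 point[1] + random.uniform(-step, step)]
        trial_cost = chi2(trial)
        # Metropolis rule: downhill always, uphill with Boltzmann probability,
        # which is what lets the walker jump across chi^2 barriers.
        if trial_cost < cost or random.random() < math.exp((cost - trial_cost) / temperature):
            point, cost = trial, trial_cost
            accepted += 1
            if trial_cost < best_cost:
                best, best_cost = list(trial), trial_cost
    # Step tuning: grow the step when acceptance is high, shrink it when low,
    # keeping the sampling of parameter space efficient at every temperature.
    step *= 1.1 if accepted > 25 else 0.9
    temperature *= 0.97  # simulated-annealing cooling schedule
```

Levenberg-Marquardt started at (0, 0) would converge to the nearest local minimum; the annealed, step-tuned walker instead locates the global basin.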

    Symmetrization and enhancement of the continuous Morlet transform

The forward and inverse wavelet transforms using the continuous Morlet basis may be symmetrized by using an appropriate normalization factor. The loss of response due to wavelet truncation is addressed through a renormalization of the wavelet based on power. The spectral density has physical units which may be related to the squared amplitude of the signal, as do its margins, the mean wavelet power and the integrated instant power, giving a quantitative estimate of the power density with temporal resolution. Deconvolution with the wavelet response matrix reduces spectral leakage and produces an enhanced wavelet spectrum providing maximum resolution of the harmonic content of a signal. Applications to data analysis are discussed.
Comment: 12 pages, 8 figures, 2 tables, minor revision, final versio
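A sketch of a continuous Morlet transform with a scale-dependent normalization factor, following the common FFT-based construction rather than the paper's exact conventions (the wavelet parameter omega0 = 6 and the test signal are assumptions). Locating the peak of the mean wavelet power recovers the frequency of a pure tone.

```python
import numpy as np

def morlet_cwt(signal, dt, scales, omega0=6.0):
    """Continuous Morlet transform via the FFT, with a sqrt(s)-type
    normalization applied to each daughter wavelet in Fourier space."""
    n = signal.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier-domain Morlet daughter wavelet at scale s (analytic: omega > 0).
        psi_hat = (np.pi ** -0.25) * np.sqrt(2.0 * np.pi * s / dt) \
            * np.exp(-0.5 * (s * omega - omega0) ** 2) * (omega > 0)
        out[i] = np.fft.ifft(sig_hat * np.conj(psi_hat))
    return out

dt = 0.01
t = np.arange(0.0, 10.0, dt)
signal = np.sin(2.0 * np.pi * 2.0 * t)          # a 2 Hz tone
scales = np.linspace(0.02, 0.5, 60)
power = np.abs(morlet_cwt(signal, dt, scales)) ** 2

# Mean wavelet power (the margin over time) peaks at the tone's scale;
# for the Morlet wavelet, frequency ~ omega0 / (2*pi*scale).
peak_scale = scales[np.argmax(power.mean(axis=1))]
peak_freq = 6.0 / (2.0 * np.pi * peak_scale)
```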