Laser Doppler technology applied to atmospheric environmental operating problems
Carbon dioxide laser Doppler ground wind data compared very favorably with data from standard anemometers. As a result of these measurements, two breadboard systems were developed for taking research data: a continuous wave velocimeter and a pulsed Doppler system. The scanning continuous wave laser Doppler velocimeter developed for detecting, tracking and measuring aircraft wake vortices was successfully tested at an airport, where it located vortices to an accuracy of 3 meters at a range of 150 meters. The airborne pulsed laser Doppler system was developed to detect and measure clear air turbulence (CAT). This system was tested aboard an aircraft, but jet stream CAT was not encountered. However, low altitude turbulence in cumulus clouds near a mountain range was detected by the system and encountered by the aircraft at the predicted time.
Bayesian Analysis of Inflation II: Model Selection and Constraints on Reheating
We discuss the model selection problem for inflationary cosmology. We couple
ModeCode, a publicly-available numerical solver for the primordial perturbation
spectra, to the nested sampler MultiNest, in order to efficiently compute
Bayesian evidence. Particular attention is paid to the specification of
physically realistic priors, including the parametrization of the
post-inflationary expansion and associated thermalization scale. It is
confirmed that while present-day data tightly constrains the properties of the
power spectrum, it cannot usefully distinguish between the members of a large
class of simple inflationary models. We also compute evidence using a simulated
Planck likelihood, showing that while Planck will have more power than WMAP to
discriminate between inflationary models, it will not definitively address the
inflationary model selection problem on its own. However, Planck will place
very tight constraints on any model with more than one observationally-distinct
inflationary regime -- e.g. the large- and small-field limits of the hilltop
inflation model -- and put useful limits on different reheating scenarios for a
given model.
Comment: ModeCode package available from http://zuserver2.star.ucl.ac.uk/~hiranya/ModeCode/ModeCode (requires CosmoMC and MultiNest); to be published in PRD. Typos fixed.
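The evidence computation described above couples ModeCode to the MultiNest nested sampler. As a minimal stand-in for what that quantity is, the sketch below evaluates the evidence integral Z = ∫ L(θ) π(θ) dθ for two toy one-parameter models with flat priors and forms their Bayes factor; the model forms, prior ranges, and the Gaussian "data" are illustrative assumptions, not ModeCode or MultiNest output.

```python
# Toy illustration of Bayesian evidence and model comparison.
# Hypothetical one-parameter models stand in for the full ModeCode/MultiNest pipeline.
import numpy as np
from scipy import integrate, stats

# Simulated "data": a single hypothetical measurement of one observable.
obs, sigma = 0.96, 0.01            # illustrative value and error, not a real constraint

def likelihood(pred):
    return stats.norm.pdf(obs, loc=pred, scale=sigma)

def evidence(predict, prior_lo, prior_hi):
    """Z = integral of likelihood * prior over the model's single parameter."""
    prior = 1.0 / (prior_hi - prior_lo)            # flat prior
    integrand = lambda theta: likelihood(predict(theta)) * prior
    z, _ = integrate.quad(integrand, prior_lo, prior_hi)
    return z

# Two hypothetical models mapping a parameter theta to the observable.
z_a = evidence(lambda theta: 1.0 - 2.0 / theta, 40.0, 70.0)   # model A
z_b = evidence(lambda theta: 1.0 - theta,        0.0, 0.2)    # model B

print(f"ln Bayes factor (A vs B): {np.log(z_a / z_b):.2f}")
```

In a realistic setting the likelihood is multi-dimensional and expensive to evaluate, which is why a nested sampler such as MultiNest is used in place of direct quadrature.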
First report of lesions resembling red mark syndrome observed in wild-caught common dab (Limanda limanda)
Why Bayesian “evidence for H1” in one condition and Bayesian “evidence for H0” in another condition does not mean good-enough Bayesian evidence for a difference between the conditions
Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
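As a rough numerical illustration of the Tutorial's point (in Python rather than the authors' R script or Shiny app), the sketch below computes Bayes factors with the BIC approximation, BF10 ≈ exp[(BIC_null - BIC_alt)/2], for the effect within each condition and, separately, for the interaction in a simulated 2 × 2 design; the simulated effect sizes and sample size are assumptions chosen only for illustration.

```python
# Illustrative sketch: Bayes factors via the BIC approximation for a simulated 2x2 design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80                                         # hypothetical per-cell sample size
df = pd.DataFrame({
    "cond": np.repeat(["A", "B"], 2 * n),      # between-subjects condition
    "x":    np.tile(np.repeat([0, 1], n), 2),  # independent variable of interest
})
# Hypothetical true effect of x: 0.5 in condition A, 0.2 in condition B.
effect = np.where(df["cond"] == "A", 0.5, 0.2)
df["y"] = effect * df["x"] + rng.normal(size=len(df))

def bf10(data, null_formula, alt_formula):
    """BF10 ~ exp((BIC_null - BIC_alt) / 2), the standard BIC approximation."""
    bic0 = smf.ols(null_formula, data).fit().bic
    bic1 = smf.ols(alt_formula, data).fit().bic
    return np.exp((bic0 - bic1) / 2.0)

# Evidence for the effect of x within each condition ...
print("BF10, condition A:", bf10(df[df.cond == "A"], "y ~ 1", "y ~ x"))
print("BF10, condition B:", bf10(df[df.cond == "B"], "y ~ 1", "y ~ x"))
# ... is not the same question as the evidence for the interaction itself:
print("BF10, interaction:", bf10(df, "y ~ x + cond", "y ~ x * cond"))
```

Even when one within-condition Bayes factor clears a conventional cutoff and the other does not, the interaction Bayes factor can remain equivocal, which is exactly the inferential gap the Tutorial warns about.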
Constructing smooth potentials of mean force, radial distribution functions, and probability densities from sampled data
In this paper a method of obtaining smooth analytical estimates of
probability densities, radial distribution functions and potentials of mean
force from sampled data in a statistically controlled fashion is presented. The
approach is general and can be applied to any density of a single random
variable. The method outlined here avoids the use of histograms, which require
the specification of a physical parameter (bin size) and tend to give noisy
results. The technique is an extension of the Berg-Harris method [B.A. Berg and
R.C. Harris, Comp. Phys. Comm. 179, 443 (2008)], which is typically inaccurate
for radial distribution functions and potentials of mean force due to a
non-uniform Jacobian factor. In addition, the standard method often requires a
large number of Fourier modes to represent radial distribution functions, which
tends to lead to oscillatory fits. It is shown that the issues of poor sampling
due to a Jacobian factor can be resolved using a biased resampling scheme,
while the requirement of a large number of Fourier modes is mitigated through
an automated piecewise construction approach. The method is demonstrated by
analyzing the radial distribution functions in an energy-discretized water
model. In addition, the fitting procedure is illustrated on three more
applications for which the original Berg-Harris method is not suitable, namely,
a random variable with a discontinuous probability density, a density with long
tails, and the distribution of the first arrival times of a diffusing particle
to a sphere, which has both long tails and short-time structure. In all cases,
the resampled, piecewise analytical fit outperforms the histogram and the
original Berg-Harris method.
Comment: 14 pages, 15 figures. To appear in J. Chem. Phys.
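A much-simplified sketch of the histogram-free idea (fit a smooth cumulative distribution and differentiate it, rather than binning) is given below; the statistically controlled mode selection, the Jacobian-biased resampling, and the piecewise construction described above are not reproduced, and the sine-series form with a fixed number of modes is a simplifying assumption.

```python
# Simplified sketch of a histogram-free density estimate: fit the empirical CDF
# with a truncated sine series on [0, 1] and differentiate to get a smooth PDF.
import numpy as np

def smooth_pdf(samples, n_modes=8):
    x = np.sort(np.asarray(samples, dtype=float))
    lo, hi = x[0], x[-1]
    u = (x - lo) / (hi - lo)                      # map samples to [0, 1]
    f_emp = (np.arange(len(u)) + 0.5) / len(u)    # empirical CDF at the samples
    k = np.arange(1, n_modes + 1)
    basis = np.sin(np.pi * np.outer(u, k))        # sine modes vanish at 0 and 1
    coeffs, *_ = np.linalg.lstsq(basis, f_emp - u, rcond=None)

    def pdf(xq):
        uq = np.clip((np.asarray(xq, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
        deriv = 1.0 + np.cos(np.pi * np.outer(uq, k)) @ (np.pi * k * coeffs)
        return deriv / (hi - lo)                  # chain rule back to x units
    return pdf

# Example: smooth estimate of a Gaussian density from 2000 samples.
rng = np.random.default_rng(1)
pdf = smooth_pdf(rng.normal(size=2000))
print(pdf([-1.0, 0.0, 1.0]))
```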
The length of time's arrow
An unresolved problem in physics is how the thermodynamic arrow of time
arises from an underlying time reversible dynamics. We contribute to this issue
by developing a measure of time-symmetry breaking, and by using the work
fluctuation relations, we determine the time asymmetry of recent single
molecule RNA unfolding experiments. We define time asymmetry as the
Jensen-Shannon divergence between trajectory probability distributions of an
experiment and its time-reversed conjugate. Among other interesting properties,
the length of time's arrow bounds the average dissipation and determines the
difficulty of accurately estimating free energy differences in nonequilibrium
experiments.
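As a minimal numerical illustration of the asymmetry measure, the sketch below estimates the Jensen-Shannon divergence between a forward work distribution P_F(W) and the reversed-protocol distribution P_R(-W), using hypothetical Gaussian work values in place of the RNA-unfolding trajectory data analysed in the paper.

```python
# Minimal sketch: Jensen-Shannon divergence between forward and time-reversed
# work distributions, estimated from histogrammed samples (hypothetical Gaussians).
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(2)
w_fwd = rng.normal(loc=2.0, scale=1.5, size=50_000)    # forward-protocol work, k_B T units
w_rev = rng.normal(loc=-1.0, scale=1.5, size=50_000)   # reverse-protocol work, k_B T units

bins = np.linspace(-10.0, 10.0, 201)
p, _ = np.histogram(w_fwd, bins=bins)
q, _ = np.histogram(-w_rev, bins=bins)                  # compare P_F(W) with P_R(-W)
p = p / p.sum()
q = q / q.sum()
m = 0.5 * (p + q)

# JS divergence = average KL divergence of each distribution to their mixture.
js = 0.5 * entropy(p, m) + 0.5 * entropy(q, m)
print(f"estimated length of time's arrow: {js:.3f} nats")
```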
Finding Evidence for Massive Neutrinos using 3D Weak Lensing
In this paper we investigate the potential of 3D cosmic shear to constrain
massive neutrino parameters. We find that if the total mass is substantial
(near the upper limits from LSS, but setting aside the Ly alpha limit for now),
then 3D cosmic shear + Planck is very sensitive to neutrino mass and one may
expect that a next generation photometric redshift survey could constrain the
number of neutrinos N_nu and the sum of their masses m_nu to an accuracy of
dN_nu ~ 0.08 and dm_nu ~ 0.03 eV respectively. If in fact the masses are close
to zero, then the errors weaken to dN_nu ~ 0.10 and dm_nu~0.07 eV. In either
case there is a factor 4 improvement over Planck alone. We use a Bayesian
evidence method to predict joint expected evidence for N_nu and m_nu. We find
that 3D cosmic shear combined with a Planck prior could provide `substantial'
evidence for massive neutrinos and be able to distinguish `decisively' between
many competing massive neutrino models. This technique should `decisively'
distinguish between models in which there are no massive neutrinos and models
in which there are massive neutrinos with |N_nu-3| > 0.35 and m_nu > 0.25 eV.
We introduce the notion of marginalised and conditional evidence when
considering evidence for individual parameter values within a multi-parameter
model.
Comment: 9 pages, 2 Figures, 2 Tables, submitted to Physical Review
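For orientation on how such forecast errors are usually quoted, the sketch below contrasts conditional and marginalised 1σ errors derived from a toy 2 × 2 Fisher matrix for (N_nu, m_nu); the matrix entries are hypothetical, chosen only so the marginalised errors land near the values quoted above, and this is not the 3D cosmic shear or evidence machinery of the paper.

```python
# Toy sketch of a forecast ingredient: conditional vs marginalised 1-sigma errors
# from a Fisher matrix. The entries below are hypothetical round numbers.
import numpy as np

fisher = np.array([[160.0,   40.0],      # rows/cols: N_nu, m_nu [eV]
                   [ 40.0, 1300.0]])

conditional  = 1.0 / np.sqrt(np.diag(fisher))           # other parameter held fixed
marginalised = np.sqrt(np.diag(np.linalg.inv(fisher)))  # other parameter marginalised over

for name, c, m in zip(["N_nu", "m_nu [eV]"], conditional, marginalised):
    print(f"{name}: conditional {c:.3f}, marginalised {m:.3f}")
```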
Gravitational oscillations in a multidimensional anisotropic model with a cosmological constant and their contribution to the vacuum energy
Classical oscillations of the background metric are studied in a multidimensional anisotropic Kasner model during the de Sitter stage. The dependence of the fluctuations on the dimension of a space-time undergoing infinite expansion is obtained. Stability of the model is achieved when the number of space-like dimensions is four or more. The contributions to the "vacuum energy" density provided by the proper oscillations of the background metric are calculated and compared with the contribution from the cosmological creation of particles due to the expansion. It turns out that the contribution of the gravitational oscillations of the metric to the "vacuum energy" density should play a significant role in the de Sitter stage.
Direct reconstruction of the quintessence potential
We describe an algorithm which directly determines the quintessence potential
from observational data, without using an equation of state parametrisation.
The strategy is to numerically determine observational quantities as a function
of the expansion coefficients of the quintessence potential, which are then
constrained using a likelihood approach. We further impose a model selection
criterion, the Bayesian Information Criterion, to determine the appropriate
level of the potential expansion. In addition to the potential parameters, the
present-day quintessence field velocity is kept as a free parameter. Our
investigation contains unusual model types, including a scalar field moving on
a flat potential, or in an uphill direction, and is general enough to permit
oscillating quintessence field models. We apply our method to the `gold' Type
Ia supernovae sample of Riess et al. (2004), confirming the pure cosmological
constant model as the best description of current supernovae
luminosity-redshift data. Our method is optimal for extracting quintessence
parameters from future data.
Comment: 9 pages RevTeX4 with lots of incorporated figures
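The model selection step mentioned above, the Bayesian Information Criterion, can be illustrated with a toy example: BIC = -2 ln L_max + k ln N, evaluated for polynomial fits of increasing order to synthetic data, with the lowest value picking the expansion level. The data, noise level, and the use of a plain polynomial here are illustrative assumptions, not the paper's quintessence expansion.

```python
# Minimal sketch of BIC-based selection of an expansion order.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 60)
sigma = 0.05
y = 1.0 + rng.normal(scale=sigma, size=x.size)   # synthetic truth: a constant ("flat potential")

def bic(order):
    coeffs = np.polyfit(x, y, deg=order)
    chi2 = np.sum(((y - np.polyval(coeffs, x)) / sigma) ** 2)   # -2 ln L_max up to a constant
    return chi2 + (order + 1) * np.log(x.size)                  # k ln N complexity penalty

for order in range(4):
    print(f"expansion order {order}: BIC = {bic(order):.1f}")
# The lowest BIC should single out order 0, the constant model, playing the role
# the pure cosmological constant plays for the supernova data in the paper.
```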
Prediction and explanation in the multiverse
Probabilities in the multiverse can be calculated by assuming that we are
typical representatives in a given reference class. But is this class well
defined? What should be included in the ensemble in which we are supposed to be
typical? There is a widespread belief that this question is inherently vague,
and that there are various possible choices for the types of reference objects
which should be counted in. Here we argue that the ``ideal'' reference class
(for the purpose of making predictions) can be defined unambiguously in a
rather precise way, as the set of all observers with identical information
content. When the observers in a given class perform an experiment, the class
branches into subclasses who learn different information from the outcome of
that experiment. The probabilities for the different outcomes are defined as
the relative numbers of observers in each subclass. For practical purposes,
wider reference classes can be used, where we trace over all information which
is uncorrelated to the outcome of the experiment, or whose correlation with it
is beyond our current understanding. We argue that, once we have gathered all
practically available evidence, the optimal strategy for making predictions is
to consider ourselves typical in any reference class we belong to, unless we
have evidence to the contrary. In the latter case, the class must be
correspondingly narrowed.
Comment: Minor clarifications added.
