Scaling properties of velocity and temperature spectra above the surface friction layer in a convective atmospheric boundary layer
We report velocity and temperature spectra measured at nine levels from 1.42 m up to 25.7 m over a smooth playa in western Utah. Data are from highly convective conditions when the magnitude of the Obukhov length (our proxy for the depth of the surface friction layer) was less than 2 m. Our results are broadly similar to those reported from the Minnesota experiment of Kaimal et al. (1976), but differ significantly in detail. Our velocity spectra show no evidence of buoyant production of kinetic energy at the scale of the thermal structures. We interpret our velocity spectra as the result of outer eddies interacting with the ground, not "local free convection". Since velocity spectra represent the spectral distribution of the kinetic energy of the turbulence, we use energy scales based on the total turbulence energy in the convective boundary layer (CBL) to collapse our spectra. For the horizontal velocity spectra this scale is (zi εo)^(2/3), where zi is the inversion height and εo is the dissipation rate in the bulk CBL. This scale functionally replaces the Deardorff convective velocity scale. Vertical motions are blocked by the ground, so the outer eddies most effective in creating vertical motions come from the inertial subrange of the outer turbulence. We deduce that the appropriate scale for the peak region of the vertical velocity spectra is (z εo)^(2/3), where z is height above ground. Deviations from perfect spectral collapse under these scalings at large and small wavenumbers are explained in terms of the energy transport and the eddy structures of the flow. We find that the peaks of the temperature spectra collapse when wavenumbers are scaled using z^(1/2) zi^(1/2). That is, the lengths of the thermal structures depend on both the lengths of the transporting eddies, ~9z, and the progressive aggregation of the plumes with height into the larger-scale structures of the CBL. This aggregation depends, in top-down fashion, on zi. The whole system is therefore highly organized, with even the smallest structures conforming to the overall requirements of the whole flow.
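The three scaling relations quoted in the abstract can be collected in one place (notation as in the abstract: zi is inversion height, z is height above ground, εo is the bulk-CBL dissipation rate):

```latex
\text{horizontal velocity scale: } (z_i\,\varepsilon_o)^{2/3}, \qquad
\text{vertical velocity scale: } (z\,\varepsilon_o)^{2/3}, \qquad
\text{temperature length scale: } z^{1/2} z_i^{1/2}
```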
Boosting Monte Carlo simulations of spin glasses using autoregressive neural networks
Autoregressive neural networks are emerging as a powerful computational
tool to solve relevant problems in classical and quantum mechanics. One of
their appealing functionalities is that, after they have learned a probability
distribution from a dataset, they allow exact and efficient sampling of typical
system configurations. Here we employ a neural autoregressive distribution
estimator (NADE) to boost Markov chain Monte Carlo (MCMC) simulations of a
paradigmatic classical model of spin-glass theory, namely the two-dimensional
Edwards-Anderson Hamiltonian. We show that a NADE can be trained to accurately
mimic the Boltzmann distribution using unsupervised learning from system
configurations generated using standard MCMC algorithms. The trained NADE is
then employed as a smart proposal distribution for the Metropolis-Hastings
algorithm. This allows us to perform efficient MCMC simulations, which provide
unbiased results even if the probability distribution learned by the NADE
is not exact. Notably, we implement a
sequential tempering procedure, whereby a NADE trained at a higher temperature
is iteratively employed as the proposal distribution in an MCMC simulation run at a
slightly lower temperature. This allows one to efficiently simulate the
spin-glass model even in the low-temperature regime, avoiding the divergent
correlation times that plague MCMC simulations driven by local-update
algorithms. Furthermore, we show that the NADE-driven simulations quickly
sample ground-state configurations, paving the way to their future utilization
to tackle binary optimization problems.
Comment: 13 pages, 14 figures
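The core mechanism the abstract describes, using a learned model as the proposal in Metropolis-Hastings, can be sketched minimally. The target below is a toy two-spin ferromagnet, not the Edwards-Anderson model, and the "learned" proposal is a fixed biased product distribution; both are illustrative stand-ins for the real Boltzmann target and trained NADE.

```python
import math
import random

def target_logp(s, beta=1.0, J=1.0):
    # Boltzmann log-weight of a two-spin pair with energy E = -J*s0*s1.
    return beta * J * s[0] * s[1]

def proposal_sample():
    # Stand-in for NADE sampling: each spin drawn independently,
    # biased toward +1.
    return tuple(1 if random.random() < 0.6 else -1 for _ in range(2))

def proposal_logq(s):
    # Exact proposal log-probability, as an autoregressive model provides.
    return sum(math.log(0.6 if si == 1 else 0.4) for si in s)

def mh_chain(n_steps, seed=0):
    random.seed(seed)
    s = proposal_sample()
    samples = []
    for _ in range(n_steps):
        s_new = proposal_sample()
        # Independence-sampler acceptance: min(1, p(s')q(s) / (p(s)q(s')))
        log_a = (target_logp(s_new) - target_logp(s)
                 + proposal_logq(s) - proposal_logq(s_new))
        if random.random() < math.exp(min(0.0, log_a)):
            s = s_new
        samples.append(s)
    return samples

samples = mh_chain(10_000)
aligned = sum(1 for s in samples if s[0] == s[1]) / len(samples)
print(f"fraction aligned: {aligned:.2f}")  # near exp(2)/(exp(2)+1) ~ 0.88
```

Because the proposal's log-probability is known exactly, the acceptance step corrects any mismatch between the learned distribution and the target, which is why the chain stays unbiased even when the model is imperfect.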
On the Sets of Real Numbers Recognized by Finite Automata in Multiple Bases
This article studies the expressive power of finite automata recognizing sets
of real numbers encoded in positional notation. We consider Muller automata as
well as the restricted class of weak deterministic automata, used as symbolic
set representations in actual applications. In previous work, it has been
established that the sets of numbers that are recognizable by weak
deterministic automata in two bases that do not share the same set of prime
factors are exactly those that are definable in the first order additive theory
of real and integer numbers. This result extends Cobham's theorem, which
characterizes the sets of integer numbers that are recognizable by finite
automata in multiple bases.
In this article, we first generalize this result to multiplicatively
independent bases, which brings it closer to the original statement of Cobham's
theorem. Then, we study the sets of reals recognizable by Muller automata in
two bases. We show with a counterexample that, in this setting, Cobham's
theorem does not generalize to multiplicatively independent bases. Finally, we
prove that the sets of reals that are recognizable by Muller automata in two
bases that do not share the same set of prime factors are exactly those
definable in the first order additive theory of real and integer numbers. These
sets are thus also recognizable by weak deterministic automata. This result
leads to a precise characterization of the sets of real numbers that are
recognizable in multiple bases, and provides a theoretical justification to the
use of weak automata as symbolic representations of sets.
Comment: 17 pages
The Validity of a Novel Staged Exercise Test for Measuring Lactate Metabolism and Performance in Cyclists
Several types of lactate threshold (Tlac) protocols have been developed over the years to maximize accuracy and reliability while maintaining ease of measurement and application to training and performance. PURPOSE: The purpose of this study was to determine the validity of a novel staged maximal lactate steady state exercise test (sMLSS) in predicting the MLSS using the Lactate Plus® (Nova Biomedical, Waltham, MA) analyzer. METHODS: Blood lactate concentration (BLC) was measured in duplicate for all tests. Seven trained cyclists (20 miles per week) performed a V̇O2max test starting at 100W and increasing by 30W every three minutes until volitional fatigue. Lactate threshold was defined as the workload preceding a 2 mmol·L-1 increase in BLC. Next, the sMLSS test was performed starting at the Tlac workload determined previously, then increasing 10W every 15 minutes for a total of three stages. BLC was measured every 3 minutes. MLSS was predicted by visual inspection and defined as a < 1.0 mmol·L-1 increase in the final 6 minutes of the stage. Finally, cyclists performed two to six MLSS exercise tests, adjusting by 5W depending on lactate response, to validate the sMLSS. MLSS was determined as the maximal workload with a < 1 mmol·L-1 increase in BLC in the final 20 minutes. A dependent t-test and Pearson correlation coefficient were used to determine reliability between lactate trials. Bland-Altman plots, one-way ANOVA, and regression analyses were used to analyze differences between the types of exercise tests. RESULTS: There were no significant differences between duplicate BLC trials for all tests (p = 0.21; r = 0.982). The sMLSS was significantly correlated with the MLSS workload and percentage of max workload (r = 0.997, p = 0.001; r = 0.978, p = 0.01), respectively. There was no bias noted between sMLSS and MLSS protocols for predicting lactate accumulation.
CONCLUSION: This novel protocol was determined to be a valid and efficient means of determining lactate performance in recreationally trained cyclists. The sMLSS was effective at reducing testing time from 12 days to 3 days.
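The steady-state decision rule described above can be sketched as a small selection procedure. All workloads and lactate values below are invented for illustration, not the study's data:

```python
# A stage counts as steady state when blood lactate rises by less than
# 1.0 mmol/L over its final measurement window; the MLSS estimate is the
# highest workload that still satisfies that criterion.

def lactate_steady(blc_start, blc_end, threshold=1.0):
    """True if the rise in blood lactate stays under the threshold."""
    return (blc_end - blc_start) < threshold

# Stage data: (workload in W, BLC at window start, BLC at window end).
stages = [(220, 2.1, 2.6), (230, 2.8, 3.5), (240, 3.9, 5.4)]

# Highest workload meeting the steady-state criterion (None if no stage does).
mlss = max((w for w, b0, b1 in stages if lactate_steady(b0, b1)),
           default=None)
print(mlss)  # -> 230
```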
An updated analysis of NN elastic scattering data to 1.6 GeV
An energy-dependent analysis and a set of single-energy partial-wave analyses of
NN elastic scattering data have been completed. The fit to 1.6 GeV has been
supplemented with a low-energy analysis to 400 MeV. Using the low-energy fit,
we study the sensitivity of our analysis to the choice of coupling
constant. We also comment on the possibility of fitting data alone. These
results are compared with those found in the recent Nijmegen analyses. (Figures
may be obtained from the authors upon request.)
Comment: 17 pages of text, VPI-CAPS-7/
The DDO IVC Distance Project: Survey Description and the Distance to G139.6+47.6
We present a detailed analysis of the distance determination for one
Intermediate-Velocity Cloud (IVC G139.6+47.6) from the ongoing DDO IVC Distance
Project. Stars along the line of sight to G139.6+47.6 are examined for the
presence of sodium absorption attributable to the cloud, and the distance
bracket is established by astrometric and spectroscopic parallax measurements
of demonstrated foreground and background stars. We detail our strategy
regarding target selection, observational setup, and analysis of the data,
including a discussion of wavelength calibration and sky subtraction
uncertainties. We find a distance estimate of 129 (+/- 10) pc for the lower
limit and 257 (+211/-33) pc for the upper limit. Given the high number of stars
showing absorption due to this IVC, we also discuss the small-scale covering
factor of the cloud and the likely significance of non-detections for
subsequent observations of this and other similar IVCs. Distance measurements
of the remaining targets in the DDO IVC project will be detailed in a companion
paper.
Comment: 10 pages, 6 figures, LaTeX
Biomass burning and pollution aerosol over North America: Organic components and their influence on spectral optical properties and humidification response
Thermal analysis of aerosol size distributions provided size-resolved volatility up to temperatures of 400°C during extensive flights over North America (NA) for the INTEX/ICARTT experiment in summer 2004. Biomass burning and pollution plumes identified from trace gas measurements were evaluated for their aerosol physiochemical and optical signatures. Measurements of soluble ionic mass and refractory black carbon (BC) mass, inferred from light absorption, were combined with volatility to identify organic carbon volatilized by 400°C (VolatileOC) and the residual or refractory organic carbon, RefractoryOC. This approach characterized distinct constituent mass fractions present in biomass burning and pollution plumes every 5–10 min. Biomass burning, pollution and dust aerosol could be stratified by their combined spectral scattering and absorption properties. The "nonplume" regional aerosol exhibited properties dominated by pollution characteristics near the surface and biomass burning aloft. VolatileOC included most water-soluble organic carbon. RefractoryOC dominated enhanced shortwave absorption in plumes from Alaskan and Canadian forest fires. The mass absorption efficiency of this RefractoryOC was about 0.63 m² g⁻¹ at 470 nm and 0.09 m² g⁻¹ at 530 nm. Concurrent measurements of the humidity dependence of scattering, γ, revealed the OC component to be only weakly hygroscopic, resulting in a general decrease in γ with increasing OC mass fractions. Under ambient humidity conditions, the systematic relations between physiochemical properties and γ lead to a well-constrained dependency on the absorption per unit dry mass for these plume types that may be used to challenge remotely sensed and modeled optical properties.
Switching model with two habitats and a predator involving group defence
A switching model with one predator and two prey species is considered. The
prey species have the ability of group defence; therefore, the predator is
attracted toward the habitat where the prey are fewer in number. The stability
analysis is carried out for two equilibrium values. The theoretical results are
compared with the numerical results for a set of parameter values. A Hopf
bifurcation analysis is done to support the stability results.
Church-Rosser Systems, Codes with Bounded Synchronization Delay and Local Rees Extensions
What is the common link, if there is any, between Church-Rosser systems,
prefix codes with bounded synchronization delay, and local Rees extensions? The
first obvious answer is that each of these notions relates to topics of
interest for WORDS: Church-Rosser systems are certain rewriting systems over
words, codes are given by sets of words which form a basis of a free submonoid
in the free monoid of all words (over a given alphabet) and local Rees
extensions provide structural insight into regular languages over words. So, it
seems to be a legitimate title for an extended abstract presented at the
conference WORDS 2017. However, this work is more ambitious: it outlines a
less obvious but much more interesting link between these topics. This link is
based on a structure theory of finite monoids with varieties of groups and the
concept of local divisors playing a prominent role. Parts of this work appeared
in a similar form in conference proceedings where proofs and further material
can be found.
Comment: Extended abstract of an invited talk given at WORDS 2017
