Constraints on Higgs Properties and SUSY Partners in the pMSSM
Direct searches for superpartners and precision measurements of the
properties of the Higgs boson lead to important interdependent
constraints on the underlying parameter space of the MSSM. The 19/20-parameter
p(henomenological)MSSM offers a flexible framework for the study of a wide
variety of both Higgs and SUSY phenomena at the LHC and elsewhere. Within this
scenario we address the following questions: `What will potentially null
searches for SUSY at the LHC tell us about the possible properties of the Higgs
boson?' and, conversely, `What do precision measurements of the properties of
the Higgs tell us about the possible properties of the various superpartners?'
Clearly the answers to such questions will be functions of both the collision
energy of the LHC and the accumulated integrated luminosity. We address
these questions employing several sets of pMSSM models having either neutralino
or gravitino LSPs, making use of the ATLAS SUSY analyses at the 7/8 TeV LHC as
well as planned SUSY and Higgs analyses at the 14 TeV LHC and the ILC. Except
for theoretical uncertainties that remain to be accounted for in the ratios of
SUSY and SM couplings, we demonstrate that Higgs coupling measurements at the
14 TeV LHC, and particularly at the 500 GeV ILC, will be sensitive to regions
of the pMSSM model space that are not accessible to direct SUSY searches.

Comment: 23 pages, 9 figures. Contributed to the Community Summer Study 2013,
Minneapolis, MN, July 29 - August 6, 2013.
Identifying Finite-Time Coherent Sets from Limited Quantities of Lagrangian Data
A data-driven procedure for identifying the dominant transport barriers in a
time-varying flow from limited quantities of Lagrangian data is presented. Our
approach partitions state space into pairs of coherent sets, which are sets of
initial conditions chosen to minimize the number of trajectories that "leak"
from one set to the other under the influence of a stochastic flow field during
a pre-specified interval in time. In practice, this partition is computed by
posing an optimization problem, which once solved, yields a pair of functions
whose signs determine set membership. From prior experience with synthetic,
"data rich" test problems and conceptually related methods based on
approximations of the Perron-Frobenius operator, we observe that the functions
of interest typically appear to be smooth. As a result, given a fixed amount
of data, our approach, which can use sets of globally supported basis
functions, has the potential to approximate the desired functions more
accurately than methods restricted to compactly supported indicator functions. This
difference enables our approach to produce effective approximations of pairs of
coherent sets in problems with relatively limited quantities of Lagrangian
data, which is usually the case with real geophysical data. We apply this
method to three examples of increasing complexity: the first is the double
gyre, the second is the Bickley Jet, and the third is data from numerically
simulated drifters in the Sulu Sea.

Comment: 14 pages, 7 figures.
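As an illustration of the transfer-operator point of view that the abstract calls "conceptually related," the following sketch bins a one-dimensional domain, estimates a transition matrix from trajectory endpoints, and reads a coherent pair off the sign of the second left singular vector. The binning, the synthetic "drifter" dynamics, and the SVD-based split are illustrative assumptions, not the paper's basis-function optimization:

```python
import numpy as np

def coherent_pair(x0, x1, n_bins=10):
    """Estimate a row-stochastic transition matrix on [0, 1] from
    trajectory endpoints; the sign of the second left singular vector
    splits the bins into two minimally leaking sets."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    i = np.clip(np.digitize(x0, edges) - 1, 0, n_bins - 1)
    j = np.clip(np.digitize(x1, edges) - 1, 0, n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    np.add.at(P, (i, j), 1.0)                      # count transitions
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)
    U, s, Vt = np.linalg.svd(P)
    return np.sign(U[:, 1])                        # bin labels (+1 / -1)

# synthetic data: each point stays in its half of [0, 1] with
# probability 0.8 and "leaks" to the other half with probability 0.2
rng = np.random.default_rng(1)
x0 = rng.uniform(0.0, 1.0, 10000)
left = x0 < 0.5
leak = rng.uniform(size=x0.size) < 0.2
u = rng.uniform(0.0, 0.5, x0.size)
x1 = np.where(left ^ leak, u, u + 0.5)             # end in left half iff left XOR leak

labels = coherent_pair(x0, x1)
print(labels)   # one sign on bins 0-4 (left half), the opposite on bins 5-9
```

The second singular value here is far below one, reflecting the 20% leakage rate; the abstract's method targets the same quantity (trajectories that leak between sets) without requiring a dense grid of data.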
A Data-Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition
The Koopman operator is a linear but infinite-dimensional operator that
governs the evolution of scalar observables defined on the state space of an
autonomous dynamical system, and is a powerful tool for the analysis and
decomposition of nonlinear dynamical systems. In this manuscript, we present a
data-driven method for approximating the leading eigenvalues, eigenfunctions,
and modes of the Koopman operator. The method requires a data set of snapshot
pairs and a dictionary of scalar observables, but does not require explicit
governing equations or interaction with a "black box" integrator. We will show
that this approach is, in effect, an extension of Dynamic Mode Decomposition
(DMD), which has been used to approximate the Koopman eigenvalues and modes.
Furthermore, if the data provided to the method are generated by a Markov
process instead of a deterministic dynamical system, the algorithm approximates
the eigenfunctions of the Kolmogorov backward equation, which could be
considered as the "stochastic Koopman operator" [1]. Finally, four illustrative
examples are presented: two that highlight the quantitative performance of the
method when presented with either deterministic or stochastic data, and two
that show potential applications of the Koopman eigenfunctions.
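The procedure described above (snapshot pairs plus a dictionary of observables, with no need for explicit governing equations) can be sketched in a few lines. The monomial dictionary and the linear toy map below are illustrative choices, not examples from the manuscript:

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Approximate the Koopman operator on the span of the dictionary
    from snapshot pairs (x_n, y_n) with y_n = F(x_n)."""
    PsiX = dictionary(X)                   # (n_samples, n_dict)
    PsiY = dictionary(Y)
    # least-squares fit: PsiX @ K ~= PsiY; eigenvalues of K approximate
    # Koopman eigenvalues restricted to the dictionary's span
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    eigvals, eigvecs = np.linalg.eig(K)
    return K, eigvals, eigvecs

# dictionary of monomials {1, x, x^2, x^3} for a 1-D state
def dictionary(x):
    return np.column_stack([x**k for k in range(4)])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 200)
Y = 0.9 * X                                # toy dynamics x -> 0.9 x
_, eigvals, _ = edmd(X, Y, dictionary)
print(np.sort(eigvals.real))               # close to [0.729, 0.81, 0.9, 1.0]
```

For this linear map the monomial x^k is an exact Koopman eigenfunction with eigenvalue 0.9^k, so the recovered spectrum can be checked by hand; with only the first-degree dictionary {1, x} the procedure reduces to standard DMD, which is the extension the abstract describes.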
Realtime market microstructure analysis: online Transaction Cost Analysis
Motivated by the practical challenge in monitoring the performance of a large
number of algorithmic trading orders, this paper provides a methodology that
leads to automatic discovery of the causes that lie behind a poor trading
performance. It also gives theoretical foundations to a generic framework for
real-time trading analysis. The academic literature provides different ways to
formalize these algorithms and to show how optimal they can be from a
mean-variance, stochastic-control, impulse-control, or statistical-learning
viewpoint. This paper is agnostic about the way the algorithm has been
built and provides a theoretical formalism to identify in real-time the market
conditions that influenced its efficiency or inefficiency. For a given set of
characteristics describing the market context, selected by a practitioner, we
first show how a set of additional derived explanatory factors, called anomaly
detectors, can be created for each market order. We then present an online
methodology to quantify how this extended set of factors, at any given time,
predicts which of the orders are underperforming while calculating the
predictive power of this explanatory factor set. Armed with this information,
which we call influence analysis, we intend to empower the order monitoring
user to take appropriate action on any affected orders by re-calibrating the
trading algorithms working the order through new parameters, pausing their
execution, or taking over with more direct trading control. We also intend for
this method to be used in the post-trade analysis of algorithms to
automatically adjust their trading actions.

Comment: 33 pages, 12 figures.
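A rough sketch of how an online methodology of this kind might look (a minimal stand-in, not the paper's formalism): hypothetical anomaly-detector features for each order are regressed online against an underperformance flag with logistic SGD, and the learned weights indicate which market conditions predict poor execution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineInfluence:
    """Online logistic regression over anomaly-detector features;
    one SGD step per observed order outcome."""
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def update(self, x, underperforming):
        p = sigmoid(self.w @ x + self.b)   # predicted underperformance prob
        g = p - underperforming            # gradient of the log-loss
        self.w -= self.lr * g * x
        self.b -= self.lr * g
        return p

# synthetic stream: feature 0 (say, a "spread spike" detector) truly
# drives underperformance; feature 1 is pure noise
rng = np.random.default_rng(2)
model = OnlineInfluence(n_features=2)
for _ in range(5000):
    x = rng.normal(size=2)
    y = float(rng.uniform() < sigmoid(3.0 * x[0]))
    model.update(x, y)
print(model.w)   # the weight on feature 0 should dominate
```

The feature names and the logistic model are assumptions for illustration; the point is only that an incrementally updated predictor of underperformance, plus inspection of its weights, gives the monitoring user the "influence analysis" signal described above in real time.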
An investigation into a contactless photoplethysmographic mobile application to record heart rate post-exercise: Implications for field testing
Purpose: this study investigated a contactless photoplethysmographic mobile application (CPA) to record post-exercise heart rate and estimate maximal aerobic capacity after the Queen’s College Step Test. It was hypothesised that the CPA may present a cost-effective heart rate measurement tool for educators and practitioners with limited access to specialised laboratory equipment.
Materials and Methods: seventeen participants (eleven males and six females, 28 ± 9 years, 75.5 ± 15.5 kg, 173.6 ± 9.8 cm) had their heart rate measured immediately after the 3-min test simultaneously using the CPA, a wireless heart rate monitor (HRM) and manually via palpation of the radial artery (MAN).
Results: both the CPA and MAN measurements had high variance compared to the HRM (CV = 31 and 11% respectively, ES = 1.79 and 0.65 respectively), and there were no significant correlations between the methods. Maximal oxygen consumption was estimated 17% higher in CPA compared to HRM (p < 0.001).
Conclusions: it is recommended that field practitioners exercise caution and assess the accuracy of new freely available technologies if they are to be used in practice.
pMSSM Benchmark Models for Snowmass 2013
We present several benchmark points in the phenomenological Minimal
Supersymmetric Standard Model (pMSSM). We select these models as experimentally
well-motivated examples of the MSSM which predict the observed Higgs mass and
dark matter relic density while evading the current LHC searches. We also use
benchmarks to generate spokes in parameter space by scaling the mass parameters
in a manner which keeps the Higgs mass and relic density approximately
constant.

Comment: 10 pages, 6 figures.
