A dynamical trichotomy for structured populations experiencing positive density-dependence in stochastic environments
Positive density-dependence occurs when individuals experience increased
survivorship, growth, or reproduction with increased population densities.
Mechanisms leading to these positive relationships include mate limitation,
saturating predation risk, and cooperative breeding and foraging. Individuals
within these populations may differ in age, size, or geographic location and
thereby structure these populations. Here, I study structured population models
accounting for positive density-dependence and environmental stochasticity, i.e.,
random fluctuations in the demographic rates of the population. Under an
accessibility assumption (roughly, stochastic fluctuations can drive the
population to both low and high densities), these models are shown to exhibit a
dynamical trichotomy: (i) for all initial conditions, the population goes
asymptotically extinct with probability one, (ii) for all positive initial
conditions, the population persists and asymptotically exhibits unbounded
growth, and (iii) for all positive initial conditions, there is a positive
probability of asymptotic extinction and a complementary positive probability
of unbounded growth. The main results are illustrated with applications to
spatially structured populations with an Allee effect and age-structured
populations experiencing mate limitation.
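To make the trichotomy concrete, the following is a minimal simulation sketch of a two-stage (juvenile/adult) model with mate-limited reproduction and lognormal environmental noise; the functional forms and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(n, rng):
    """One time step of an illustrative two-stage (juvenile/adult) model.

    Per-capita reproduction 2*adult/(1 + adult) rises with density
    (mate limitation), giving positive density-dependence; the random
    multiplier `env` models environmental stochasticity.
    """
    juv, adult = n
    env = rng.lognormal(mean=0.0, sigma=0.5)
    births = env * 2.0 * adult * adult / (1.0 + adult)
    return np.array([births, 0.5 * juv + 0.8 * adult])

extinct = unbounded = 0
for _ in range(1000):
    n = np.array([0.0, 5.0])
    for _ in range(200):
        n = step(n, rng)
    if n.sum() < 1e-6:
        extinct += 1
    elif n.sum() > 1e6:
        unbounded += 1

# Case (i) corresponds to extinct == 1000, case (ii) to unbounded == 1000,
# and case (iii) to both counters being positive.
print(f"extinct: {extinct}/1000, unbounded growth: {unbounded}/1000")
```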
Quadratic optimal functional quantization of stochastic processes and numerical applications
In this paper, we present an overview of the recent developments of
functional quantization of stochastic processes, with an emphasis on the
quadratic case. Functional quantization is a way to approximate a process,
viewed as a Hilbert-valued random variable, using a nearest neighbour
projection on a finite codebook. Special emphasis is placed on the
computational aspects and the numerical applications, in particular the pricing
of some path-dependent European options.
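As a toy illustration of the quadratic case, the sketch below builds an N-point codebook for discretized Brownian paths using Lloyd's algorithm (k-means), which targets the quadratic distortion, and then projects each path onto its nearest codeword. The constructions surveyed in the paper (e.g. product quantizers built from the Karhunen-Loève expansion) are more refined than this generic version.

```python
import numpy as np

rng = np.random.default_rng(1)

# M Brownian paths on [0, 1], discretized on a k-point grid.
M, k, N = 5000, 64, 16                 # paths, grid points, codebook size
dt = 1.0 / k
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(M, k)), axis=1)

# Lloyd's algorithm: alternate nearest-neighbour assignment and centroid
# updates; each sweep decreases the quadratic distortion E|X - q(X)|^2
# in the discretized L^2([0,1]) norm.
codebook = paths[rng.choice(M, size=N, replace=False)].copy()
for _ in range(50):
    d2 = ((paths[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    for j in range(N):
        cell = paths[nearest == j]
        if len(cell):
            codebook[j] = cell.mean(axis=0)

# Nearest-neighbour projection onto the final codebook and its distortion.
d2 = ((paths[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
distortion = dt * d2.min(axis=1).mean()
print(f"N = {N} codewords, quadratic distortion ~ {distortion:.4f}")
```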
Sequential Deliberation for Social Choice
In large-scale collective decision making, social choice is a normative study
of how one ought to design a protocol for reaching consensus. However, in
instances where the underlying decision space is too large or complex for
ordinal voting, standard voting methods of social choice may be impractical.
How then can we design a mechanism - preferably decentralized, simple,
scalable, and not requiring any special knowledge of the decision space - to
reach consensus? We propose sequential deliberation as a natural solution to
this problem. In this iterative method, successive pairs of agents bargain over
the decision space using the previous decision as a disagreement alternative.
We describe the general method and analyze the quality of its outcome when the
space of preferences defines a median graph. We show that sequential
deliberation finds a 1.208-approximation to the optimal social cost on such
graphs, coming very close to this value with only a small constant number of
agents sampled from the population. We also show lower bounds on simpler
classes of mechanisms to justify our design choices. We further show that
sequential deliberation is ex-post Pareto efficient and has truthful reporting
as an equilibrium of the induced extensive form game. We finally show that for
general metric spaces, the second moment of the distribution of social cost
of the outcomes produced by sequential deliberation is also bounded.
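For intuition, here is a small sketch of sequential deliberation on a one-dimensional decision spectrum (a path graph, which is a median graph); each bargaining step is modeled as the median of the two agents' ideal points and the disagreement alternative, an illustrative stand-in for the Nash bargaining step analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def median3(a, b, c):
    """Median of three points on the line (a path graph is a median graph)."""
    return sorted((a, b, c))[1]

# Agents' ideal points on a 1-D decision spectrum.
peaks = rng.normal(0.0, 1.0, size=10_000)

def social_cost(x):
    """Sum of distances from alternative x to all agents' ideal points."""
    return np.abs(peaks - x).sum()

# Sequential deliberation: in each round two sampled agents bargain,
# using the previous round's outcome as the disagreement alternative.
outcome = rng.choice(peaks)          # arbitrary initial alternative
for _ in range(20):                  # a small constant number of rounds
    u, v = rng.choice(peaks, size=2, replace=False)
    outcome = median3(u, v, outcome)

opt = np.median(peaks)               # optimal alternative on the line
print(f"cost ratio vs. optimum: {social_cost(outcome) / social_cost(opt):.3f}")
```

The printed ratio can be compared against the paper's 1.208 bound for median graphs.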
Preliminary definitions for the sonographic features of synovitis in children
Objectives: Musculoskeletal ultrasonography (US) has the potential to be an important tool in the assessment of disease activity in childhood arthritides. To assess pathology, clear definitions for synovitis need to be developed first. The aim of this study was to develop and validate these definitions through an international consensus process.
Methods: The decision on which US techniques to use, the components to be included in the definitions, as well as the final wording, were developed by 31 ultrasound experts in a consensus process. A Likert scale of 1-5, with 1 indicating complete disagreement and 5 complete agreement, was used. A minimum of 80% of the experts scoring 4 or 5 was required for final approval. The definitions were then validated on 120 standardized US images of the wrist, MCP and tibiotalar joints displaying various degrees of synovitis at various ages.
Results: B-mode and Doppler should be used for assessing synovitis in children. A US definition of the various components (i.e. synovial hypertrophy, effusion and Doppler signal within the synovium) was developed. The definition was validated on still images, with a median of 89% (range 80-100%) of participants scoring it as 4 or 5 on a Likert scale.
Conclusions: US definitions of synovitis and its elementary components covering the entire pediatric age range were successfully developed through a Delphi process and validated in a web-based still-images exercise. These results provide the basis for the standardized US assessment of synovitis in clinical practice and research.
Drop Traffic in Microfluidic Ladder Networks with Fore-Aft Structural Asymmetry
We investigate the dynamics of pairs of drops in microfluidic ladder networks
with slanted bypasses, which break the fore-aft structural symmetry. Our
analytical results indicate that unlike symmetric ladder networks, structural
asymmetry introduced by a single slanted bypass can be used to modulate the
relative drop spacing, enabling them to contract, synchronize, expand, or even
flip at the ladder exit. Our experiments confirm all these behaviors predicted
by theory. Numerical analysis further shows that while ladder networks
containing several identical bypasses are limited to a nearly linear
transformation of the input delay between drops, mixed combinations of
bypasses can cause significant non-linear transformations, enabling coding
and decoding of input delays.
Video enhancement using adaptive spatio-temporal connective filter and piecewise mapping
This paper presents a novel video enhancement system based on an adaptive spatio-temporal connective (ASTC) noise filter and an adaptive piecewise mapping function (APMF). For ill-exposed videos or those with much noise, we first introduce a novel local image statistic to identify impulse noise pixels, and then incorporate it into the classical bilateral filter to form ASTC, aiming to reduce a mixture of the two most common types of noise - Gaussian and impulse - in the spatial and temporal directions. After noise removal, we enhance the video contrast with APMF based on the statistical information of frame segmentation results. The experimental results demonstrate that, for diverse low-quality videos corrupted by mixed noise, underexposure, overexposure, or any mixture of the above, the proposed system can automatically produce satisfactory results.
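The ASTC statistic itself is not reproduced here, but the general idea of coupling an impulse detector to a bilateral filter can be sketched as follows: pixels that deviate strongly from their local median are flagged as impulse noise and excluded from the range weighting. The sketch is spatial-only (the temporal direction is omitted), and the detector, thresholds, and parameter values are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from scipy.ndimage import median_filter

def bilateral_impulse(frame, radius=2, sigma_s=2.0, sigma_r=20.0, tau=40.0):
    """Bilateral filter that down-weights suspected impulse pixels.

    A pixel deviating from its local median by more than `tau` is
    flagged as impulse noise: it gets zero weight as a neighbour, and
    its own centre value is replaced by the local median.
    """
    med = median_filter(frame, size=2 * radius + 1)
    impulse = np.abs(frame.astype(float) - med) > tau
    h, w = frame.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(frame.astype(float), radius, mode="reflect")
    pimp = np.pad(impulse, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            wimp = pimp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            centre = med[i, j] if impulse[i, j] else frame[i, j]
            range_w = np.exp(-(win - centre)**2 / (2 * sigma_r**2))
            wgt = spatial * range_w * ~wimp
            out[i, j] = (wgt * win).sum() / max(wgt.sum(), 1e-12)
    return out
```

This would be applied frame by frame to a grayscale video; the APMF contrast-mapping stage is not sketched.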
Hydrostatic pressure does not cause detectable changes to survival of human retinal ganglion cells
Purpose: Elevated intraocular pressure (IOP) is a major risk factor for glaucoma. One consequence of raised IOP is that ocular tissues are subjected to increased hydrostatic pressure (HP). The effect of raised HP on stress pathway signaling and retinal ganglion cell (RGC) survival in the human retina was investigated.
Methods: A chamber was designed to expose cells to increased HP (constant and fluctuating). Accurate pressure control (10-100 mmHg) was achieved using mass flow controllers. Human organotypic retinal cultures (HORCs) from donor eyes (<24h post mortem) were cultured in serum-free DMEM/HamF12. Increased HP was compared to simulated ischemia (oxygen glucose deprivation, OGD). Cell death and apoptosis were measured by LDH and TUNEL assays, RGC marker expression by qRT-PCR (THY-1) and RGC number by immunohistochemistry (NeuN). Activated p38 and JNK were detected by Western blot.
Results: Exposure of HORCs to constant (60 mmHg) or fluctuating (10-100 mmHg; 1 cycle/min) pressure for 24 or 48h caused no loss of structural integrity, LDH release, decrease in RGC marker expression (THY-1) or loss of RGCs compared with controls. In addition, there was no increase in TUNEL-positive NeuN-labelled cells at either time-point, indicating no increase in apoptosis of RGCs. OGD increased apoptosis, reduced RGC marker expression and RGC number, and caused elevated LDH release at 24h. p38 and JNK phosphorylation remained unchanged in HORCs exposed to fluctuating pressure (10-100 mmHg; 1 cycle/min) for 15, 30, 60 and 90 min durations, whereas OGD (3h) increased activation of p38 and JNK, which remained elevated for 90 min post-OGD.
Conclusions: Directly applied HP had no detectable impact on RGC survival and stress signaling in HORCs. Simulated ischemia, however, activated stress pathways and caused RGC death. These results show that direct HP does not cause degeneration of RGCs in the ex vivo human retina.
New Mechanics of Traumatic Brain Injury
The prediction and prevention of traumatic brain injury is a very important
aspect of preventive medical science. This paper proposes a new coupled
loading-rate hypothesis for traumatic brain injury (TBI), which states that
the main cause of TBI is an external Euclidean jolt, or SE(3)-jolt, an
impulsive loading that strikes the head in several coupled degrees of freedom
simultaneously. To show this, based on the previously defined covariant force
law, we formulate the coupled Newton-Euler dynamics of the brain's
micro-motions within the cerebrospinal fluid and derive from it the coupled
SE(3)-jolt dynamics. The SE(3)-jolt causes TBI through two forms of rapid,
discontinuous brain deformation: translational dislocations and rotational
disclinations. These dislocations and disclinations are described using the
Cosserat multipolar viscoelastic continuum brain model.
Keywords: Traumatic brain injuries, coupled loading-rate hypothesis,
Euclidean jolt, coupled Newton-Euler dynamics, brain's dislocations and
disclinations.
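As a rough numerical illustration of the jolt concept: a jolt is the time derivative of the loading, and the SE(3) viewpoint treats the translational and rotational parts as one coupled object, so a crude proxy can be computed by differentiating a 6-DOF acceleration recording as a whole. The synthetic signals and the scalar norm below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Synthetic 6-DOF head-motion recording: linear acceleration (m/s^2)
# and angular acceleration (rad/s^2) sampled at fs Hz. These signals
# are placeholders, not data from the paper.
fs = 1000.0
t = np.arange(0.0, 0.05, 1.0 / fs)
lin_acc = np.stack([200.0 * np.exp(-((t - 0.02) / 0.005) ** 2)] * 3)
ang_acc = np.stack([5000.0 * np.exp(-((t - 0.02) / 0.004) ** 2)] * 3)

# Treat translations and rotations as one coupled 6-DOF object and
# differentiate the full acceleration in time, rather than each
# component alone.
acc6 = np.vstack([lin_acc, ang_acc])         # shape (6, len(t))
jolt6 = np.gradient(acc6, 1.0 / fs, axis=1)  # time derivative

# Crude scalar severity proxy (note: it mixes m/s^3 and rad/s^3 units).
print("peak coupled-jolt norm:", np.linalg.norm(jolt6, axis=0).max())
```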
A retrospective observational study to determine baseline characteristics and early prescribing patterns for patients receiving Alirocumab in UK clinical practice
Background: Alirocumab is a fully human monoclonal antibody to proprotein convertase subtilisin/kexin type 9 (PCSK9) and has been previously shown, in the phase III ODYSSEY clinical trial program, to provide significant lowering of low-density lipoprotein cholesterol (LDL-C) and reduction in the risk of major adverse cardiovascular events. However, real-world evidence to date is limited.
Objective: The primary objective was to describe the baseline characteristics, clinical history, and prior lipid-lowering therapy (LLT) use of patients initiated on alirocumab in UK clinical practice following publication of health technology appraisal (HTA) body recommendations. Secondary objectives included description of alirocumab use and lipid parameter outcomes over a 4-month follow-up period.
Methods: In this retrospective, single-arm, observational, multicenter study, data were collected for 150 patients initiated on alirocumab.
Results: Mean (standard deviation; SD) age of patients was 61.4 (10.5) years and baseline median (interquartile range; IQR) LDL-C level was 4.8 (4.2–5.8) mmol/l. Alirocumab use occurred predominantly in patients with heterozygous familial hypercholesterolemia (HeFH) (n = 100/150, 66%) and those with statin intolerance (n = 123/150, 82%). Most patients started on alirocumab 75 mg (n = 108/150 [72%]) and 35 (23.3%) were up-titrated to 150 mg. Clinically significant reductions in atherogenic lipid parameters were observed with alirocumab, including LDL-C (median [IQR] change from baseline, −53.6% [−62.9 to −34.9], P < 0.001).
Conclusion: This study highlights the unmet need for additional LLT in patients with uncontrolled hyperlipidemia and demonstrates the clinical utility of alirocumab in early real-world practice, where dosing flexibility is an important attribute of this therapeutic option.
Combination of electroweak and QCD corrections to single W production at the Fermilab Tevatron and the CERN LHC
Precision studies of the production of a high transverse momentum lepton in
association with missing energy at hadron colliders require that electroweak
and QCD higher-order contributions are simultaneously taken into account in
theoretical predictions and data analysis. Here we present a detailed
phenomenological study of the impact of electroweak and strong contributions,
as well as of their combination, on all the observables relevant for the
various facets of the $p\bar{p}/pp \to {\rm lepton} + X$ physics programme at
hadron colliders, including luminosity monitoring, the constraining of Parton
Distribution Functions, precision physics and the search for new physics signals.
We provide a theoretical recipe for carefully combining electroweak and strong
corrections, which are mandatory in view of the challenging experimental
accuracy already reached at the Fermilab Tevatron and targeted at the CERN LHC,
and discuss the uncertainty inherent in the combination. We conclude that the
theoretical accuracy of our calculation can be conservatively estimated to be
about 2% for standard event selections at the Tevatron and the LHC, and about
5% in the very high transverse mass/lepton transverse momentum tails. We
also provide arguments for a more aggressive error estimate (about 1% and 3%,
respectively) and conclude that, in order to attain one per cent accuracy: 1)
exact mixed QCD-electroweak corrections should be computed in
addition to the already available NNLO QCD contributions and two-loop
electroweak Sudakov logarithms; 2) QCD and electroweak corrections should be
coherently included in a single event generator.
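For context, two standard prescriptions for combining QCD and electroweak corrections to a differential cross section are the additive and the factorized one; whether either matches the paper's exact recipe is not claimed here, but their spread is a common estimate of the combination uncertainty mentioned above.

```latex
% Additive vs. factorized combination of relative corrections
% \delta_{QCD} and \delta_{EW} (illustrative):
d\sigma^{\mathrm{add}}  = d\sigma^{\mathrm{LO}}\,
    \bigl(1 + \delta_{\mathrm{QCD}} + \delta_{\mathrm{EW}}\bigr),
\qquad
d\sigma^{\mathrm{fact}} = d\sigma^{\mathrm{LO}}\,
    \bigl(1 + \delta_{\mathrm{QCD}}\bigr)\bigl(1 + \delta_{\mathrm{EW}}\bigr)
```

The difference between the two, $d\sigma^{\mathrm{LO}}\,\delta_{\mathrm{QCD}}\,\delta_{\mathrm{EW}}$, is of the mixed order whose exact computation point 1) above calls for.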
