Human Anti-Chimeric Antibody in Children and Young Adults with Inflammatory Bowel Disease Receiving Infliximab
Electro-Magnetic Nucleon Form Factors and their Spectral Functions in Soliton Models
It is demonstrated that in simple soliton models essential features of the
electro-magnetic nucleon form factors observed over three orders of magnitude
in momentum transfer are naturally reproduced. The analysis shows that
three basic ingredients are required: an extended object, partial coupling to
vector mesons, and relativistic recoil corrections. We use for the extended
object the standard skyrmion, one vector meson propagator for both isospin
channels, and the relativistic boost to the Breit frame. Continuation to the
timelike region leads to quite stable results for the spectral functions in the
regime from the 2- or 3-pion threshold to about two rho masses. In particular,
the onset of the continuous part of the spectral functions at threshold can be
reliably determined, and there are strong analogies to the results imposed on
dispersion-theoretic approaches by the unitarity constraint.
Comment: 24 pages (RevTeX), 5 PS figures; data points in fig. 2 and corresponding references added. Final version, to be published in Z. Physik
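One common prescription for the relativistic recoil correction mentioned above (a sketch only; the exponent n and the precise form are model-dependent and are not specified in the abstract) boosts the intrinsic rest-frame form factor to the Breit frame via

```latex
G(Q^2) \;=\; \gamma^{-2n}\, G_{\mathrm{intr}}\!\left(\frac{Q^2}{\gamma^2}\right),
\qquad \gamma^2 \;=\; 1 + \frac{Q^2}{4M^2},
```

so the argument of the intrinsic form factor saturates at 4M^2 for large Q^2, which softens the rapid falloff of a static soliton profile at high momentum transfer.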
Portfolio selection problems in practice: a comparison between linear and quadratic optimization models
Several portfolio selection models take into account practical limitations on
the number of assets to include and on their weights in the portfolio. We
present here a study of the Limited Asset Markowitz (LAM), of the Limited Asset
Mean Absolute Deviation (LAMAD) and of the Limited Asset Conditional
Value-at-Risk (LACVaR) models, where the assets are limited with the
introduction of quantity and cardinality constraints. We propose a completely
new approach for solving the LAM model, based on reformulation as a Standard
Quadratic Program and on some recent theoretical results. With this approach we
obtain optimal solutions both for some well-known financial data sets used by
several other authors, and for some unsolved large size portfolio problems. We
also test our method on five new data sets involving real-world capital market
indices from major stock markets. Our computational experience shows that,
rather unexpectedly, it is easier to solve the quadratic LAM model with our
algorithm than to solve the linear LACVaR and LAMAD models with CPLEX, one of
the best commercial codes for mixed integer linear programming (MILP) problems.
Finally, on the new data sets we have also compared, using out-of-sample
analysis, the performance of the portfolios obtained by the Limited Asset
models with the performance provided by the unconstrained models and with that
of the official capital market indices.
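The quantity and cardinality constraints mentioned above can be made explicit. A generic sketch of the Limited Asset Markowitz model as a mixed-integer quadratic program (notation hypothetical, not taken from the paper), with covariance matrix Sigma, expected returns mu, target return rho, quantity bounds [l_i, u_i], and cardinality bound K:

```latex
\begin{aligned}
\min_{w,\,y} \quad & w^{\top}\Sigma\, w \\
\text{s.t.} \quad & \mu^{\top} w \ge \rho, \qquad \mathbf{1}^{\top} w = 1, \\
& l_i\, y_i \le w_i \le u_i\, y_i, \qquad i = 1, \dots, n, \\
& \textstyle\sum_{i=1}^{n} y_i \le K, \qquad y_i \in \{0, 1\}.
\end{aligned}
```

The binary variable y_i switches asset i in or out of the portfolio, so the linking constraint enforces the quantity bounds exactly when the asset is selected.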
A Closed-Form Solution of the Multi-Period Portfolio Choice Problem for a Quadratic Utility Function
In the present paper, we derive a closed-form solution of the multi-period
portfolio choice problem for a quadratic utility function with and without a
riskless asset. All results are derived under weak conditions on the asset
returns. No assumption on the correlation structure between different time
points is needed and no assumption on the distribution is imposed. All
expressions are presented in terms of the conditional mean vectors and the
conditional covariance matrices. If the multivariate process of the asset
returns is independent, it is shown that, in the case without a riskless asset,
the solution is given by a sequence of optimal portfolio weights obtained
by solving the single-period Markowitz optimization problem. The process
dynamics are included only in the shape parameter of the utility function. If a
riskless asset is present then the multi-period optimal portfolio weights are
proportional to the single-period solutions multiplied by time-varying
constants that depend on the process dynamics. Remarkably, in the case of
portfolio selection with the tangency portfolio, the multi-period solution
coincides with the sequence of single-period solutions. Finally, we compare
the suggested strategies with existing multi-period portfolio allocation
methods for real data.
Comment: 38 pages, 9 figures, 3 tables; changes: VAR(1)-CCC-GARCH(1,1) process dynamics and the analysis of increasing horizon are included in the simulation study; under revision in Annals of Operations Research
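As an illustration of the tangency-portfolio case described above, a minimal sketch (hypothetical data; this is the textbook single-period tangency formula, not code from the paper) of the single-period weights that, per the result above, the multi-period solution repeats at each date:

```python
import numpy as np

def tangency_weights(mu, sigma, rf):
    """Single-period tangency portfolio: w proportional to
    Sigma^{-1} (mu - rf * 1), normalized so the weights sum to 1."""
    excess = mu - rf                      # excess expected returns
    raw = np.linalg.solve(sigma, excess)  # Sigma^{-1} (mu - rf)
    return raw / raw.sum()

# Hypothetical example with three assets
mu = np.array([0.08, 0.10, 0.12])         # expected returns
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])    # covariance matrix
rf = 0.02                                 # riskless rate
w = tangency_weights(mu, sigma, rf)
print(w)
```

Under independent returns, re-solving this single-period problem at each rebalancing date reproduces the multi-period tangency solution described in the abstract.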
Display of probability densities for data from a continuous distribution
Based on cumulative distribution functions, Fourier series expansion and
Kolmogorov tests, we present a simple method to display probability densities
for data drawn from a continuous distribution. It is often more efficient than
using histograms.
Comment: 5 pages, 4 figures, presented at Computer Simulation Studies XXIV, Athens, GA, 201
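A minimal sketch of the idea, assuming the data have already been mapped to [0, 1] (e.g. via a reference CDF) and omitting the Kolmogorov-test selection of how many terms to keep; the coefficient formula below is the standard cosine orthogonal-series estimator obtained by expanding the empirical CDF minus u in sines and differentiating term by term:

```python
import numpy as np

def fourier_density(data, n_terms=5, grid=None):
    """Cosine-series density estimate for data on [0, 1].

    Expanding F_emp(u) - u in sin(pi*k*u) and differentiating gives
    f(u) = 1 + 2 * sum_k mean(cos(pi*k*x_i)) * cos(pi*k*u).
    """
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    x = np.asarray(data)
    f = np.ones_like(grid)
    for k in range(1, n_terms + 1):
        c_k = np.cos(np.pi * k * x).mean()   # series coefficient
        f += 2.0 * c_k * np.cos(np.pi * k * grid)
    return grid, f

# Midpoint-uniform data should give a flat density close to 1
data = (np.arange(1000) + 0.5) / 1000.0
grid, f = fourier_density(data)
```

Smoothness comes from truncating the series, which is where a term-selection test (as in the abstract) would decide how many coefficients are statistically justified.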
Theory of disk accretion onto supermassive black holes
Accretion onto supermassive black holes produces both the dramatic phenomena
associated with active galactic nuclei and the underwhelming displays seen in
the Galactic Center and most other nearby galaxies. I review selected aspects
of the current theoretical understanding of black hole accretion, emphasizing
the role of magnetohydrodynamic turbulence and gravitational instabilities in
driving the actual accretion and the importance of the efficacy of cooling in
determining the structure and observational appearance of the accretion flow.
Ongoing investigations into the dynamics of the plunging region, the origin of
variability in the accretion process, and the evolution of warped, twisted, or
eccentric disks are summarized.
Comment: Mostly introductory review, to appear in "Supermassive black holes in the distant Universe", ed. A.J. Barger, Kluwer Academic Publishers, in press
Performance of the CMS Cathode Strip Chambers with Cosmic Rays
The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device
in the CMS endcaps. Their performance has been evaluated using data taken
during a cosmic ray run in fall 2008. Measured noise levels are low, with the
number of noisy channels well below 1%. Coordinate resolutions were measured
for all types of chambers and fall in the range from 47 microns to 243 microns.
The efficiencies for local charged-track triggers, for hit reconstruction, and
for segment reconstruction were measured, and all are above 99%. The timing
resolution per layer is approximately 5 ns.
Predictive functional profiling of microbial communities using 16S rRNA marker gene sequences
Profiling phylogenetic marker genes, such as the 16S rRNA gene, is a key tool for studies of microbial communities but does not provide direct evidence of a community’s functional capabilities. Here we describe PICRUSt (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States), a computational approach to predict the functional composition of a metagenome using marker gene data and a database of reference genomes. PICRUSt uses an extended ancestral-state reconstruction algorithm to predict which gene families are present and then combines gene families to estimate the composite metagenome. Using 16S information, PICRUSt recaptures key findings from the Human Microbiome Project and accurately predicts the abundance of gene families in host-associated and environmental communities, with quantifiable uncertainty. Our results demonstrate that phylogeny and function are sufficiently linked that this ‘predictive metagenomic’ approach should provide useful insights into the thousands of uncultivated microbial communities for which only marker gene surveys are currently available.
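The core prediction step reduces to a copy-number normalization followed by a matrix product. A toy sketch (hypothetical numbers; the ancestral-state reconstruction that fills the per-genome gene table, and the confidence intervals, are not shown):

```python
import numpy as np

# Toy data: 3 samples x 2 OTUs of 16S counts, per-OTU 16S copy numbers,
# and predicted gene-family counts per OTU genome (2 OTUs x 4 families).
otu_counts = np.array([[10.0, 0.0],
                       [ 5.0, 5.0],
                       [ 0.0, 8.0]])
copy_number = np.array([2.0, 4.0])        # 16S copies per genome (hypothetical)
gene_table = np.array([[1.0, 0.0, 3.0, 2.0],
                       [0.0, 2.0, 1.0, 2.0]])

# Step 1: correct 16S counts for copy number to estimate organism abundance
organism_abund = otu_counts / copy_number
# Step 2: weight per-genome gene counts by abundance and sum over OTUs
metagenome = organism_abund @ gene_table  # samples x gene families
print(metagenome)
```

Each row of the result is the predicted gene-family abundance profile for one sample, i.e. the estimated composite metagenome described in the abstract.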
Impact of technology-based interventions for children and young people with type 1 diabetes on key diabetes self-management behaviours and prerequisites: A systematic review
Background
The role of technology in the self-management of type 1 diabetes mellitus (T1DM) among children and young people is not well understood. Interventions should aim to improve key diabetes self-management behaviours (self-management of blood glucose, insulin administration, physical activity and dietary behaviours) and prerequisites (psychological outcomes and HbA1c) highlighted in the UK guidelines of the National Institute for Health and Care Excellence (NICE) for management of T1DM. The purpose was to identify evidence to assess the effectiveness of technological tools in promoting aspects of these guidelines amongst children and young people.
Methods
A systematic review of English language articles was conducted using the following databases: Web of Science, PubMed, Scopus, NUSearch, SAGE Journals, SpringerLink, Google Scholar, Science Direct, Sport Discus, Embase, Psychinfo and Cochrane Trials. Search terms included paediatric, type one diabetes, technology, intervention and various synonyms. Included studies examined interventions which supplemented usual care with a health care strategy primarily delivered through a technology-based medium (e.g. mobile phone, website, activity monitor) with the aim of engaging children and young people with T1DM directly in their diabetes healthcare. Studies did not need to include a comparator condition and could be randomised, non-randomised or cohort studies but not single-case studies.
Results
Of 30 included studies (21 RCTs), the majority measured self-monitoring of blood glucose (SMBG) frequency, clinical indicators of diabetes self-management (e.g. HbA1c) and/or psychological or cognitive outcomes. The most positive findings were associated with technology-based health interventions targeting SMBG as a behavioural outcome, with some benefits found for clinical and/or psychological diabetes self-management outcomes. Technological interventions were well accepted by children and young people. For the majority of included outcomes, clinical relevance was deemed to be little or none.
Conclusions
More research is required to assess which elements of interventions are most likely to produce beneficial behavioural outcomes. To produce clinically relevant outcomes, interventions may need to be delivered for at least 1 year and should consider targeting individuals with poorly managed diabetes. It is not possible to determine the impact of technology-based interventions on insulin administration, dietary habits and/or physical activity behaviour due to lack of evidence.
Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV
The performance of muon reconstruction, identification, and triggering in CMS
has been studied using 40 inverse picobarns of data collected in pp collisions
at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection
criteria covering a wide range of physics analysis needs have been examined.
For all considered selections, the efficiency to reconstruct and identify a
muon with a transverse momentum pT larger than a few GeV is above 95% over the
whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4,
while the probability to misidentify a hadron as a muon is well below 1%. The
efficiency to trigger on single muons with pT above a few GeV is higher than
90% over the full eta range, and typically substantially better. The overall
momentum scale is measured to a precision of 0.2% with muons from Z decays. The
transverse momentum resolution varies from 1% to 6% depending on pseudorapidity
for muons with pT below 100 GeV and, using cosmic rays, it is shown to be
better than 10% in the central region up to pT = 1 TeV. Observed distributions
of all quantities are well reproduced by the Monte Carlo simulation.
Comment: Replaced with published version. Added journal reference and DOI
