Elective Open Suprarenal Aneurysm Repair in England from 2000 to 2010: an Observational Study of Hospital Episode Statistics
Background: Open surgery is widely used as a benchmark for the results of fenestrated endovascular repair of complex abdominal aortic aneurysms (AAA). However, the existing evidence stems from single-centre experiences, and may not be reproducible in wider practice. National outcomes provide valuable information regarding the safety of suprarenal aneurysm repair.
Methods: Demographic and clinical data were extracted from English Hospital Episode Statistics for patients undergoing elective suprarenal aneurysm repair from 1 April 2000 to 31 March 2010. Thirty-day mortality and five-year survival were analysed by logistic regression and Cox proportional hazards modelling.
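As a rough illustration of this modelling approach, a minimal Python sketch using statsmodels and lifelines might look like the following; the HES extract itself is not public, so the data frame, column names and effect sizes below are invented stand-ins, not study values:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# Synthetic stand-in for the HES cohort; all columns and effect sizes invented.
rng = np.random.default_rng(0)
n = 793
df = pd.DataFrame({
    'age': rng.normal(74, 7, n),
    'renal_disease': rng.integers(0, 2, n),
    'previous_mi': rng.integers(0, 2, n),
})
risk = -2.5 + 0.04 * (df['age'] - 74) + 0.8 * df['renal_disease'] + 0.5 * df['previous_mi']
df['died_30d'] = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)
df['years'] = np.clip(rng.exponential(5, n), None, 5.0)   # follow-up censored at 5 years
df['event_5y'] = (df['years'] < 5.0).astype(int)

# 30-day mortality: logistic regression; exponentiated coefficients are odds ratios.
logit = sm.Logit(df['died_30d'],
                 sm.add_constant(df[['age', 'renal_disease', 'previous_mi']])).fit()
print(np.exp(logit.params))

# 5-year survival: Cox proportional hazards; the summary reports hazard ratios.
cph = CoxPHFitter()
cph.fit(df[['years', 'event_5y', 'age', 'renal_disease', 'previous_mi']],
        duration_col='years', event_col='event_5y')
cph.print_summary()
```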
Results: 793 patients underwent surgery, with an overall 30-day mortality of 14% that did not improve over the study period. Independent predictors of 30-day mortality included age, renal disease and previous myocardial infarction. Five-year survival was independently reduced by age, renal disease, liver disease, chronic pulmonary disease, and known metastatic solid tumour. There was significant regional variation in both 30-day mortality and five-year survival after risk adjustment. Regional differences in outcome were eliminated in a sensitivity analysis for perioperative outcome, conducted by restricting the analysis to survivors of the first 30 days after surgery.
Conclusions: Elective suprarenal aneurysm repair was associated with considerable mortality and significant regional variation across England. These data provide a benchmark for assessing the efficacy of complex endovascular repair of suprarenal aneurysms, though cautious interpretation is required given the lack of information regarding aneurysm morphology. More detailed study is required, ideally through mandatory submission of data to a national registry of suprarenal aneurysm repair.
A review of tennis racket performance parameters
The application of advanced engineering to tennis racket design has influenced the nature of the sport. As a result, the International Tennis Federation has established rules to limit performance, with the aim of protecting the nature of the game. This paper illustrates how changes to the racket affect the player-racket system. The review integrates engineering and biomechanical issues related to tennis racket performance, covering the biomechanical characteristics of tennis strokes, tennis racket performance, the effect of racket parameters on ball rebound, and biomechanical interactions. Racket properties influence the rebound of the ball. Ball rebound speed increases with frame stiffness and as string tension decreases. Reducing inter-string contacting forces increases rebound topspin. Historical trends and predictive modelling indicate swingweights of around 0.030–0.035 kg·m² are best for high ball speed and accuracy. To fully understand the effect of their design changes, engineers should use impact conditions in their experiments, or models, which reflect those of actual tennis strokes. Sports engineers, therefore, benefit from working closely with biomechanists to ensure realistic impact conditions.
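One common way to fold frame stiffness, string-bed properties and effective mass into a single rebound number is the apparent coefficient of restitution (ACOR) at the impact point. A minimal sketch of the standard rigid-body rebound relation follows; the ACOR values and speeds are illustrative assumptions, not figures from the review:

```python
def rebound_speed(v_ball: float, v_racket: float, acor: float = 0.4) -> float:
    """Outbound ball speed (m/s) for a head-on impact.

    Standard apparent-COR relation: v_out = e_A * v_ball + (1 + e_A) * v_racket,
    where v_ball is the incoming ball speed and v_racket the speed of the
    racket at the impact point, both measured in the ground frame.
    """
    return acor * v_ball + (1 + acor) * v_racket

# A stiffer frame or lower string tension raises the effective ACOR,
# and hence the rebound speed, for the same stroke:
for acor in (0.35, 0.40, 0.45):
    print(acor, rebound_speed(v_ball=20.0, v_racket=25.0, acor=acor))
```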
Patterns of analgesic use, pain and self-efficacy: a cross-sectional study of patients attending a hospital rheumatology clinic
Background: Many people attending rheumatology clinics use analgesics and non-steroidal anti-inflammatories for persistent musculoskeletal pain. Guidelines for pain management recommend regular and pre-emptive use of analgesics to reduce the impact of pain. Clinical experience indicates that analgesics are often not used in this way. Studies exploring use of analgesics in arthritis have historically measured adherence to such medication. Here we examine patterns of analgesic use and their relationships to pain, self-efficacy and demographic factors.
Methods: Consecutive patients were approached in a hospital rheumatology out-patient clinic. Pattern of analgesic use was assessed by response to statements such as 'I always take my tablets every day.' Pain and self-efficacy (SE) were measured using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and the Arthritis Self-Efficacy Scale (ASES). The influence of factors on pain level and regularity of analgesic use was investigated using linear regression. Differences in pain between those agreeing and disagreeing with statements regarding analgesic use were assessed using t-tests.
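In outline, this analysis pipeline is straightforward to reproduce; a hedged Python sketch on invented data follows (the variable names mirror the abstract, but the values do not come from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Invented stand-in data for illustration only.
rng = np.random.default_rng(1)
n = 218
df = pd.DataFrame({
    'womac_pain': rng.uniform(0, 20, n),    # WOMAC pain subscale
    'ases': rng.uniform(10, 100, n),        # Arthritis Self-Efficacy Scale
    'regular_use': rng.integers(0, 2, n),   # agrees: 'I always take my tablets every day'
    'age': rng.normal(62, 12, n),
})

# Multiple linear regression of pain on self-efficacy and regularity of use.
model = smf.ols('womac_pain ~ ases + regular_use + age', data=df).fit()
print(model.summary())

# t-test: pain in those agreeing vs disagreeing with a use statement.
agree = df.loc[df['regular_use'] == 1, 'womac_pain']
disagree = df.loc[df['regular_use'] == 0, 'womac_pain']
print(stats.ttest_ind(agree, disagree))
```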
Results: 218 patients (85% of attendees) completed the study. Six (2.8%) patients reported no current pain, 26 (12.3%) slight, 100 (47.4%) moderate, 62 (29.4%) severe and 17 (8.1%) extreme pain. In multiple linear regression, self-efficacy and regularity of analgesic use were significant (p < 0.01), with lower self-efficacy and more regular use of analgesics associated with more pain.
Low SE was associated with greater pain: 40 (41.7%) people with low SE reported severe pain versus 22 (18.3%) people with high SE, p < 0.001. Patients in greater pain were significantly more likely to take analgesics regularly; 13 (77%) of those in extreme pain reported always taking their analgesics every day, versus 9 (35%) in slight pain. Many patients, including 46% of those in severe pain, adjusted analgesic use to current pain level. In simple linear regression, pain was the only variable significantly associated with regularity of analgesic use: higher levels of pain corresponded to more regular analgesic use (p = 0.003).
Conclusion: Our study confirms that there is a strong inverse relationship between self-efficacy and pain severity. Analgesics are often used irregularly by people with arthritis, including some reporting severe pain.
Hierarchy measure for complex networks
Nature, technology and society are full of complexity arising from the intricate web of interactions among the units of the related systems (e.g., proteins, computers, people). Consequently, one of the most successful recent approaches to capturing the fundamental features of the structure and dynamics of complex systems has been the investigation of the networks associated with the above units (nodes) together with their relations (edges). Most complex systems have an inherently hierarchical organization and, correspondingly, the networks behind them also exhibit hierarchical features. Indeed, several papers have been devoted to describing this essential aspect of networks, but without converging on a widely accepted quantitative characterization of the level of their hierarchy. Here we develop an approach and propose a quantity (measure) which is simple enough to be widely applicable, reveals a number of universal features of the organization of real-world networks and, as we demonstrate, is capable of capturing the essential features of the structure and the degree of hierarchy in a complex network. The measure we introduce is based on a generalization of the m-reach centrality, which we first extend to directed and partially directed graphs. We then define the global reaching centrality (GRC) as the difference between the maximum and the average value of the generalized reach centralities over the network. We investigate the behavior of the GRC for both a synthetic model with an adjustable level of hierarchy and real networks. Results for real networks show that our hierarchy measure is related to the controllability of the given system. We also propose a visualization procedure for large complex networks that can be used to obtain an overall qualitative picture of the nature of their hierarchical structure.
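The GRC is easy to compute in practice: NetworkX ships a global_reaching_centrality function based on this measure. A short example on a random directed graph, with the same quantity computed by hand for transparency (the graph and its parameters are arbitrary):

```python
import networkx as nx

G = nx.gnp_random_graph(100, 0.05, seed=1, directed=True)

# Library implementation of the GRC.
print(nx.global_reaching_centrality(G))

# The same quantity by hand: for an unweighted digraph, the local reaching
# centrality of a node is the fraction of all other nodes it can reach, and
# the GRC is the average gap to the best-reaching node.
n = G.number_of_nodes()
reach = {v: len(nx.descendants(G, v)) / (n - 1) for v in G}
c_max = max(reach.values())
print(sum(c_max - c for c in reach.values()) / (n - 1))
```

A perfectly hierarchical structure such as an out-directed tree gives GRC close to 1 (the root reaches everything, the leaves reach nothing), while a homogeneous graph gives a value near 0.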
Coordinated optimization of visual cortical maps (I) Symmetry-based analysis
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of OP columns precisely follows species-invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explaining cortical functional architecture raises the following questions: (i) What are the genuine ground states of candidate energy functionals, and how can they be calculated with precision and rigor? (ii) How do differences in candidate optimization principles affect the predicted map structure, and conversely, what can be learned about a hypothetical underlying optimization principle from observations of map structure? (iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference.
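For concreteness, the generic shape of such a coupled energy functional can be sketched as follows; the notation is a minimal assumption for illustration, not the paper's specific model:

```latex
% Sketch: a coupled pattern-forming energy for a complex OP field z(x)
% and a real second feature field o(x) (e.g. OD); illustrative only.
E[z, o] = \int \mathrm{d}^2x \,\Big[
    \bar{z}\,\hat{L}_z z
  + o\,\hat{L}_o o
  + \tfrac{1}{2}\lvert z \rvert^{4}
  + \tfrac{1}{4} o^{4}
  + \beta\,\lvert z \rvert^{2} o^{2}
\Big],
\qquad
\hat{L}_a = \big(k_{c,a}^{2} + \Delta\big)^{2} - r_a ,
```

where each r_a sets the distance from the pattern-forming threshold of its map, k_{c,a} sets the map's typical wavenumber, and beta controls the strength of the inter-map interaction; gradient descent of such an energy yields coupled Swift-Hohenberg-type dynamics.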
Coordinated optimization of visual cortical maps (II) Numerical studies
It is an attractive hypothesis that the spatial structure of visual cortical architecture can be explained by the coordinated optimization of multiple visual cortical maps representing orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we defined a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the orientation preference map is induced by inter-map interactions. We found that attractor solutions near the symmetry-breaking threshold predict a highly ordered map layout and require a substantial OD bias for OP pinwheel stabilization. Here we examine in numerical simulations whether such models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold and when transients towards attractor states are considered. We also examine whether model behavior qualitatively changes when the spatial periodicities of the two maps are detuned and when more than two feature dimensions are considered. Our numerical results support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the spatially irregular architecture of the visual cortex. We discuss several alternative scenarios and additional factors that may improve the agreement between model solutions and biological observations.
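Dynamics of this pattern-forming type are commonly integrated with a semi-implicit spectral scheme. Below is a minimal single-map sketch of a complex-valued Swift-Hohenberg equation as a generic stand-in, with assumed parameters; it is not the paper's coupled model:

```python
import numpy as np

# Semi-implicit spectral integration of dz/dt = r*z - (kc^2 + Lap)^2 z - |z|^2 z.
# The phase of z is a toy orientation-preference map; zeros of z are pinwheels.
N = 128                          # grid points per side
L = 16 * 2 * np.pi               # domain size, ~16 pattern wavelengths (kc = 1)
r, kc, dt, steps = 0.1, 1.0, 0.1, 2000

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
lin = r - (kc**2 - k2)**2        # linear growth rate in Fourier space

rng = np.random.default_rng(0)
z = 0.01 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

for _ in range(steps):
    nonlin = -np.abs(z)**2 * z                 # explicit cubic saturation
    z_hat = np.fft.fft2(z) + dt * np.fft.fft2(nonlin)
    z_hat /= (1.0 - dt * lin)                  # implicit linear step
    z = np.fft.ifft2(z_hat)

op_map = np.angle(z) / 2                       # preferred orientation in [-pi/2, pi/2)
```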
Minimization of phonon-tunneling dissipation in mechanical resonators
Micro- and nanoscale mechanical resonators have recently emerged as ubiquitous devices for use in advanced technological applications, for example in mobile communications and inertial sensors, and as novel tools for fundamental scientific endeavors. Their performance is in many cases limited by the deleterious effects of mechanical damping. Here, we report a significant advancement towards understanding and controlling support-induced losses in generic mechanical resonators. We begin by introducing an efficient numerical solver, based on the "phonon-tunneling" approach, capable of predicting the design-limited damping of high-quality mechanical resonators. Further, through careful device engineering, we isolate support-induced losses and perform the first rigorous experimental test of the strong geometric dependence of this loss mechanism. Our results are in excellent agreement with theory, demonstrating the predictive power of our approach. In combination with recent progress on complementary dissipation mechanisms, our phonon-tunneling solver represents a major step towards accurate prediction of the mechanical quality factor.
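Since independent loss channels add inversely in the quality factor, a design-limited support-loss prediction combines with other mechanisms as in this toy sketch (the Q values are illustrative assumptions, not results from the paper):

```python
# Q relates stored energy to per-cycle loss: Q = 2*pi * E_stored / dE_cycle.
# Independent dissipation channels therefore add as inverse Q values:
#   1/Q_total = 1/Q_support + 1/Q_material + ...
def total_q(*q_channels: float) -> float:
    return 1.0 / sum(1.0 / q for q in q_channels)

q_support = 2e6    # e.g. a design-limited value from a phonon-tunneling solver
q_material = 5e5   # complementary mechanisms (material loss, thermoelastic, ...)
print(total_q(q_support, q_material))   # ~4.0e5: the lossiest channel dominates
```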
Efficient propagation of systematic uncertainties from calibration to analysis with the SnowStorm method in IceCube
Efficient treatment of systematic uncertainties that depend on a large number of nuisance parameters is a persistent difficulty in particle physics and astrophysics experiments. Where low-level effects are not amenable to simple parameterization or re-weighting, analyses often rely on discrete simulation sets to quantify the effects of nuisance parameters on key analysis observables. Such methods may become computationally untenable for analyses requiring high-statistics Monte Carlo with a large number of nuisance degrees of freedom, especially in cases where these degrees of freedom parameterize the shape of a continuous distribution. In this paper we present a method for treating systematic uncertainties in a computationally efficient and comprehensive manner, using a single simulation set with multiple and continuously varied nuisance parameters. This method is demonstrated for the case of the depth-dependent effective dust distribution within the IceCube Neutrino Telescope.
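The core idea can be caricatured in a few lines: generate one ensemble in which every simulated event carries its own nuisance-parameter value drawn from a prior, then extract the dependence of analysis histograms on that parameter from the single set. The toy below is an assumption-laden illustration of that idea, not IceCube code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_events = 200_000

# Each simulated event gets its own continuously varied nuisance value.
theta = rng.normal(0.0, 1.0, n_events)
# Toy observable whose distribution shifts slightly with theta.
energy = rng.exponential(1.0 + 0.05 * theta)

# Estimate d(histogram)/d(theta) from the one ensemble by differencing
# the sub-histograms of the upper and lower halves of the theta prior.
bins = np.linspace(0.0, 5.0, 21)
hi, _ = np.histogram(energy[theta > 0], bins=bins)
lo, _ = np.histogram(energy[theta < 0], bins=bins)
dtheta = theta[theta > 0].mean() - theta[theta < 0].mean()
gradient = (hi - lo) / dtheta
print(gradient)
```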
Design and performance of the first IceAct demonstrator at the South Pole
In this paper we describe the first results of IceAct, a compact imaging air-Cherenkov telescope operating in coincidence with the IceCube Neutrino Observatory (IceCube) at the geographic South Pole. An array of IceAct telescopes (referred to as the IceAct project) is under consideration as part of the IceCube-Gen2 extension to IceCube. Surface detectors in general will be a powerful tool in IceCube-Gen2 for distinguishing astrophysical neutrinos from the dominant backgrounds of cosmic-ray induced atmospheric muons and neutrinos: the IceTop array is already in place as part of IceCube, but has a high energy threshold. Although the duty cycle will be lower for the IceAct telescopes than for the present IceTop tanks, the IceAct telescopes may prove to be more effective at lowering the detection threshold for air showers. Additionally, small imaging air-Cherenkov telescopes in combination with IceTop, the deep IceCube detector, or other future detector systems might improve measurements of the composition of the cosmic-ray energy spectrum. In this paper we present measurements from a first 7-pixel imaging air-Cherenkov telescope demonstrator, proving the capability of this technology to measure air showers at the South Pole in coincidence with IceTop and the deep IceCube detector.
Time-Integrated Neutrino Source Searches with 10 Years of IceCube Data
This Letter presents the results from pointlike neutrino source searches using ten years of IceCube data collected between April 6, 2008 and July 10, 2018. We evaluate the significance of an astrophysical signal from a pointlike source by looking for an excess of clustered neutrino events with energies typically above ∼1 TeV among the background of atmospheric muons and neutrinos. We perform a full-sky scan, a search within a selected source catalog, a catalog population study, and three stacked Galactic catalog searches. The most significant point in the northern hemisphere from scanning the sky is coincident with the Seyfert II galaxy NGC 1068, which was included in the source catalog search. The excess at the coordinates of NGC 1068 is inconsistent with background expectations at the level of 2.9σ after accounting for statistical trials from the entire catalog. The combination of this result with excesses observed at the coordinates of three other sources, including TXS 0506+056, suggests that, collectively, correlations with sources in the northern catalog are inconsistent with background at 3.3σ significance. The southern catalog is consistent with background. These results, all based on searches for a cumulative neutrino signal integrated over the ten years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.
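For context, searches of this kind typically maximize a standard unbinned point-source likelihood over the number of signal events n_s, where each event contributes a signal PDF value S_i (from its angular distance to the source and its energy) and a background PDF value B_i. A toy sketch of the resulting test statistic, with invented per-event PDF values standing in for a real event sample, is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
N = 10_000
S = rng.exponential(1.0, N)   # toy per-event signal PDF values
B = np.ones(N)                # toy per-event background PDF values (background-like sample)

# L(ns) = prod_i [ ns/N * S_i + (1 - ns/N) * B_i ]
def neg_log_L(ns: float) -> float:
    return -np.sum(np.log(ns / N * S + (1 - ns / N) * B))

res = minimize_scalar(neg_log_L, bounds=(0.0, 1000.0), method='bounded')
TS = 2 * (neg_log_L(0.0) - res.fun)   # likelihood-ratio test statistic vs ns = 0
print(res.x, TS)                       # near zero for this background-like toy
```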
