Deaths attributable to diabetes in the United States: comparison of data sources and estimation approaches
OBJECTIVE: The goal of this research was to identify the fraction of deaths attributable to diabetes in the United States.
RESEARCH DESIGN AND METHODS: We estimated population attributable fractions (PAF) for cohorts aged 30–84 who were surveyed in the National Health Interview Survey (NHIS) between 1997 and 2009 (N = 282,322) and in the National Health and Nutrition Examination Survey (NHANES) between 1999 and 2010 (N = 21,814). Cohort members were followed prospectively for mortality through 2011. We identified diabetes status using self-reported diagnoses in both NHIS and NHANES and using HbA1c in NHANES. Hazard ratios associated with diabetes were estimated using Cox models adjusted for age, sex, race/ethnicity, educational attainment, and smoking status.
RESULTS: We found a high degree of consistency between data sets and definitions of diabetes in the hazard ratios, estimates of diabetes prevalence, and estimates of the proportion of deaths attributable to diabetes. The proportion of deaths attributable to diabetes was estimated to be 11.5% using self-reports in NHIS, 11.7% using self-reports in NHANES, and 11.8% using HbA1c in NHANES. Among the sub-groups that we examined, the PAF was highest among obese persons at 19.4%. The proportion of deaths in which diabetes was assigned as the underlying cause of death (3.3–3.7%) severely understated the contribution of diabetes to mortality in the United States.
CONCLUSIONS: Diabetes may represent a more prominent factor in American mortality than is commonly appreciated, reinforcing the need for robust population-level interventions aimed at diabetes prevention and care.
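The abstract does not state which PAF estimator was used; as a rough illustration only, here is a minimal sketch of Levin's classical formula, with the adjusted hazard ratio standing in for the relative risk. The prevalence and hazard ratio values in the example are hypothetical, not the study's estimates.

```python
# Minimal sketch of a population attributable fraction (PAF) via
# Levin's formula. The study's actual estimator may differ (e.g.,
# variants using exposure prevalence among decedents); the inputs
# below are illustrative only, not values from the paper.

def paf_levin(prevalence: float, hazard_ratio: float) -> float:
    """Levin's formula: PAF = p(HR - 1) / (1 + p(HR - 1))."""
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: 10% diabetes prevalence, adjusted HR of 2.2.
print(f"PAF = {paf_levin(0.10, 2.2):.1%}")  # ~10.7%
```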
Rediscovering Renaissance Recipes: Digital Presentation for a 16th Century Text
This project seeks to create a web-based system for working with a French text from 1509, Platine en francoys, which has been transcribed into an XML (Extensible Markup Language)-based file format using the conventions of TEI. By incorporating web technologies such as NodeJS, the application provides section-by-section navigation that allows versions of the text to be viewed in multiple configurations. The options include a side-by-side presentation mode for easy comparison of the original against a regularized spelling or other variants. A facsimile-focused view is also planned, along with tools leveraging the specialized markup and focused searches on non-recipe text, recipes, and ingredients. These features are expected to allow a deeper understanding of the text and to serve as a foundation for future development work within the ongoing cross-disciplinary computing/language collaboration.
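The project itself is built on NodeJS; purely to illustrate how a side-by-side original/regularized view could be driven by the markup, the Python sketch below assumes the transcription uses TEI's standard <choice>/<orig>/<reg> mechanism. The file name and that particular encoding choice are assumptions, not details confirmed by the abstract.

```python
# Illustrative sketch (the real system uses NodeJS): extract paired
# original/regularized readings from a TEI transcription, assuming
# the standard TEI <choice>/<orig>/<reg> encoding. The file name is
# a hypothetical placeholder.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def spelling_pairs(path: str):
    tree = etree.parse(path)
    for choice in tree.iterfind(".//tei:choice", TEI_NS):
        orig = choice.findtext("tei:orig", default="", namespaces=TEI_NS)
        reg = choice.findtext("tei:reg", default="", namespaces=TEI_NS)
        yield orig, reg

for orig, reg in spelling_pairs("platine_en_francoys.xml"):
    print(f"{orig!r} -> {reg!r}")  # side-by-side original vs. regularized
```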
Improved classification for compositional data using the α-transformation
In compositional data analysis an observation is a vector containing non-negative values, only the relative sizes of which are considered to be of interest. Without loss of generality, a compositional vector can be taken to be a vector of proportions that sum to one. Data of this type arise in many areas including geology, archaeology, biology, economics and political science. In this paper we investigate methods for classification of compositional data. Our approach centres on the idea of using the α-transformation to transform the data and then to classify the transformed data via regularised discriminant analysis and the k-nearest neighbours algorithm. Using the α-transformation generalises two rival approaches in compositional data analysis, one (when α = 1) that treats the data as though they were Euclidean, ignoring the compositional constraint, and another (when α = 0) that employs Aitchison's centred log-ratio transformation. A numerical study with several real datasets shows that whether using α = 1 or α = 0 gives better classification performance depends on the dataset, and moreover that using an intermediate value of α can sometimes give better performance than using either 1 or 0.
Comment: This is a 17-page preprint and has been accepted for publication at the Journal of Classification.
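For concreteness, here is a minimal sketch of the α-transformation applied before k-nearest neighbours classification. It omits the Helmert sub-matrix multiplication used in the published definition (an isometry that removes the redundant dimension), and the two-class compositional data are synthetic, chosen only to make the example run.

```python
# Minimal sketch: alpha-transformation followed by k-NN classification.
# The published definition additionally multiplies by a Helmert
# sub-matrix to drop the redundant dimension; that isometry is omitted
# here for brevity. Data are synthetic, for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def alpha_transform(x: np.ndarray, alpha: float) -> np.ndarray:
    """Row-wise alpha-transformation of compositions (rows sum to 1)."""
    if alpha == 0.0:
        logx = np.log(x)                      # limit: centred log-ratio
        return logx - logx.mean(axis=1, keepdims=True)
    u = x**alpha / (x**alpha).sum(axis=1, keepdims=True)
    D = x.shape[1]
    return (D * u - 1.0) / alpha              # alpha = 1: affine map of raw data

# Synthetic 3-part compositions from two classes.
rng = np.random.default_rng(0)
X = rng.dirichlet([4, 2, 1], 200)
y = np.repeat([0, 1], 100)
X[y == 1] = rng.dirichlet([2, 4, 1], 100)

for a in (0.0, 0.5, 1.0):                     # compare candidate alphas
    knn = KNeighborsClassifier(n_neighbors=5).fit(alpha_transform(X, a), y)
    print(a, knn.score(alpha_transform(X, a), y))  # in-sample, illustrative
```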
A data-based power transformation for compositional data
Compositional data analysis is carried out either by neglecting the compositional constraint and applying standard multivariate data analysis, or by transforming the data using the logs of the ratios of the components. In this work we examine a more general transformation which includes both approaches as special cases. It is a power transformation and involves a single parameter, α. The transformation has two equivalent versions. The first is the stay-in-the-simplex version, which is the power transformation as defined by Aitchison in 1986. The second version, which is a linear transformation of the power transformation, is a Box-Cox type transformation. We discuss a parametric way of estimating the value of α, namely maximization of its profile likelihood (assuming multivariate normality of the transformed data), and the equivalence between the two versions is exhibited. Other ways include maximization of the correct classification probability in discriminant analysis and maximization of the pseudo R-squared (as defined by Aitchison in 1986) in linear regression. We examine the relationship between the α-transformation, the raw data approach and the isometric log-ratio transformation. Furthermore, we also define a suitable family of metrics corresponding to the family of α-transformations and consider the corresponding family of Fréchet means.
Comment: Published in the proceedings of the 4th international workshop on Compositional Data Analysis. http://congress.cimne.com/codawork11/frontal/default.as
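A simplified sketch of the profile-likelihood idea: grid over α, project each candidate's transformed data through a Helmert sub-matrix to remove the redundant dimension, and score with a Gaussian log-likelihood. The full profile likelihood in the paper also carries the Jacobian of the transformation, which this sketch omits, so it is an approximation of the procedure rather than a faithful implementation.

```python
# Simplified profile-likelihood-style grid search for alpha: fit a
# multivariate normal to the transformed data for each candidate and
# keep the best. Omits the Jacobian term of the full profile
# likelihood described in the paper.
import numpy as np
from scipy.linalg import helmert
from scipy.stats import multivariate_normal

def alpha_transform(x: np.ndarray, a: float) -> np.ndarray:
    # As in the classification sketch above (a > 0 assumed here).
    u = x**a / (x**a).sum(axis=1, keepdims=True)
    return (x.shape[1] * u - 1.0) / a

def gaussian_loglik(z: np.ndarray) -> float:
    dist = multivariate_normal(z.mean(axis=0), np.cov(z, rowvar=False))
    return dist.logpdf(z).sum()

def pick_alpha(x: np.ndarray, grid=np.linspace(0.05, 1.0, 20)) -> float:
    H = helmert(x.shape[1])   # (D-1) x D Helmert sub-matrix
    return max(grid, key=lambda a: gaussian_loglik(alpha_transform(x, a) @ H.T))
```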
Saddlepoint approximations for the normalizing constant of Fisher–Bingham distributions on products of spheres and Stiefel manifolds
In an earlier paper Kume & Wood (2005) showed how the normalizing constant of the Fisher–Bingham distribution on a sphere can be approximated with high accuracy using a univariate saddlepoint density approximation. In this sequel, we extend the approach to a more general setting and derive saddlepoint approximations for the normalizing constants of multicomponent Fisher–Bingham distributions on Cartesian products of spheres, and Fisher–Bingham distributions on Stiefel manifolds. In each case, the approximation for the normalizing constant is essentially a multivariate saddlepoint density approximation for the joint distribution of a set of quadratic forms in normal variables. Both first-order and second-order saddlepoint approximations are considered. Computational algorithms, numerical results and theoretical properties of the approximations are presented. In the challenging high-dimensional settings considered in this paper the saddlepoint approximations perform very well in all examples considered.
Some key words: Directional data; Fisher matrix distribution; Kent distribution; Orientation statistics
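To make the core ingredient concrete, the sketch below implements a first-order univariate saddlepoint density approximation for a quadratic form Q = Σᵢ λᵢXᵢ² with independent Xᵢ ~ N(μᵢ, 1), the building block the abstract describes. It assumes all λᵢ > 0 and q > 0, and is an illustration rather than the authors' implementation.

```python
# First-order saddlepoint density approximation for
# Q = sum_i lambda_i * X_i**2, X_i ~ N(mu_i, 1), all lambda_i > 0.
# Illustrative sketch; not the authors' code.
import numpy as np
from scipy.optimize import brentq

def saddlepoint_density(q: float, lam, mu) -> float:
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)

    def K(t):   # cumulant generating function of Q
        r = 1.0 - 2.0 * t * lam
        return np.sum(-0.5 * np.log(r) + t * lam * mu**2 / r)

    def K1(t):  # K'(t)
        r = 1.0 - 2.0 * t * lam
        return np.sum(lam / r + lam * mu**2 / r**2)

    def K2(t):  # K''(t)
        r = 1.0 - 2.0 * t * lam
        return np.sum(2.0 * lam**2 / r**2 + 4.0 * lam**2 * mu**2 / r**3)

    upper = 1.0 / (2.0 * lam.max()) - 1e-9   # K finite for t < 1/(2 max lambda)
    t_hat = brentq(lambda t: K1(t) - q, -1e6, upper)  # saddlepoint equation
    return np.exp(K(t_hat) - t_hat * q) / np.sqrt(2.0 * np.pi * K2(t_hat))

print(saddlepoint_density(3.0, lam=[1.0, 0.5, 0.25], mu=[0.0, 1.0, 0.0]))
```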
Characterization of Deficiencies in the Frequency Domain Forced Response Analysis Technique for Supersonic Turbine Bladed Disks
Turbine blades in rocket and jet engine turbomachinery experience enormous harmonic loading conditions. These loads result from the integer number of upstream and downstream stator vanes as well as the other turbine stages. Assessing blade structural integrity is a complex task that requires an initial characterization of whether resonance is possible and then, if that condition is met, a forced response analysis. The standard technique for forced response analysis in rocket engines is to decompose a CFD-generated flow field into its harmonic components and then perform a frequency response analysis at the problematic natural frequencies. Recent CFD analysis and water-flow testing at NASA/MSFC, though, indicate that this technique may miss substantial harmonic and non-harmonic excitation sources that become present in complex flows. A substantial effort has been made to account for this denser spatial Fourier content in frequency response analysis (described in another paper by the author), but the question remains whether the frequency response analysis itself can capture the excitation content sufficiently. Two studies were therefore performed comparing frequency response analysis with transient response analysis of bladed disks subjected to this complex flow environment. The first used a bladed disk with each blade modeled by simple beam elements. Six loading cases were generated by varying a baseline harmonic excitation in different ways based upon cold-flow testing from the Heritage Fuel Air Turbine Test. It was hypothesized that the randomness and other variation from the standard harmonic excitation would reduce the blade structural response, but the results showed little reduction. The second study used a realistic model of a bladed disk excited by the same CFD used in the J2X engine program. It was hypothesized that enforcing periodicity in the CFD (inherent in the frequency response technique) would overestimate the response. Instead, the transient analysis results were up to 10% higher for "clean" nodal diameter excitations and six times larger for "messy" excitations, where substantial Fourier content exists around the main harmonic. Because the bulk of resonance problems are due to the "clean" excitations, a 10% underprediction is not necessarily a problem, especially since the average response in the transient is similar to the frequency response result, so in a realistic finite life calculation the life would be the same. However, in the rare cases when "messy" excitation harmonics are identified as the source of potential resonance concerns, this research indicates that frequency response analysis is inadequate for accurate characterization of blade structural capability.
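As a rough illustration of the harmonic-decomposition step described above, the sketch below pulls a vane-passing harmonic amplitude out of a synthetic pressure time history with an FFT. The rotor speed, vane count, sample rate, and signal are hypothetical stand-ins, not data from the test programs described.

```python
# Sketch of the decomposition step: extract a harmonic amplitude from
# an unsteady blade-loading time history with an FFT. All parameters
# and the signal are synthetic stand-ins.
import numpy as np

rev_hz = 90000 / 60.0          # rotor speed, revs per second (hypothetical)
n_vanes = 23                   # upstream vane count (hypothetical)
fs = 2.0e6                     # sample rate, Hz
t = np.arange(0, 0.01, 1.0 / fs)

# Synthetic pressure trace: vane-passing harmonic plus noise ("messy" content).
p = 1.0 * np.sin(2 * np.pi * n_vanes * rev_hz * t) + 0.2 * np.random.randn(t.size)

spec = np.fft.rfft(p) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = np.argmin(np.abs(freqs - n_vanes * rev_hz))   # bin nearest vane-passing freq
print(f"vane-passing amplitude ~ {2 * np.abs(spec[k]):.2f}")  # ~1.0
```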
Metal-poor stars towards the Galactic bulge:a population potpourri
We present a comprehensive chemical abundance analysis of five red giants and two horizontal branch (HB) stars towards the southern edge of the Galactic bulge, at (l, b) ~ (0°, −11°). Based on high-resolution spectroscopy obtained with the Magellan/MIKE spectrograph, we derived up to 23 chemical element abundances and identify a mixed bag of stars representing various populations in the central regions of the Galaxy. Although cosmological simulations predict that the inner Galaxy was host to the first stars in the Universe, we see no chemical evidence of the ensuing massive supernova explosions: all of our targets exhibit halo-like, solar [Sc/Fe] ratios, in contrast to the low values predicted from Population III nucleosynthesis. One of the targets is a CEMP-s star at [Fe/H] = −2.52 dex, and another is a moderately metal-poor ([Fe/H] = −1.53 dex) CH star with strong enrichment in s-process elements (e.g., [Ba/Fe] = 1.35). These individuals provide the first candidates of these classes of stars towards the bulge. Four of the carbon-normal stars exhibit abundance patterns reminiscent of halo stars across a metallicity range spanning −2.0 to −2.6 dex, i.e., enhanced α-elements and solar Fe-peak and neutron-capture elements, and the remaining one is a regular metal-rich bulge giant. The position, distance, and radial velocity of one of the metal-poor HB stars coincide with simulations of the old trailing arm of the disrupted Sagittarius dwarf galaxy. While their highly uncertain proper motions prohibit a clear kinematic separation, the stars' chemical abundances and distances suggest that these metal-poor candidates, albeit located towards the bulge, are not of the bulge, but rather inner halo stars on orbits that carry them through the central regions. We therefore caution against similar claims of detections of metal-poor stars as true inhabitants of the bulge.
Application of Additively Manufactured Components in Rocket Engine Turbopumps
The use of additive manufacturing technology has the potential to revolutionize the development of turbopump components in liquid rocket engines. When designing turbomachinery with the additive process, several benefits and risks are leveraged relative to a traditional development cycle. This work explores the details and development of a 90,000 RPM Liquid Hydrogen Turbopump for which 90% of the parts were produced using the additive process. This turbopump was designed and developed, and will be tested later this year at Marshall Space Flight Center.
