Creation of an Interactive Dashboard to Facilitate Early Detection of Cardiac Amyloidosis in African American Veterans
Background: Cardiac amyloidosis (CA) is an underdiagnosed cause of heart failure (HF) that disproportionately affects men of African descent. Without a standardized screening method, and with patient health information scattered across the electronic health record, clinicians must integrate data spanning multiple disease systems.

Objectives: The aim of this project was to create a dashboard to facilitate identification of high-risk African American (AA) veterans who would benefit from CA screening tests. This paper describes the development of the dashboard and identifies barriers and opportunities in dashboard development.

Methods: Three Veterans Affairs (VA) health systems participated in this project. Microsoft Structured Query Language (SQL) Report Builder was used to create an interactive dashboard that refreshes daily through stored procedures run via SQL Server Integration Services and the SQL Server Agent. Inclusion criteria were AA patients younger than 90 years with a history of HF. The 2023 American College of Cardiology/American Heart Association consensus statement on the diagnosis and treatment of transthyretin CA served as the evidence base for the inclusion criteria and parameters of interest.

Results: The final dashboard contained 1,732 HF patients who met the inclusion criteria, of whom 949 (55%) were identified as high risk. Challenges included the time required for dashboard development, limited team experience in specifying dashboard requirements, identifying informatics counterparts at all sites, and standardizing data across three VA hospitals.

Conclusion: In this clinical improvement project, we created a dashboard that identifies AA veterans with HF at risk for CA and that can help mitigate the impact of CA on this population.
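The cohort logic in the Methods section (AA patients under 90 with a history of HF, then a high-risk subset) can be sketched outside the dashboard itself. A minimal Python illustration, assuming a hypothetical patient record; the actual high-risk parameters come from the 2023 ACC/AHA consensus statement and are represented here only by a placeholder count:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    race: str        # self-identified race from the EHR
    age: int         # age in years
    has_hf: bool     # any history of heart failure
    risk_flags: int  # hypothetical: number of consensus-statement CA parameters met

def meets_inclusion(p: Patient) -> bool:
    """Inclusion criteria from the abstract: AA patients under 90 with a history of HF."""
    return p.race == "African American" and p.age < 90 and p.has_hf

def is_high_risk(p: Patient, threshold: int = 1) -> bool:
    """Hypothetical high-risk rule: at least `threshold` risk parameters present."""
    return meets_inclusion(p) and p.risk_flags >= threshold

cohort = [
    Patient("African American", 72, True, 2),   # included, high risk
    Patient("African American", 91, True, 3),   # excluded: age >= 90
    Patient("African American", 65, True, 0),   # included, not high risk
]
included = [p for p in cohort if meets_inclusion(p)]
high_risk = [p for p in included if is_high_risk(p)]
print(len(included), len(high_risk))  # 2 1
```

In the actual project this filtering runs in SQL stored procedures refreshed daily; the sketch only shows the shape of the criteria.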
D-meson semileptonic decays to pseudoscalars from four-flavor lattice QCD
We present lattice-QCD calculations of the hadronic form factors for the semileptonic decays D→πℓν, D→Kℓν, and D_s→Kℓν. Our calculation uses the highly improved staggered quark (HISQ) action for all valence and sea quarks and includes MILC ensembles spanning a range of lattice spacings. At most lattice spacings, an ensemble with physical-mass light quarks is included. The HISQ action allows all the quarks to be treated with the same relativistic light-quark action, allowing for nonperturbative renormalization using partial conservation of the vector current. We combine our results with experimental measurements of the differential decay rates to determine |V_cd| and |V_cs|. This result for |V_cd| is the most precise to date, with a lattice-QCD error that is, for the first time for the semileptonic extraction, at the same level as the experimental error. Using recent measurements from BES III, we also give the first-ever determination of |V_cd| from D_s→Kℓν. Our results also furnish new Standard Model calculations of the lepton flavor universality ratios for these decays, which are consistent within uncertainties with experimental measurements. Our extractions of |V_cd| and |V_cs|, when combined with a value for |V_cb|, provide the most precise test of second-row CKM unitarity, finding agreement with unitarity at the level of one standard deviation.

Comment: 92 pages
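The second-row unitarity test amounts to checking |V_cd|² + |V_cs|² + |V_cb|² = 1 within propagated uncertainties. A minimal sketch with linear error propagation; the input values below are illustrative PDG-scale magnitudes, not the paper's results:

```python
import math

def second_row_unitarity(vcd, s_cd, vcs, s_cs, vcb, s_cb):
    """Return (|Vcd|^2 + |Vcs|^2 + |Vcb|^2 - 1) and its propagated 1-sigma error.

    Linear propagation: d(v^2) = 2*v*dv, combined in quadrature assuming the
    three matrix elements are uncorrelated.
    """
    total = vcd**2 + vcs**2 + vcb**2
    sigma = math.sqrt((2*vcd*s_cd)**2 + (2*vcs*s_cs)**2 + (2*vcb*s_cb)**2)
    return total - 1.0, sigma

# Illustrative values only (NOT the paper's extractions):
deficit, sigma = second_row_unitarity(0.221, 0.004, 0.975, 0.006, 0.0408, 0.0014)
print(f"deviation = {deficit:+.4f} +/- {sigma:.4f} ({abs(deficit)/sigma:.1f} sigma)")
```

With inputs of this precision the deviation sits well inside one standard deviation, the kind of agreement the abstract reports; the uncertainty budget is dominated by |V_cs|, since its central value is largest.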
The anomalous magnetic moment of the muon in the Standard Model
194 pages, 103 figures; bib files for the citation references are available from: https://muon-gm2-theory.illinois.edu

We review the present status of the Standard Model calculation of the anomalous magnetic moment of the muon. This is performed in a perturbative expansion in the fine-structure constant α and is broken down into pure QED, electroweak, and hadronic contributions. The pure QED contribution is by far the largest and has been evaluated up to and including O(α⁵) with negligible numerical uncertainty. The electroweak contribution is suppressed by (m_μ/M_W)² and only shows up at the level of the seventh significant digit. It has been evaluated up to two loops and is known to better than one percent. Hadronic contributions are the most difficult to calculate and are responsible for almost all of the theoretical uncertainty. The leading hadronic contribution appears at O(α²) and is due to hadronic vacuum polarization, whereas at O(α³) the hadronic light-by-light scattering contribution appears. Given the low characteristic scale of this observable, these contributions have to be calculated with nonperturbative methods, in particular, dispersion relations and the lattice approach to QCD. The largest part of this review is dedicated to a detailed account of recent efforts to improve the calculation of these two contributions with either a data-driven, dispersive approach or a first-principles, lattice-QCD approach. The final result reads a_μ(SM) = 116 591 810(43) × 10⁻¹¹ and is smaller than the Brookhaven measurement by 3.7σ. The experimental uncertainty will soon be reduced by up to a factor of four by the new experiment currently running at Fermilab, and also by the future J-PARC experiment. This, and the prospects to further reduce the theoretical uncertainty in the near future, which are also discussed here, make this quantity one of the most promising places to look for evidence of new physics.
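The quoted tension is simply the theory-experiment difference divided by the combined uncertainty. A minimal sketch, assuming the white-paper SM value and the commonly quoted BNL E821 measurement of a_μ = 116 592 089(63) × 10⁻¹¹ (the experimental number is not stated in the abstract above):

```python
import math

def pull(theory, s_th, exp, s_exp):
    """Significance of the experiment-theory difference, errors added in quadrature."""
    return (exp - theory) / math.sqrt(s_th**2 + s_exp**2)

# Values in units of 1e-11: SM prediction and BNL E821 measurement (assumed here).
a_sm, s_sm = 116_591_810, 43
a_exp, s_exp = 116_592_089, 63
print(f"{pull(a_sm, s_sm, a_exp, s_exp):.1f} sigma")  # 3.7 sigma
```

The difference of 279 × 10⁻¹¹ against a combined uncertainty of about 76 × 10⁻¹¹ reproduces the 3.7σ figure; shrinking the experimental error by a factor of four would make the combined uncertainty theory-dominated.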
PB2420 EFFICIENCY OF DUAL ANTIPLATELET THERAPY AFTER PERCUTANEOUS CORONARY INTERVENTION
Applied Veterinary Informatics: Development of a Semantic and Domain-Specific Method to Construct a Canine Data Repository
Animals are used to study the pathogenesis of various human diseases, but typically as animal models with induced disease. However, companion animals develop disease spontaneously in a way that mirrors disease development in humans. The purpose of this study is to develop a semantic and domain-specific method to enable construction of a data repository from a veterinary hospital that would be useful for future studies. We developed a two-phase method that combines semantic and domain-specific approaches to construct a canine data repository of clinical data collected during routine care at the Matthew J Ryan Veterinary Hospital of the University of Pennsylvania (PennVet). Our framework consists of two phases: (1) a semantic data-cleaning phase and (2) a domain-specific data-cleaning phase. We validated our data repository using a gold standard of known breed predispositions for certain diseases (i.e., mitral valve disease, atrial fibrillation, and osteosarcoma). Our two-phase method allowed us to maximize data retention (99.8% of data retained) while ensuring the quality of our result. Our final population contained 84,405 dogs treated between 2000 and 2017 from 194 distinct dog breeds. We observed the expected breed associations with mitral valve disease, atrial fibrillation, and osteosarcoma (P < 0.05) after adjusting for multiple comparisons. Precision ranged from 60.0 to 83.3 for the three diseases (avg. 74.2) and recall ranged from 31.6 to 83.3 (avg. 53.3). Our study describes a two-phase method to construct a clinical data repository using canine data obtained during routine clinical care at a veterinary hospital.
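The validation step scores detected breed-disease associations against the gold standard of known predispositions, yielding the precision and recall figures quoted above. A minimal sketch of that scoring, with hypothetical breed-disease pairs for illustration only:

```python
def precision_recall(predicted: set, gold: set) -> tuple:
    """Precision and recall (in percent) of predicted breed-disease
    associations against a gold standard of known predispositions."""
    tp = len(predicted & gold)  # true positives: associations in both sets
    precision = 100 * tp / len(predicted) if predicted else 0.0
    recall = 100 * tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical associations for one disease (pairs chosen for illustration only)
gold = {("Cavalier King Charles Spaniel", "mitral valve disease"),
        ("Dachshund", "mitral valve disease"),
        ("Chihuahua", "mitral valve disease")}
pred = {("Cavalier King Charles Spaniel", "mitral valve disease"),
        ("Dachshund", "mitral valve disease"),
        ("Greyhound", "mitral valve disease")}
p, r = precision_recall(pred, gold)
print(f"precision {p:.1f}%, recall {r:.1f}%")  # precision 66.7%, recall 66.7%
```

In the study itself the predicted associations come from the repository after the two-phase cleaning, with significance assessed at P < 0.05 after multiple-comparison adjustment.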
