Identifying signals of potentially harmful medications in pregnancy: use of the double false discovery rate method to adjust for multiple testing.
AIMS: Surveillance of medication use in pregnancy is essential to identify associations between first trimester medications and congenital anomalies (CAs). Medications in the same Anatomical Therapeutic Chemical classes may have similar effects. We aimed to use this information to improve the detection of potential teratogens in CA surveillance data. METHODS: Data on 15 058 malformed fetuses with first trimester medication exposures from 1995 to 2011 were available from EUROmediCAT, a network of European CA registries. For each medication-CA combination, the proportion of the CA in fetuses with the medication was compared to the proportion of the CA in all other fetuses in the dataset. The Australian classification system was used to identify high-risk medications in order to compare two methods of controlling the false discovery rate (FDR): a single FDR applied across all combinations, and a double FDR incorporating groupings of medications. RESULTS: There were 28 765 potential combinations (523 medications × 55 CAs) for analysis. An FDR cut-off of 50% resulted in a reasonable effective workload, at which the single FDR gave rise to eight medication signals (three high-risk medications) and the double FDR identified 16 signals (six high-risk). Over a range of FDR cut-offs, the double FDR identified more high-risk medications as signals, for comparable effective workloads. CONCLUSIONS: The double FDR method appears to improve the detection of potential teratogens in comparison to the single FDR, while maintaining a low risk of false positives. Use of double FDR is recommended in routine signal detection analyses of CA data.
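The abstract does not include an implementation. A minimal Python sketch of the single-stage Benjamini-Hochberg FDR, and of a two-stage "double" FDR that screens each medication group before testing within it, might look like the following; the choice of each group's smallest p-value as the group-level screening statistic is one common variant, and whether it matches the paper's exact procedure is an assumption.

```python
import numpy as np

def bh_fdr(pvals, q):
    """Benjamini-Hochberg step-up: boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m       # i/m * q for the i-th smallest p-value
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])             # largest rank meeting the criterion
        rejected[order[: k + 1]] = True
    return rejected

def double_fdr(pvals_by_group, q):
    """Two-stage (double) FDR sketch: screen medication groups on their smallest
    p-value with BH, then apply BH again within the groups that pass."""
    groups = list(pvals_by_group)
    stage1 = bh_fdr([min(pvals_by_group[g]) for g in groups], q)
    return {g: bh_fdr(pvals_by_group[g], q)
            for g, kept in zip(groups, stage1) if kept}
```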
Efficacy and safety of ablation for people with non-paroxysmal atrial fibrillation.
BACKGROUND: The optimal rhythm management strategy for people with non-paroxysmal (persistent or long-standing persistent) atrial fibrillation is currently not well defined. Antiarrhythmic drugs have been the mainstay of therapy, but recently, in people who have not responded to antiarrhythmic drugs, ablation (catheter and surgical) has emerged as an alternative to maintain sinus rhythm and avoid long-term atrial fibrillation complications. However, evidence from randomised trials about the efficacy and safety of ablation in non-paroxysmal atrial fibrillation is limited. OBJECTIVES: To determine the efficacy and safety of ablation (catheter and surgical) in people with non-paroxysmal (persistent or long-standing persistent) atrial fibrillation compared to antiarrhythmic drugs. SEARCH METHODS: We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE Ovid, Embase Ovid, conference abstracts, clinical trial registries, and the Health Technology Assessment Database, from their inception to 1 April 2016, with no language restrictions. SELECTION CRITERIA: We included randomised trials evaluating the effect of radiofrequency catheter ablation (RFCA) or surgical ablation compared with antiarrhythmic drugs in adults with non-paroxysmal atrial fibrillation, regardless of any concomitant underlying heart disease, with at least 12 months of follow-up. DATA COLLECTION AND ANALYSIS: Two review authors independently selected studies and extracted data. We evaluated risk of bias using the Cochrane 'Risk of bias' tool. We calculated risk ratios (RRs) for dichotomous data with 95% confidence intervals (CIs), using a fixed-effect model when heterogeneity was low (I² ≤ 40%) and a random-effects model when heterogeneity was moderate or substantial (I² > 40%). Using the GRADE approach, we evaluated the quality of the evidence and used the GRADE profiler (GRADEpro) to import data from Review Manager 5 to create 'Summary of findings' tables. MAIN RESULTS: We included three randomised trials with 261 participants (mean age: 60 years) comparing RFCA (159 participants) to antiarrhythmic drugs (102 participants) for non-paroxysmal atrial fibrillation. We generally assessed the included studies as having low or unclear risk of bias across multiple domains, with reported outcomes generally lacking precision due to low event rates. Evidence showed that RFCA was superior to antiarrhythmic drugs in achieving freedom from atrial arrhythmias (RR 1.84, 95% CI 1.17 to 2.88; 3 studies, 261 participants; low-quality evidence), reducing the need for cardioversion (RR 0.62, 95% CI 0.47 to 0.82; 3 studies, 261 participants; moderate-quality evidence), and reducing cardiac-related hospitalisation (RR 0.27, 95% CI 0.10 to 0.72; 2 studies, 216 participants; low-quality evidence) at 12 months' follow-up. There was substantial uncertainty surrounding the effect of RFCA on significant bradycardia (or need for a pacemaker) (RR 0.20, 95% CI 0.02 to 1.63; 3 studies, 261 participants; low-quality evidence), periprocedural complications, and other safety outcomes (RR 0.94, 95% CI 0.16 to 5.68; 3 studies, 261 participants; very low-quality evidence). AUTHORS' CONCLUSIONS: In people with non-paroxysmal atrial fibrillation, evidence suggests a superiority of RFCA over antiarrhythmic drugs in achieving freedom from atrial arrhythmias, reducing the need for cardioversion, and reducing cardiac-related hospitalisations. There was uncertainty surrounding the effect of RFCA on significant bradycardia (or need for a pacemaker), periprocedural complications, and other safety outcomes.
Evidence should be interpreted with caution, as event rates were low and the quality of evidence ranged from moderate to very low.
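For illustration, pooled risk ratios like those above can be reproduced from per-trial 2×2 counts with inverse-variance fixed-effect pooling of log risk ratios. The review used Review Manager 5, which defaults to Mantel-Haenszel pooling, so this numpy sketch is an approximation rather than the review's exact method.

```python
import numpy as np

def pooled_rr_fixed(events_t, n_t, events_c, n_c):
    """Inverse-variance fixed-effect pooling of log risk ratios across trials.
    events_*/n_* are per-study arrays for the treatment and control arms."""
    e_t, e_c = np.asarray(events_t, float), np.asarray(events_c, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c   # variance of each log RR
    w = 1 / var                                    # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    return np.exp(pooled), ci
```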
Evaluation of a community-based hypertension improvement program (ComHIP) in Ghana: data from a baseline survey
BACKGROUND: Ghana faces an increasing burden of non-communicable disease, with hypertension rates estimated as high as 36% in adults. Despite these high rates, hypertension control remains very poor in Ghana (4%). The current project aims to implement and evaluate a community-based programme to raise awareness of hypertension and to improve its treatment and control in the Eastern Region of Ghana. In this paper, we present the findings of the baseline cross-sectional survey, focusing on hypertension prevalence, awareness, treatment, and control. METHODS: To evaluate the ComHIP project, a quasi-experimental design consisting of before-and-after evaluations is being implemented in the intervention and comparison districts. A cohort study component is being implemented in the intervention district to assess hypertension control. Background anthropometric and clinical data collected as part of the baseline survey were analysed in Stata Version 11. We examined the characteristics of individuals associated with the baseline study outcomes using logistic regression models. RESULTS: We interviewed 2400 respondents (1200 each from the comparison and intervention districts), although final sample sizes after data cleaning were 1170 participants in the comparison district and 1167 in the intervention district. With the exception of ethnicity, the comparison and intervention districts were comparable. Overall, 32.4% of the study respondents were hypertensive (31.4% in the comparison site and 33.4% in the intervention site); 46.2% of hypertensive individuals were aware of a previous diagnosis of hypertension (44.7% in the comparison site and 47.7% in the intervention site), and only around 9% of these were being treated in either arm. Hypertension control was 1.3% overall (0.5% in the comparison site and 2.1% in the intervention site). Age was a predictor of having hypertension, as were increasing body mass index (BMI), waist circumference, and hip circumference. After adjusting for age, the risk factors with the greatest association with hypertension were being overweight (aOR = 2.30; 95% CI 1.53-3.46) or obese (aOR = 3.61; 95% CI 2.37-5.51). Older individuals were more likely to be aware of their hypertension status than younger people. After adjusting for age, people with a family history of hypertension or CVD, or with an unhealthy waist-hip ratio, were more likely to be aware of their hypertension status. CONCLUSIONS: The high burden of hypertension in the studied population, coupled with high awareness yet very low levels of hypertension treatment and control, requires in-depth investigation of the bottlenecks to treatment and control. The low hypertension treatment and control rates, despite current and previous general educational programmes, particularly in the intervention district, may suggest that such programmes are not necessarily having an impact on the health of the population.
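As an illustration of the age-adjusted logistic models reported above, a short statsmodels sketch follows; the data file and column names are hypothetical, not from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("comhip_baseline.csv")  # hypothetical extract of the survey data

# BMI category vs hypertension, adjusted for age; 'normal' BMI is the reference level
model = smf.logit(
    "hypertensive ~ C(bmi_category, Treatment('normal')) + age", data=df
).fit()

print(np.exp(model.params))      # adjusted odds ratios (aORs)
print(np.exp(model.conf_int()))  # 95% confidence intervals
```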
ASCORE: an up-to-date cardiovascular risk score for hypertensive patients reflecting contemporary clinical practice developed using the (ASCOT-BPLA) trial data.
A number of risk scores already exist to predict cardiovascular (CV) events. However, scores developed with data collected some time ago might not accurately predict the CV risk of contemporary hypertensive patients who benefit from more modern treatments and management. Using data from the randomised clinical trial Anglo-Scandinavian Cardiac Outcomes Trial-BPLA (ASCOT-BPLA), with 15 955 hypertensive patients without previous CV disease receiving contemporary preventive CV management, we developed a new risk score predicting the 5-year risk of a first CV event (CV death, myocardial infarction or stroke). Cox proportional hazards models were used to develop a risk equation from baseline predictors. The final risk model (ASCORE) included age, sex, smoking, diabetes, previous blood pressure (BP) treatment, systolic BP, total cholesterol, high-density lipoprotein cholesterol, fasting glucose and creatinine as baseline variables. A simplified model (ASCORE-S) excluding laboratory variables was also derived. Both models showed very good internal validity. User-friendly integer score tables are reported for both models. Applying the latest Framingham risk score to our data significantly overpredicted the observed 5-year risk of the composite CV outcome. We conclude that risk scores derived using older databases (such as Framingham) may overestimate the CV risk of patients receiving current BP treatments; therefore, 'updated' risk scores are needed for current patients.
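A sketch of how such a 5-year risk equation can be derived and applied follows; the lifelines library stands in for whatever software the authors used, the file and column names are hypothetical, and categorical predictors (sex, smoking, etc.) are assumed to be numerically coded.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ascot_bpla_baseline.csv")  # hypothetical trial extract
predictors = ["age", "sex", "smoking", "diabetes", "prior_bp_treatment",
              "systolic_bp", "total_chol", "hdl_chol", "fasting_glucose", "creatinine"]

# Fit a Cox proportional hazards model on time to first CV event
cph = CoxPHFitter()
cph.fit(df[predictors + ["years_to_event", "cv_event"]],
        duration_col="years_to_event", event_col="cv_event")

# Predicted 5-year risk of a first CV event: 1 minus survival at t = 5 years
surv_5y = cph.predict_survival_function(df[predictors], times=[5.0])
risk_5y = 1.0 - surv_5y.loc[5.0]
```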
Feasibility of evaluation of the natural history of kidney disease in the general population using electronic healthcare records
Background: Knowledge about the nature of long-term changes in kidney function in the general population is sparse. We aim to identify whether primary care electronic healthcare records capture sufficient information to study the natural history of kidney disease.
Methods: The National Chronic Kidney Disease Audit database covers ∼14% of the population of England and Wales. Availability of repeat serum creatinine tests was evaluated by risk factors for chronic kidney disease (CKD), and individual changes over time in estimated glomerular filtration rate (eGFR) were estimated using linear regression. To assess the sensitivity of slope estimation to the method of eGFR evaluation, we compared laboratory-reported eGFR with eGFR recalculated from laboratory-reported creatinine, to uncover any impact of historical creatinine calibration issues.
Results: Twenty-five per cent of all adults, 92% of people with diabetes and 96% of those with confirmed CKD had at least three creatinine tests, spanning a median of 5.7, 6.2 and 6.1 years, respectively. Median changes in laboratory-reported eGFR (mL/min/1.73 m²/year) were −1.32 (CKD) and −0.60 (diabetes). Median changes in recalculated eGFR were −0.98 (CKD) and −0.11 (diabetes), underestimating decline. The magnitude of underestimation (and the between-patient variation in that magnitude) decreased with deteriorating eGFR. For CKD Stages 3, 4 and 5 (at latest eGFR), median slopes were −1.27, −2.49 and −3.87 for laboratory-reported eGFR and −0.89, −2.26 and −3.75 for recalculated eGFR.
Conclusions: Evaluation of long-term changes in renal function will be possible in those at greatest risk if methods are identified to overcome creatinine calibration problems. Bias will be reduced by focussing on patients with confirmed CKD.
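A minimal sketch of the per-patient slope estimation described in the Methods; the three-test minimum comes from the abstract, while the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd

def egfr_slope(dates, egfr):
    """OLS slope of eGFR over time for one patient, in mL/min/1.73 m²/year."""
    d = pd.to_datetime(dates)
    t_years = (d - d.min()).dt.days / 365.25
    slope, _intercept = np.polyfit(t_years, egfr, deg=1)
    return slope

tests = pd.read_csv("ckd_audit_tests.csv")  # hypothetical: patient_id, date, egfr
slopes = tests.groupby("patient_id").apply(
    lambda g: egfr_slope(g["date"], g["egfr"]) if len(g) >= 3 else np.nan)
```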
Development of a composite outcome score for a complex intervention - measuring the impact of Community Health Workers.
BACKGROUND: In health services research, composite scores to measure changes in health-seeking behaviour and uptake of services do not exist. We describe the rationale and analytical considerations for a composite primary outcome for primary care research. We simulate its use in a large hypothetical population and use it to calculate sample sizes. We apply it within the context of a proposed cluster randomised controlled trial (RCT) of a Community Health Worker (CHW) intervention. METHODS: We define the outcome as the proportion of the services (immunizations, screening tests, stop-smoking clinics) received by household members, out of those they were eligible to receive. First, we simulated a population household structure (by age and sex), based on household composition data from the 2011 England and Wales census. The ratio of eligible to received services was calculated for each simulated household based on published eligibility criteria and service uptake rates, and was used to calculate sample size scenarios for a cluster RCT of a CHW intervention. We assume varying intervention percentage effects and varying levels of clustering. RESULTS: Assuming no disease risk factor clustering at the household level, 11.7% of households in the hypothetical population of 20,000 households were eligible for no services, 26.4% for 1, 20.7% for 2, 15.3% for 3 and 25.8% for 4 or more. To demonstrate a small CHW intervention percentage effect (a 10% improvement in uptake of services among those who would not otherwise have taken them up, additionally assuming an intra-class correlation of 0.01 between households served by different CHWs), around 4,000 households would be needed in each of the intervention and control arms. This equates to 40 CHWs (each serving 100 households) needed in the intervention arm. If the CHWs were more effective (20%), then only 170 households would be needed in each of the intervention and control arms. CONCLUSIONS: This is a useful first step towards a process-centred composite score of practical value in complex community-based interventions. First, it is likely to result in increased statistical power compared with multiple outcomes. Second, it avoids over-emphasis of any single outcome from a complex intervention.
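The sample-size logic can be sketched with the standard two-proportion formula inflated by the design effect 1 + (m − 1) × ICC for clusters of m households per CHW. The baseline uptake proportions below are hypothetical, and the abstract's own figures come from a composite outcome whose variance this simple sketch will not exactly reproduce.

```python
from scipy.stats import norm

def households_per_arm(p_control, p_intervention, icc, cluster_size,
                       alpha=0.05, power=0.80):
    """Households needed per arm for a two-proportion comparison in a cluster RCT."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_control + p_intervention) / 2
    n = ((z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar)
         / (p_control - p_intervention) ** 2)    # unadjusted individual-level n
    deff = 1 + (cluster_size - 1) * icc          # design effect for clustering
    return int(round(n * deff))

# e.g. uptake improving from 50% to 55%, ICC 0.01, 100 households per CHW
print(households_per_arm(0.50, 0.55, icc=0.01, cluster_size=100))
```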
Why we are losing the war against COVID-19 on the data front and how to reverse the situation
With over five million COVID-19-positive cases declared, more than 30,000 deaths and more than two million patients recovered, we would expect that the highly digitalised health systems of high-income countries would have collected, processed and analysed large quantities of clinical data from COVID-19 patients. Those analyses should have served to answer important clinical questions, such as: what are the risk factors for becoming infected? What are good clinical variables to predict prognosis? What kinds of patients are more likely to survive mechanical ventilation? Are there clinical sub-phenotypes of the disease? All these, and many more, are crucial questions to improve our clinical strategies against the epidemic and save as many lives as possible until we find a vaccine and effective treatments. One might assume that in the era of Big Data and Machine Learning there would be an army of scientists crunching petabytes of clinical data to solve these questions. However, nothing could be further from the truth. Our health systems have proven completely unprepared to generate, in a timely manner, a flow of clinical data that could feed these analyses. Despite gigabytes of data being generated every day, the vast majority is locked in secure hospital data servers and is not being made available for analysis. Routinely collected clinical data are, by and large, regarded as a tool to inform about individual patients, and not as a key resource to answer clinical questions through statistical analysis. The initiatives to extract COVID-19 clinical data are often promoted by private groups of individuals rather than by the health systems, and they are uncoordinated and inefficient. The consequence is that we have more clinical data than in any other epidemic in history, but we are failing to analyse it quickly enough to make a difference. In this paper we expose this situation and suggest concrete ideas that health systems could implement to dynamically analyse their routine clinical data, effectively becoming "learning health systems" and reversing the current situation.
The atmospheric science of JEM-EUSO
An Atmospheric Monitoring System (AMS) is a critical suite of instruments for JEM-EUSO, whose aim is to detect Ultra-High Energy Cosmic Rays (UHECR) and Extremely High Energy Cosmic Rays (EHECR) from space. The AMS comprises an advanced space-qualified infrared camera and a LIDAR, with cross-checks provided by ground-based and airborne Global Light System (GLS) stations. Moreover, the Slow Data Mode of JEM-EUSO has been proven crucial for the UV background analysis by comparing the UV and IR images. It will also contribute to the investigation of atmospheric effects seen in the data from the GLS, or even to our understanding of Space Weather.
The Efficacy and Duration of Protection of Pneumococcal Conjugate Vaccines Against Nasopharyngeal Carriage: A Meta-Regression Model.
BACKGROUND: Pneumococcal conjugate vaccines (PCVs) reduce disease largely through their impact on nasopharyngeal (NP) carriage acquisition of Streptococcus pneumoniae, a precondition for developing any form of pneumococcal disease. We aimed to estimate the vaccine efficacy (VEC) and duration of protection of PCVs against S. pneumoniae carriage acquisition through meta-regression models. METHODS: We identified intervention studies providing NP carriage estimates among vaccinated and unvaccinated children at any time after completion of a full vaccination schedule. We calculated VEC for PCV7 serotypes, grouped as well as individually, and explored cross-protective efficacy against 6A. Efficacy estimates over time were obtained using a Bayesian meta-logistic regression approach, with time since completion of vaccination as a covariate. RESULTS: We used data from 22 carriage surveys (15 independent studies) from 5 to 64 months after the last PCV dose, including 14,298 children. The aggregate VEC for all PCV7 serotypes 6 months after completion of the vaccination schedule was 57% (95% credible interval: 50-65%), varying by serotype from 38% (19F) to 80%. Our model provides evidence of sustained protection of PCVs for several years, with an aggregate VEC of 42% (95% credible interval: 19-54%) at 5 years, although the waning differed between serotypes. We also found evidence of cross-protection against 6A, with a VEC of 39% 6 months after a complete schedule, decreasing to 0 within 5 years postvaccination. CONCLUSION: Our results suggest that PCVs confer reasonable protection against acquisition of pneumococcal carriage of the 7 studied serotypes, for several years after vaccination, albeit with differences across serotypes.
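The paper fits a Bayesian meta-logistic regression; as a rough frequentist analogue (not the authors' model), per-survey log odds ratios of carriage can be regressed on time since the last dose with inverse-variance weights, giving VEC(t) = 1 − OR(t). Inputs below are per-survey counts, arranged as hypothetical arrays.

```python
import numpy as np

def ve_over_time(a, b, c, d, t_years):
    """a, b: carriers/non-carriers among vaccinated; c, d: among unvaccinated;
    t_years: time since the completed schedule, one entry per survey.
    Returns a function t -> estimated vaccine efficacy against carriage."""
    a, b, c, d, t = (np.asarray(x, float) for x in (a, b, c, d, t_years))
    log_or = np.log((a / b) / (c / d))
    w = 1 / (1 / a + 1 / b + 1 / c + 1 / d)   # inverse-variance weights
    X = np.column_stack([np.ones_like(t), t])
    # Weighted least squares: solve (X'WX) beta = X'W y
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * log_or))
    return lambda t_new: 1 - np.exp(beta[0] + beta[1] * t_new)
```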
