
    Outcomes Associated With Oral Anticoagulants Plus Antiplatelets in Patients With Newly Diagnosed Atrial Fibrillation.

    Importance: Patients with nonvalvular atrial fibrillation at risk of stroke should receive oral anticoagulants (OAC). However, approximately 1 in 8 patients in the Global Anticoagulant Registry in the Field (GARFIELD-AF) registry are treated with antiplatelet (AP) drugs in addition to OAC, with or without documented vascular disease or other indications for AP therapy. Objective: To investigate baseline characteristics and outcomes of patients who were prescribed OAC plus AP therapy vs OAC alone. Design, Setting, and Participants: Prospective cohort study of the GARFIELD-AF registry, an international, multicenter, observational study of adults aged 18 years and older with recently diagnosed nonvalvular atrial fibrillation and at least 1 risk factor for stroke enrolled between March 2010 and August 2016. Data were extracted for analysis in October 2017 and analyzed from April 2018 to June 2019. Exposure: Participants received either OAC plus AP or OAC alone. Main Outcomes and Measures: Clinical outcomes were measured over 3 and 12 months. Outcomes were adjusted for 40 covariates, including baseline conditions and medications. Results: A total of 24 436 patients (13 438 [55.0%] male; median [interquartile range] age, 71 [64-78] years) were analyzed. Among eligible patients, those receiving OAC plus AP therapy had a greater prevalence of cardiovascular indications for AP, including acute coronary syndromes (22.0% vs 4.3%), coronary artery disease (39.1% vs 9.8%), and carotid occlusive disease (4.8% vs 2.0%). Over 1 year, patients treated with OAC plus AP had significantly higher incidence rates of stroke (adjusted hazard ratio [aHR], 1.49; 95% CI, 1.01-2.20) and any bleeding event (aHR, 1.41; 95% CI, 1.17-1.70) than those treated with OAC alone. These patients did not show evidence of reduced all-cause mortality (aHR, 1.22; 95% CI, 0.98-1.51). 
Risk of acute coronary syndrome was not reduced in patients taking OAC plus AP compared with OAC alone (aHR, 1.16; 95% CI, 0.70-1.94). Patients treated with OAC plus AP also had higher rates of all clinical outcomes than those treated with OAC alone over the short term (3 months). Conclusions and Relevance: This study challenges the practice of coprescribing OAC plus AP unless there is a clear indication for adding AP to OAC therapy in newly diagnosed atrial fibrillation

    Evolving antithrombotic treatment patterns in patients with newly diagnosed atrial fibrillation: UK findings from the GARFIELD-AF registry

    There has been a longstanding problem of suboptimal use of anticoagulation in patients with atrial fibrillation (AF); however, there is limited evidence relating to the period following the commencement of non-vitamin K antagonist oral anticoagulants (NOACs) in the UK. Using UK data from the GARFIELD-AF registry, we investigated the evolving pattern of antithrombotic therapy in newly diagnosed AF patients with ≥1 additional risk factor for stroke

    Screening for atrial fibrillation – a cross-sectional survey of healthcare professionals in primary care

    Introduction: Screening for atrial fibrillation (AF) in primary care has been recommended; however, the views of healthcare professionals (HCPs) are not known. This study aimed to determine the opinions of HCPs about the feasibility of implementing screening within a primary care setting. Methods: A cross-sectional mixed-methods census survey of 418 HCPs from 59 inner-city practices (Nottingham, UK) was conducted between October and December 2014. Postal and web surveys ascertained data on existing methods, knowledge, skills, attitudes, barriers, and facilitators to AF screening using Likert-scale and open-ended questions. Responses, categorized by HCP group, were summarized as proportions with 95% CIs, adjusted for clustering by practice; free-text responses were analyzed thematically. Results: At least one General Practitioner (GP) responded from 48 (81%) practices. There were 212/418 (51%) respondents: 118/229 GPs, 67/129 nurses [50 practice nurses; 17 Nurse Practitioners (NPs)], and 27/60 healthcare assistants (HCAs). 39/48 (81%) practices had an ECG machine and diagnosed AF in-house. Non-GP HCPs reported having less knowledge about ECG interpretation, diagnosing, and treating AF than GPs. A greater proportion of non-GP HCPs than GPs reported they would benefit from ECG training specifically for AF diagnosis [proportion (95% CI) GPs: 11.9% (6.8–20.0); HCAs: 37.0% (21.7–55.5); nurses: 44.0% (30.0–59.0); NPs: 41.2% (21.9–63.7)]. Barriers included time, workload, and capacity to undertake screening activities, while training to diagnose and manage AF was identified as a necessary facilitator. Conclusion: Inner-city general practices were found to have adequate access to resources for AF screening. Non-GP HCPs are enthusiastic about up-skilling in the diagnosis and management of AF and may have a role in future AF screening. 
However, organisational barriers, such as lack of time, staff, and capacity, would need to be overcome for AF screening to be feasibly implemented within primary care

    Accuracy of Vitalograph lung monitor as a screening test for COPD in primary care.

    Microspirometry may be useful as the second stage of a screening pathway among patients reporting respiratory symptoms. We assessed the sensitivity and specificity of the Vitalograph® lung monitor compared with post-bronchodilator confirmatory spirometry (ndd Easy on-PC) among primary care chronic obstructive pulmonary disease (COPD) patients within the Birmingham COPD cohort. We report a case-control analysis within 71 general practices in the UK. Eligible patients were aged ≥40 years and were either on a clinical COPD register or reported chronic respiratory symptoms on a questionnaire. Participants performed pre- and post-bronchodilator microspirometry prior to confirmatory spirometry. Of the 544 participants, COPD was confirmed in 337 according to post-bronchodilator confirmatory spirometry. Pre-bronchodilator, using the lower limit of normal (LLN) as a cut-point, the lung monitor had a sensitivity of 50.5% (95% CI 45.0%, 55.9%) and a specificity of 99.0% (95% CI 96.6%, 99.9%) in our sample. Using a fixed ratio of FEV1/FEV6 < 0.7 to define obstruction in the lung monitor, sensitivity increased (58.8%; 95% CI 53.0, 63.8) while specificity was virtually identical (98.6%; 95% CI 95.8, 99.7). Within our sample, the optimal cut-point for the lung monitor was FEV1/FEV6 < 0.78, with a sensitivity of 82.8% (95% CI 78.3%, 86.7%) and a specificity of 85.0% (95% CI 79.4%, 89.6%). Test performance of the lung monitor was unaffected by bronchodilation. The lung monitor could be used in primary care without a bronchodilator, using a simple FEV1/FEV6 ratio, as part of a screening pathway for COPD among patients reporting respiratory symptoms
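The screening logic described above reduces to classifying obstruction by an FEV1/FEV6 cut-point and tabulating agreement with confirmatory spirometry. A minimal sketch of that calculation follows; the cut-point 0.78 is the study's reported optimum, but the patient records and the resulting sensitivity/specificity values are hypothetical, not the study's data.

```python
# Classify airflow obstruction from microspirometry by FEV1/FEV6 ratio,
# then summarise performance against a confirmatory-spirometry diagnosis.

def is_obstructed(fev1: float, fev6: float, cutoff: float = 0.78) -> bool:
    """Flag obstruction when the FEV1/FEV6 ratio falls below the cut-point."""
    return fev1 / fev6 < cutoff

def sensitivity_specificity(results):
    """results: iterable of (screen_positive, disease_present) boolean pairs."""
    tp = sum(1 for s, d in results if s and d)
    fn = sum(1 for s, d in results if not s and d)
    tn = sum(1 for s, d in results if not s and not d)
    fp = sum(1 for s, d in results if s and not d)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical records: (FEV1 in litres, FEV6 in litres, COPD confirmed?)
cohort = [
    (1.9, 2.80, True),   # ratio 0.68 -> screen positive, true positive
    (2.4, 2.90, False),  # ratio 0.83 -> screen negative, true negative
    (1.5, 2.40, True),   # ratio 0.63 -> screen positive, true positive
    (2.3, 2.85, True),   # ratio 0.81 -> screen negative, false negative
    (3.1, 3.60, False),  # ratio 0.86 -> screen negative, true negative
]
screened = [(is_obstructed(fev1, fev6), copd) for fev1, fev6, copd in cohort]
sens, spec = sensitivity_specificity(screened)
```

Lowering the cut-point toward the fixed 0.7 ratio trades sensitivity for specificity, which is the trade-off the abstract quantifies.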

    SMART: Self-Management of Anticoagulation, a Randomised Trial [ISRCTN19313375]

    Background: Oral anticoagulation monitoring has traditionally taken place in secondary care because of the need for a laboratory blood test, the international normalised ratio (INR). The development of reliable near-patient testing (NPT) systems for INR estimation has facilitated devolution of testing to primary care. Patient self-management is a logical progression from the primary care model. This study will be the first to randomise non-selected patients in primary care to either self-management or standard care. Method: The study was a multicentre randomised controlled trial with patients recruited from 49 general practices. Those suitable for inclusion were aged 18 or over, with a long-term indication for oral anticoagulation, and had taken warfarin for at least six months. Patients randomised to the intervention arm attended at least two practice-based training sessions, 1 week apart. Each patient was assessed on their capability to undertake self-management. If considered capable, they were given a near-patient INR testing monitor, test strips, and quality control material for home testing. Patients managed their own anticoagulation for a period of 12 months and performed their INR test every 2 weeks. Control patients continued with their pre-study care, attending either hospital- or practice-based anticoagulant clinics. Discussion: The methodology used in this trial will address concerns from previous trials about selection bias and relevance to the UK health service. The study will give a clearer understanding of the benefits of self-management in terms of clinical and cost effectiveness and patient preference

    The impact of physical activity on fatigue and quality of life in lung cancer patients: a randomised controlled trial protocol

    Background: People with lung cancer have substantial symptom burden and more unmet needs than the general cancer population. Physical activity (PA) has been shown to positively influence quality of life (QOL), fatigue and daily functioning in the curative treatment of people with breast and colorectal cancers and lung diseases, as well as in palliative settings. A randomised controlled trial (RCT) is needed to determine if lung cancer patients benefit from structured PA intervention. The Physical Activity in Lung Cancer (PAL) trial is designed to evaluate the impact of a 2-month PA intervention on fatigue and QOL in patients with non-resectable lung cancer. Biological mechanisms will also be studied. Methods/design: A multi-centre RCT with patients randomised to usual care or a 2-month PA programme, involving supervised PA sessions including a behavioural change component and home-based PA. QOL questionnaires, disease and functional status and body composition will be assessed at baseline, 2, 4 and 6 months follow-up. The primary endpoint is comparative levels of fatigue between the 2 arms. Secondary endpoints include: QOL, functional abilities and physical function. Exploratory endpoints include: anxiety, depression, distress, dyspnoea, PA behaviour, fitness, hospitalisations, survival, cytokines and insulin-like growth factor levels. Discussion: This study will provide high-level evidence of the effect of PA programmes on cancer-related fatigue and QOL in patients with advanced lung cancer. If positive, the study has the potential to change care for people with cancer using a simple, inexpensive intervention to improve their QOL and help them maintain independent function for as long as possible. Trial registration: Australian New Zealand Clinical Trials Registry No. ACTRN12609000971235. © 2012 Dhillon et al.; licensee BioMed Central Ltd

    Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).

    Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice, but can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment and allows analysis of differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6, and up to 24, months before enrolment), who were identified retrospectively (with baseline and partial follow-up data collected from medical records) and then followed prospectively for 0-18 months (such that the total follow-up was 24 months; data collection between Dec 2009 and Oct 2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar 2010 and Oct 2011 and were followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than in the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016). 
Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take account of differences in registry design and the impact of the recall and survivorship biases incurred with retrospective enrolment. Clinical Trial Registration: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF: NCT01090362
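The mortality comparison above is expressed as incidence rates per 100 person-years of follow-up. A minimal sketch of that calculation, with an illustrative event count and person-time denominator (not the registry's underlying data):

```python
# Incidence rate per 100 person-years: events divided by total person-time
# at risk, scaled to a per-100-person-year denominator.

def rate_per_100_person_years(events: int, person_years: float) -> float:
    """Return the incidence rate as events per 100 person-years."""
    return 100.0 * events / person_years

# e.g. 150 deaths observed over 4934 person-years of prospective follow-up
rate = rate_per_100_person_years(150, 4934.0)  # ~3.04 per 100 person-years
```

Person-years accumulate per patient from the first study visit to death, loss to follow-up, or the one-year cut-off, which is why the denominator differs from the simple patient count.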

    Global and national burden of diseases and injuries among children and adolescents between 1990 and 2013

    Importance The literature focuses on mortality among children younger than 5 years. Comparable information on nonfatal health outcomes among these children and the fatal and nonfatal burden of diseases and injuries among older children and adolescents is scarce. Objective To determine levels and trends in the fatal and nonfatal burden of diseases and injuries among younger children (aged <5 years), older children (aged 5-9 years), and adolescents (aged 10-19 years) between 1990 and 2013 in 188 countries from the Global Burden of Disease (GBD) 2013 study. Evidence Review Data from vital registration, verbal autopsy studies, maternal and child death surveillance, and other sources covering 14 244 site-years (ie, years of cause of death data by geography) from 1980 through 2013 were used to estimate cause-specific mortality. Data from 35 620 epidemiological sources were used to estimate the prevalence of the diseases and sequelae in the GBD 2013 study. Cause-specific mortality for most causes was estimated using the Cause of Death Ensemble Model strategy. For some infectious diseases (eg, HIV infection/AIDS, measles, hepatitis B) where the disease process is complex or the cause of death data were insufficient or unavailable, we used natural history models. For most nonfatal health outcomes, DisMod-MR 2.0, a Bayesian metaregression tool, was used to meta-analyze the epidemiological data to generate prevalence estimates. Findings Of the 7.7 (95% uncertainty interval [UI], 7.4-8.1) million deaths among children and adolescents globally in 2013, 6.28 million occurred among younger children, 0.48 million among older children, and 0.97 million among adolescents. In 2013, the leading causes of death were lower respiratory tract infections among younger children (905 059 deaths; 95% UI, 810 304-998 125), diarrheal diseases among older children (38 325 deaths; 95% UI, 30 365-47 678), and road injuries among adolescents (115 186 deaths; 95% UI, 105 185-124 870). 
Iron deficiency anemia was the leading cause of years lived with disability among children and adolescents, affecting 619 (95% UI, 618-621) million in 2013. Large between-country variations exist in mortality from leading causes among children and adolescents. Countries with rapid declines in all-cause mortality between 1990 and 2013 also experienced large declines in most leading causes of death, whereas countries with the slowest declines had stagnant or increasing trends in the leading causes of death. In 2013, Nigeria had a 12% global share of deaths from lower respiratory tract infections and a 38% global share of deaths from malaria. India had 33% of the world’s deaths from neonatal encephalopathy. Half of the world’s diarrheal deaths among children and adolescents occurred in just 5 countries: India, Democratic Republic of the Congo, Pakistan, Nigeria, and Ethiopia. Conclusions and Relevance Understanding the levels and trends of the leading causes of death and disability among children and adolescents is critical to guide investment and inform policies. Monitoring these trends over time is also key to understanding where interventions are having an impact. Proven interventions exist to prevent or treat the leading causes of unnecessary death and disability among children and adolescents. The findings presented here show that these are underused and give guidance to policy makers in countries where more attention is needed

    Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017

    Background: The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 comparative risk assessment (CRA) is a comprehensive approach to risk factor quantification that offers a useful tool for synthesising evidence on risks and risk outcome associations. With each annual GBD study, we update the GBD CRA to incorporate improved methods, new risks and risk outcome pairs, and new data on risk exposure levels and risk outcome associations. Methods: We used the CRA framework developed for previous iterations of GBD to estimate levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs), by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017. This study included 476 risk outcome pairs that met the GBD study criteria for convincing or probable evidence of causation. We extracted relative risk and exposure estimates from 46 749 randomised controlled trials, cohort studies, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. Using the counterfactual scenario of theoretical minimum risk exposure level (TMREL), we estimated the portion of deaths and DALYs that could be attributed to a given risk. We explored the relationship between development and risk exposure by modelling the relationship between the Socio-demographic Index (SDI) and risk-weighted exposure prevalence and estimated expected levels of exposure and risk-attributable burden by SDI. 
Finally, we explored temporal changes in risk-attributable DALYs by decomposing those changes into six main component drivers of change as follows: (1) population growth; (2) changes in population age structures; (3) changes in exposure to environmental and occupational risks; (4) changes in exposure to behavioural risks; (5) changes in exposure to metabolic risks; and (6) changes due to all other factors, approximated as the risk-deleted death and DALY rates, where the risk-deleted rate is the rate that would be observed had we reduced the exposure levels to the TMREL for all risk factors included in GBD 2017. Findings: In 2017, 34.1 million (95% uncertainty interval [UI] 33.3-35.0) deaths and 1.21 billion (1.14-1.28) DALYs were attributable to GBD risk factors. Globally, 61.0% (59.6-62.4) of deaths and 48.3% (46.3-50.2) of DALYs were attributed to the GBD 2017 risk factors. When ranked by risk-attributable DALYs, high systolic blood pressure (SBP) was the leading risk factor, accounting for 10.4 million (9.39-11.5) deaths and 218 million (198-237) DALYs, followed by smoking (7.10 million [6.83-7.37] deaths and 182 million [173-193] DALYs), high fasting plasma glucose (6.53 million [5.23-8.23] deaths and 171 million [144-201] DALYs), high body-mass index (BMI; 4.72 million [2.99-6.70] deaths and 148 million [98.6-202] DALYs), and short gestation for birthweight (1.43 million [1.36-1.51] deaths and 139 million [131-147] DALYs). In total, risk-attributable DALYs declined by 4.9% (3.3-6.5) between 2007 and 2017. In the absence of demographic changes (ie, population growth and ageing), changes in risk exposure and risk-deleted DALYs would have led to a 23.5% decline in DALYs during that period. Conversely, in the absence of changes in risk exposure and risk-deleted DALYs, demographic changes would have led to an 18.6% increase in DALYs during that period. 
The ratios of observed risk exposure levels to exposure levels expected based on SDI (O/E ratios) increased globally for unsafe drinking water and household air pollution between 1990 and 2017. This result suggests that development is occurring more rapidly than are changes in the underlying risk structure in a population. Conversely, nearly universal declines in O/E ratios for smoking and alcohol use indicate that, for a given SDI, exposure to these risks is declining. In 2017, the leading Level 4 risk factor for age-standardised DALY rates was high SBP in four super-regions: central Europe, eastern Europe, and central Asia; north Africa and Middle East; south Asia; and southeast Asia, east Asia, and Oceania. The leading risk factor was smoking in the high-income super-region, high BMI in Latin America and the Caribbean, and unsafe sex in sub-Saharan Africa. O/E ratios for unsafe sex in sub-Saharan Africa were notably high, and those for alcohol use in north Africa and the Middle East were notably low. Interpretation: By quantifying levels and trends in exposures to risk factors and the resulting disease burden, this assessment offers insight into where past policy and programme efforts might have been successful and highlights current priorities for public health action. Decreases in behavioural, environmental, and occupational risks have largely offset the effects of population growth and ageing, in relation to trends in absolute burden. Conversely, the combination of increasing metabolic risks and population ageing will probably continue to drive the increasing trends in non-communicable diseases at the global level, which presents both a public health challenge and opportunity. We see considerable spatiotemporal heterogeneity in levels of risk exposure and risk-attributable burden. 
Although levels of development underlie some of this heterogeneity, O/E ratios show risks for which countries are overperforming or underperforming relative to their level of development. As such, these ratios provide a benchmarking tool to help to focus local decision making. Our findings reinforce the importance of both risk exposure monitoring and epidemiological research to assess causal connections between risks and health outcomes, and they highlight the usefulness of the GBD study in synthesising data to draw comprehensive and robust conclusions that help to inform good policy and strategic health planning
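The attribution step this abstract describes — comparing observed exposure with the counterfactual TMREL distribution — can be sketched with the generalised population attributable fraction (PAF). The exposure categories, relative risks, and burden total below are hypothetical illustrations, not GBD estimates.

```python
# Generalised PAF for a categorical exposure: compare average population risk
# under the observed exposure distribution with risk under the counterfactual
# (theoretical minimum risk exposure level, TMREL) distribution, then scale
# total burden by that fraction to get the attributable burden.

def paf(observed, counterfactual, relative_risks):
    """observed / counterfactual: category prevalences (each summing to 1);
    relative_risks: RR for each category relative to the TMREL category."""
    risk_obs = sum(p * rr for p, rr in zip(observed, relative_risks))
    risk_cf = sum(p * rr for p, rr in zip(counterfactual, relative_risks))
    return (risk_obs - risk_cf) / risk_obs

# Hypothetical three-level exposure: TMREL, moderate, high
observed = [0.5, 0.3, 0.2]       # current exposure distribution
tmrel = [1.0, 0.0, 0.0]          # counterfactual: everyone at TMREL
rr = [1.0, 1.5, 2.5]             # relative risk per category

fraction = paf(observed, tmrel, rr)
attributable_dalys = fraction * 1_000_000  # of a hypothetical 1M total DALYs
```

With everyone shifted to the TMREL the counterfactual average risk is 1.0 by construction, so the PAF is simply the excess average risk as a share of the observed average risk.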