78 research outputs found
Effect of a telephonic alert system (Healthy Outlook) for patients with chronic obstructive pulmonary disease: cohort study with matched controls
Background: Healthy Outlook was a telephonic alert system for patients with chronic obstructive pulmonary disease (COPD) in the United Kingdom. It used routine meteorological and communicable disease reports to identify times of increased risk to health. We tested its effect on hospital use and mortality.
Methods: Enrolees with a history of hospital admissions were linked to hospital administrative data. They were compared with control patients from local general practices, matched for demographic characteristics, health conditions, previous hospital use and predictive risk scores. We compared unplanned hospital admissions, admissions for COPD, outpatient attendances, planned admissions and mortality over the 12 months following enrolment.
Results: The intervention and matched control groups appeared similar at baseline (n=1,413 in each group). Over the 12 months following enrolment, Healthy Outlook enrolees experienced more COPD admissions than matched controls (adjusted rate ratio 1.26, 95% CI 1.05 to 1.52) and more outpatient attendances (adjusted rate ratio 1.08, 95% CI 1.03 to 1.12). Enrolees also had lower mortality rates over 12 months (adjusted odds ratio 0.61, 95% CI 0.45 to 0.84).
Conclusion: Healthy Outlook did not reduce admission rates, though mortality rates were lower. Findings for hospital utilisation were unlikely to have been affected by confounding.
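The headline results above are adjusted rate ratios with 95% confidence intervals. As a minimal sketch of what such a figure means (the study itself used adjusted models on matched data), an unadjusted rate ratio with a Wald interval on the log scale can be computed as follows; the event counts here are hypothetical, not taken from the paper:

```python
import math

def rate_ratio(events_a, persontime_a, events_b, persontime_b, z=1.96):
    """Unadjusted rate ratio with a Wald 95% CI on the log scale."""
    rr = (events_a / persontime_a) / (events_b / persontime_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(rate ratio)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 300 COPD admissions over 1,413 person-years in the
# intervention group vs 238 over 1,413 person-years in matched controls.
rr, lo, hi = rate_ratio(300, 1413, 238, 1413)
```

A ratio above 1 with a lower confidence limit above 1, as for COPD admissions in the study, indicates a higher event rate among enrolees.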
A comparison of alternative strategies for choosing control populations in observational studies.
Various approaches have been used to select control groups in observational studies: (1) from within the intervention area; (2) from a convenience sample, or randomly chosen areas; (3) from areas matched on area-level characteristics; and (4) nationally. The consequences of the decision are rarely assessed but, as we show, it can have complex impacts on confounding at both the area and individual levels. We began by reanalyzing data collected for an evaluation of a rapid response service on rates of unplanned hospital admission. Balance on observed individual-level variables was better with external than with local controls, after matching. Further, when important prognostic variables were omitted from the matching algorithm, imbalances on those variables were also minimized using external controls. Treatment effects varied markedly depending on the choice of control area, but in the case study the variation was minimal after adjusting for the characteristics of areas. We used simulations to assess relative bias and mean-squared error, as this could not be done in the case study. A particular feature of the simulations was unexplained variation in the outcome between areas. We found that the likely impact of unexplained variation for hospital admissions dwarfed the benefits of better balance on individual-level variables, leading us to prefer local controls in this instance. In other scenarios, in which there was less unexplained variation in the outcome between areas, bias and mean-squared error were optimized using external controls. We identify some general considerations relevant to the choice of control population in observational studies.
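The trade-off described above can be illustrated with a toy simulation (not the authors' simulation design; all parameters here are invented). When outcomes vary between areas for unexplained reasons, external controls carry a different area intercept than the intervention patients, and that unexplained variation can dwarf any gain in individual-level balance:

```python
import random
import statistics

def simulate(n_reps=500, true_effect=0.0, area_sd=1.0, seed=1):
    """Toy comparison of local vs external controls under unexplained
    between-area variation, modelled as area-level random intercepts."""
    rng = random.Random(seed)
    est_local, est_external = [], []
    for _ in range(n_reps):
        area_effects = [rng.gauss(0, area_sd) for _ in range(10)]
        # Intervention patients and local controls share area 0's intercept.
        treated = [area_effects[0] + true_effect + rng.gauss(0, 1) for _ in range(100)]
        local = [area_effects[0] + rng.gauss(0, 1) for _ in range(100)]
        # External controls are drawn from the other nine areas.
        external = [area_effects[rng.randrange(1, 10)] + rng.gauss(0, 1)
                    for _ in range(100)]
        est_local.append(statistics.mean(treated) - statistics.mean(local))
        est_external.append(statistics.mean(treated) - statistics.mean(external))
    mse = lambda ests: statistics.mean((e - true_effect) ** 2 for e in ests)
    return mse(est_local), mse(est_external)

mse_local, mse_external = simulate()
```

With a non-trivial area-level standard deviation, the mean-squared error of the external-control estimate exceeds that of the local-control estimate, mirroring the scenario in which the paper prefers local controls; shrinking `area_sd` towards zero reverses the ordering of the two designs' appeal.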
A comprehensive evaluation of the impact of telemonitoring in patients with long-term conditions and social care needs: protocol for the whole systems demonstrator cluster randomised trial
Background: It is expected that increased demands on services will result from expanding numbers of older people with long-term conditions and social care needs. There is significant interest in the potential for technology to reduce utilisation of health services in these patient populations, including telecare (the remote, automatic and passive monitoring of changes in an individual's condition or lifestyle) and telehealth (the remote exchange of data between a patient and health care professional). The potential of telehealth and telecare technology to improve care and reduce costs is limited by a lack of rigorous evidence of actual impact.
Methods/Design: We are conducting a large scale, multi-site study of the implementation, impact and acceptability of these new technologies. A major part of the evaluation is a cluster-randomised controlled trial of telehealth and telecare versus usual care in patients with long-term conditions or social care needs. The trial involves a number of outcomes, including health care utilisation and quality of life. We describe the broad evaluation and the methods of the cluster randomised trial.
Discussion: If telehealth and telecare technology proves effective, it will provide additional options for health services worldwide to deliver care for populations with high levels of need.
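The defining feature of a cluster-randomised design like this one is that whole clusters (e.g. general practices), not individual patients, are randomised, so every patient in a practice gets the same allocation. A minimal sketch, with hypothetical practice names and arm labels:

```python
import random

def randomise_clusters(cluster_ids, arms=("telehealth", "usual care"), seed=42):
    """Allocate whole clusters (e.g. general practices) to trial arms,
    so every patient within a cluster receives the same allocation."""
    rng = random.Random(seed)
    ids = list(cluster_ids)
    rng.shuffle(ids)
    # Split the shuffled clusters as evenly as possible between the arms.
    half = len(ids) // 2
    allocation = {cid: arms[0] for cid in ids[:half]}
    allocation.update({cid: arms[1] for cid in ids[half:]})
    return allocation

# Hypothetical example: 20 practices randomised 1:1.
practices = [f"practice_{i}" for i in range(20)]
allocation = randomise_clusters(practices)
```

Because outcomes of patients within a cluster are correlated, analyses of such trials must account for clustering rather than treating patients as independent.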
Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice
Effect of telehealth on glycaemic control: analysis of patients with type 2 diabetes in the Whole Systems Demonstrator cluster randomised trial
Background: The Whole Systems Demonstrator was a large, pragmatic, cluster randomised trial that compared telehealth with usual care among 3,230 patients with long-term conditions in three areas of England. Telehealth involved the regular transmission of physiological information such as blood glucose to health professionals working remotely. We examined whether telehealth led to changes in glycosylated haemoglobin (HbA1c) among the subset of patients with type 2 diabetes.
Methods: The general practice electronic medical record was used as the source of information on HbA1c. Effects on HbA1c were assessed using a repeated measures model that included all HbA1c readings recorded during the 12-month trial period, and adjusted for differences in HbA1c readings recorded before recruitment. Secondary analysis averaged multiple HbA1c readings recorded for each individual during the trial period.
Results: 513 of the 3,230 participants were identified as having type 2 diabetes and thus were included in the study. Telehealth was associated with lower HbA1c than usual care during the trial period (difference 0.21% or 2.3 mmol/mol, 95% CI 0.04% to 0.38%, p = 0.013). Among the 457 patients in the secondary analysis, mean HbA1c showed little change for controls following recruitment, but fell for intervention patients from 8.38% to 8.15% (68 to 66 mmol/mol). A higher proportion of intervention patients than controls had HbA1c below the 7.5% (58 mmol/mol) threshold that was targeted by general practices (38.0% vs. 30.4%). This difference, however, did not quite reach statistical significance (adjusted odds ratio 1.63, 95% CI 0.99 to 2.68, p = 0.053).
Conclusions: Telehealth modestly improved glycaemic control in patients with type 2 diabetes over 12 months. The scale of the improvement was consistent with previous meta-analyses but relatively modest, and seems unlikely to produce significant patient benefit.
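The abstract quotes HbA1c in both NGSP % and IFCC mmol/mol units. The two scales are linked by the standard NGSP-IFCC master equation, which reproduces the paired figures reported above:

```python
def ngsp_to_ifcc(percent):
    """Convert HbA1c from NGSP % units to IFCC mmol/mol using the
    standard master equation: IFCC = (NGSP - 2.15) * 10.929."""
    return (percent - 2.15) * 10.929

# The reported pairs: 8.38% -> 68, 8.15% -> 66, and the 7.5% -> 58 threshold.
assert round(ngsp_to_ifcc(8.38)) == 68
assert round(ngsp_to_ifcc(8.15)) == 66
assert round(ngsp_to_ifcc(7.5)) == 58
```

The 0.21% between-group difference likewise converts to roughly 2.3 mmol/mol, since differences scale by the factor 10.929 alone.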
The implications of high bed occupancy rates on readmission rates in England: a longitudinal study
Hospital bed occupancy rates in the English National Health Service have risen to levels considered clinically unsafe. This study assesses the association of increased bed occupancy with changes in the percentage of overnight patients discharged from hospital on a given day, and with their subsequent 30-day readmission rate. Longitudinal panel data methods are used to analyse secondary care records (n = 4,193,590) for 136 non-specialist Trusts between April 2014 and February 2016. The average bed occupancy rate across the study period was 90.4%. A 1% increase in bed occupancy was associated with a 0.49% rise in the discharge rate and a 0.011% increase in the 30-day readmission rate for discharged patients. These associations became more pronounced once bed occupancy exceeded 95%. When bed occupancy rates were high, hospitals discharged a greater proportion of their patients. Those discharged were mostly younger and less clinically complex, suggesting that hospitals are successfully prioritising early discharge amongst the least vulnerable patients. However, while increased bed occupancy was not associated with a substantial increase in overall 30-day readmission rates, the relationship was more pronounced in older and sicker patients, indicating possible shortfalls in discharge processes.
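Longitudinal panel methods of the kind used here typically rely on within-unit variation: each Trust's own occupancy and discharge levels are demeaned so that stable differences between Trusts cannot confound the estimate. A minimal sketch of that within (fixed-effects) estimator, with made-up numbers rather than the study's data:

```python
from collections import defaultdict

def within_slope(panel):
    """Within (fixed-effects) slope: demean x and y inside each unit
    (here, each Trust), then fit OLS through the pooled deviations.
    `panel` is a list of (unit, x, y) tuples."""
    xs, ys = defaultdict(list), defaultdict(list)
    for unit, x, y in panel:
        xs[unit].append(x)
        ys[unit].append(y)
    sxy = sxx = 0.0
    for unit in xs:
        mx = sum(xs[unit]) / len(xs[unit])
        my = sum(ys[unit]) / len(ys[unit])
        for x, y in zip(xs[unit], ys[unit]):
            sxy += (x - mx) * (y - my)
            sxx += (x - mx) ** 2
    return sxy / sxx

# Two hypothetical Trusts whose occupancy and discharge levels differ,
# but which share a common within-Trust slope of 0.5:
panel = [("A", 88, 10), ("A", 92, 12), ("B", 94, 20), ("B", 98, 22)]
```

A pooled regression on these data would mix the between-Trust level difference into the slope; demeaning within each Trust recovers the common within-Trust relationship.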
Cost-effectiveness of telecare for people with social care needs: the Whole Systems Demonstrator cluster randomised trial
Purpose of the study: to examine the costs and cost-effectiveness of ‘second-generation’ telecare added to standard support and care (which could include ‘first-generation’ forms of telecare), compared with standard support and care alone (which could also include ‘first-generation’ forms of telecare).
Design and methods: a pragmatic cluster-randomised controlled trial with nested economic evaluation. A total of 2,600 people with social care needs participated in a trial of community-based telecare in three English local authority areas. In the Whole Systems Demonstrator Telecare Questionnaire Study, 550 participants were randomised to intervention and 639 to control. Participants who were offered the telecare intervention received a package of equipment and monitoring services for 12 months, additional to their standard health and social care services. The control group received usual health and social care.
Primary outcome measure: incremental cost per quality-adjusted life year (QALY) gained. The analyses took a health and social care perspective.
Results: cost per additional QALY was £297,000. Cost-effectiveness acceptability curves indicated that the probability of cost-effectiveness at a willingness-to-pay of £30,000 per QALY gained was only 16%. Sensitivity analyses combining variations in equipment price and support cost parameters yielded a cost-effectiveness ratio of £161,000 per QALY.
Implications: while the QALY gain in the intervention group was similar to that for controls, social and health services costs were higher. Second-generation telecare did not appear to be a cost-effective addition to usual care, assuming a commonly accepted willingness to pay for QALYs.
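The two headline quantities in such evaluations, the incremental cost-effectiveness ratio (ICER) and the net monetary benefit at a willingness-to-pay threshold, can be sketched as follows. The incremental cost and QALY values below are hypothetical, chosen only so the ratio matches the reported £297,000 per QALY:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

def net_monetary_benefit(delta_cost, delta_qaly, wtp):
    """Positive net monetary benefit at a willingness-to-pay (wtp) threshold
    means the intervention is cost-effective at that threshold."""
    return wtp * delta_qaly - delta_cost

# Hypothetical increments: 2,970 GBP extra cost for 0.01 extra QALYs.
assert round(icer(2970, 0.01)) == 297000
# At the 30,000 GBP/QALY threshold this is not cost-effective:
assert net_monetary_benefit(2970, 0.01, 30000) < 0
```

A cost-effectiveness acceptability curve like the one in the study plots, across a range of thresholds, the probability (over sampling uncertainty) that the net monetary benefit is positive.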
A deep learning approach for staging embryonic tissue isolates with small data
Machine learning approaches are becoming increasingly widespread and are now present in most areas of research. Their recent surge can be explained in part by our ability to generate and store enormous amounts of data with which to train these models. The requirement for large training sets also limits further potential applications of machine learning, particularly in fields where data tend to be scarce, such as developmental biology. However, recent research indicates that machine learning and Big Data can sometimes be decoupled, allowing models to be trained with modest amounts of data. In this work we set out to train a CNN-based classifier to stage zebrafish tail buds at four different stages of development using small, information-rich data sets. Our results show that two- and three-dimensional convolutional neural networks can be trained to stage developing zebrafish tail buds from both morphological and gene expression confocal microscopy images, in each case achieving up to 100% test accuracy. Importantly, we show that high accuracy can be achieved with data sets of under 100 images, much smaller than the typical training set for a convolutional neural network. Furthermore, our classifier shows that it is possible to stage isolated embryonic structures without referring to classic developmental landmarks in the whole embryo, which will be particularly useful for staging 3D in vitro culture systems such as organoids. We hope that this work provides a proof of principle that helps dispel the myth that large data sets are always required to train CNNs, and encourages researchers in fields where data are scarce to apply ML approaches as well.
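Training the 2D/3D CNNs themselves requires a deep learning framework, but one practical ingredient of small-data work like this can be sketched without one: with under 100 images across four stages, a train/test split must be stratified by class, or a stage can end up unrepresented in the test set. The counts and function below are illustrative assumptions, not the paper's pipeline:

```python
import random
from collections import defaultdict

def stratified_split(labels, test_fraction=0.2, seed=0):
    """Split sample indices into train/test sets while preserving the
    proportion of each class, which matters when there are only a few
    dozen images per developmental stage."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Hold out at least one image per class for testing.
        n_test = max(1, round(len(idxs) * test_fraction))
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

# 4 hypothetical stages x 24 images each, mimicking a sub-100-image data set.
labels = [stage for stage in range(4) for _ in range(24)]
train_idx, test_idx = stratified_split(labels)
```

With so few examples, reported accuracies are also sensitive to which images land in the test set, which is why small-data studies often repeat such splits or cross-validate.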
