3,387 research outputs found

    Effect of a doctor working during the festive period on population health: natural experiment using 60 years of Doctor Who episodes (the TARDIS study)

    OBJECTIVE: To examine the effect of a (fictional) doctor working during the festive period on population health. DESIGN: Natural experiment. SETTING: England, Wales, and the UK. MAIN OUTCOME MEASURES: Age standardised annual mortality rates in England, Wales, and the UK from 1963, when the BBC first broadcast Doctor Who, a fictional programme with a character called the Doctor who fights villains and intervenes to save others while travelling through space and time. Mortality rates were modelled in a time series analysis accounting for non-linear trends over time, and associations were estimated in relation to a new Doctor Who episode broadcast during the previous festive period, 24 December to 1 January. An interrupted time series analysis modelled the shift in mortality rates from 2005, when festive episodes of Doctor Who could be classed as a yearly Christmas intervention. RESULTS: 31 festive periods from 1963 have featured a new Doctor Who episode, including 14 broadcast on Christmas Day. In time series analyses, an association was found between broadcasts during the festive period and subsequent lower annual mortality rates. In particular, episodes shown on Christmas Day were associated with 0.60 fewer deaths per 1000 person years (95% confidence interval 0.21 to 0.99; P=0.003) in England and Wales and 0.40 fewer deaths per 1000 person years (0.08 to 0.73; P=0.02) in the UK. The interrupted time series analysis showed a strong shift (reduction) in mortality rates from 2005 onwards in association with the Doctor Who Christmas intervention, with a mean 0.73 fewer deaths per 1000 person years (0.21 to 1.26; P=0.01) in England and Wales and a mean 0.62 fewer deaths per 1000 person years (0.16 to 1.09; P=0.01) in the UK. CONCLUSIONS: A new Doctor Who episode shown every festive period, especially on Christmas Day, was associated with reduced mortality rates in England, Wales, and the UK, suggesting that a doctor working over the festive period could lower mortality rates. This finding reinforces why healthcare provision should not be taken for granted and may prompt the BBC and Disney+ to televise new episodes of Doctor Who every festive period, ideally on Christmas Day.
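The interrupted time series approach this study describes can be sketched as a segmented regression with a level-shift indicator from the intervention year onward. The sketch below uses entirely synthetic annual mortality rates (the study itself modelled non-linear trends); all numbers are invented for illustration:

```python
import numpy as np

# Illustrative interrupted time series on synthetic annual mortality rates:
# a linear time trend plus a level-shift indicator from the "intervention"
# year (2005 in the study) onward. Design matrix: [1, year_centred, post].
rng = np.random.default_rng(0)
years = np.arange(1963, 2024)
post = (years >= 2005).astype(float)

# Synthetic data: declining trend with an extra drop of 0.7 deaths per
# 1000 person years after 2005, plus noise (all values made up)
rates = 12.0 - 0.08 * (years - 1963) - 0.7 * post + rng.normal(0, 0.2, years.size)

X = np.column_stack([np.ones_like(years, dtype=float), years - 1963, post])
beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
print(f"estimated level shift after 2005: {beta[2]:.2f} deaths per 1000 person years")
```

With real data the post-2005 coefficient plays the role of the study's "Christmas intervention" shift, and its confidence interval would come from the usual regression standard errors.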

    Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples

    Organisations such as the National Institute for Health and Care Excellence require the synthesis of evidence from existing studies to inform their decisions—for example, about the best available treatments with respect to multiple efficacy and safety outcomes. However, relevant studies may not provide direct evidence about all the treatments or outcomes of interest. Multivariate and network meta-analysis methods provide a framework to address this, using correlated or indirect evidence from such studies alongside any direct evidence. In this article, the authors describe the key concepts and assumptions of these methods, outline how correlated and indirect evidence arises, and illustrate the contribution of such evidence in real clinical examples involving multiple outcomes and multiple treatments.
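The role of indirect evidence that the article describes can be illustrated with a toy calculation: under the consistency assumption, an A-versus-C effect can be estimated from A-versus-B and B-versus-C comparisons, with the variances adding. All numbers below are hypothetical:

```python
# Hypothetical illustration of indirect evidence in network meta-analysis.
# Under the consistency assumption the A-vs-C effect is estimated as
#   d_AC = d_AB + d_BC,   var(d_AC) = var(d_AB) + var(d_BC)
d_ab, var_ab = -0.30, 0.04   # e.g. log odds ratio A vs B (made-up numbers)
d_bc, var_bc = -0.20, 0.05   # log odds ratio B vs C (made-up numbers)

d_ac = d_ab + d_bc
se_ac = (var_ab + var_bc) ** 0.5
print(f"indirect A vs C estimate: {d_ac:.2f} (SE {se_ac:.2f})")
# prints: indirect A vs C estimate: -0.50 (SE 0.30)
```

The widening standard error shows why indirect evidence alone is weaker than direct evidence, and why network meta-analysis combines both when available.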

    2003 Manifesto on the California Electricity Crisis

    The authors, an ad-hoc group of professionals with experience in regulatory and energy economics, share a common concern with the continuing turmoil facing the electricity industry ("the industry") in California. Most of the authors endorsed the first California Electricity Manifesto issued on January 25, 2001. Almost two years have passed since that first Manifesto. While wholesale electric prices have moderated and California no longer faces the risk of blackouts, in many ways the industry is in worse shape now than it was at the start of 2001. As a result, the group of signatories continues to have a deep concern with the conflicting policy directions being pursued for the industry at both the State and Federal levels of government and the impact the uncertainties associated with these conflicting policies will have, long term, on the economy of California. The authors have once again convened under the auspices of the Institute of Management, Innovation and Organization at the University of California, Berkeley, to put forward their ideas on a basic set of necessary policies to move the industry forward for the benefit of all Californians and the nation. The authors point out that they do not pretend to be "representative." They do bring, however, a very diverse range of backgrounds and expertise. Keywords: Technology and Industry; Regulatory Reform

    Stability of clinical prediction models developed using statistical or machine learning methods

    Clinical prediction models estimate an individual's risk of a particular health outcome. A developed model is a consequence of the development dataset and model‐building strategy, including the sample size, number of predictors, and analysis method (e.g., regression or machine learning). We raise the concern that many models are developed using small datasets that lead to instability in the model and its predictions (estimated risks). We define four levels of model stability in estimated risks moving from the overall mean to the individual level. Through simulation and case studies of statistical and machine learning approaches, we show instability in a model's estimated risks is often considerable, and ultimately manifests itself as miscalibration of predictions in new data. Therefore, we recommend researchers always examine instability at the model development stage and propose instability plots and measures to do so. This entails repeating the model‐building steps (those used to develop the original prediction model) in each of multiple (e.g., 1000) bootstrap samples, to produce multiple bootstrap models, and deriving (i) a prediction instability plot of bootstrap model versus original model predictions; (ii) the mean absolute prediction error (mean absolute difference between individuals’ original and bootstrap model predictions); and (iii) calibration, classification, and decision curve instability plots of bootstrap models applied in the original sample. A case study illustrates how these instability assessments help reassure (or not) whether model predictions are likely to be reliable (or not), while informing a model's critical appraisal (risk of bias rating), fairness, and further validation requirements.
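The bootstrap instability check the abstract proposes can be sketched in a few lines. For simplicity this uses a deliberately crude "model" (event rates within quartile bins of a single predictor, with the bin edges held fixed) rather than a regression or machine learning method, and a small synthetic dataset; it illustrates the procedure, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_binned_model(x, y, edges):
    """'Model building': estimated risk = observed event rate within each bin of x."""
    idx = np.digitize(x, edges)
    return np.array([y[idx == k].mean() if np.any(idx == k) else y.mean()
                     for k in range(len(edges) + 1)])

def predict(model, x, edges):
    return model[np.digitize(x, edges)]

# Small synthetic development dataset (n = 150) with a binary outcome
n = 150
x = rng.normal(size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
edges = np.quantile(x, [0.25, 0.5, 0.75])  # bin edges kept fixed (simplification)

original = predict(fit_binned_model(x, y, edges), x, edges)

# Repeat the model-building steps in each bootstrap sample, then compare
# bootstrap-model predictions with original-model predictions in the
# original sample (the mean absolute prediction error instability measure)
B = 200
diffs = []
for _ in range(B):
    b = rng.integers(0, n, n)
    boot_model = fit_binned_model(x[b], y[b], edges)
    diffs.append(np.abs(predict(boot_model, x, edges) - original).mean())

mape = float(np.mean(diffs))
print(f"mean absolute prediction error (instability): {mape:.3f}")
```

Plotting the bootstrap predictions against the original predictions for each individual would give the prediction instability plot described above; a large spread signals that the dataset is too small for stable individual risk estimates.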

    Development and validation of risk prediction model for venous thromboembolism in postpartum women: multinational cohort study

    Objective: To develop and validate a risk prediction model for venous thromboembolism in the first six weeks after delivery (early postpartum). Design: Cohort study. Setting: Records from the England based Clinical Practice Research Datalink (CPRD) linked to Hospital Episode Statistics (HES), and data from the Swedish medical birth registry. Participants: All pregnant women registered with CPRD-HES linked data between 1997 and 2014 and the Swedish medical birth registry between 2005 and 2011 with postpartum follow-up. Main outcome measure: Multivariable logistic regression analysis was used to develop a risk prediction model for postpartum venous thromboembolism based on the English data, which was externally validated in the Swedish data. Results: 433 353 deliveries were identified in the English cohort and 662 387 in the Swedish cohort. The absolute rate of venous thromboembolism was 7.2 per 10 000 deliveries in the English cohort and 7.9 per 10 000 in the Swedish cohort. Emergency caesarean delivery, stillbirth, varicose veins, pre-eclampsia/eclampsia, postpartum infection, and comorbidities were the strongest predictors of venous thromboembolism in the final multivariable model. Discrimination of the model was similar in both cohorts, with a C statistic above 0.70, with excellent calibration of observed and predicted risks. The model identified more venous thromboembolism events than the existing national English (sensitivity 68% v 63%) and Swedish guidelines (30% v 21%) at similar thresholds. Conclusion: A new prediction model that quantifies absolute risk of postpartum venous thromboembolism has been developed and externally validated. It is based on clinical variables that are available in many developed countries at the point of delivery and could serve as the basis for real time decisions on obstetric thromboprophylaxis.
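As a generic illustration of how a multivariable logistic model such as this converts predictor values into an absolute risk, the sketch below applies the standard logistic formula. The coefficients and predictor names are invented for illustration and are not the published model's values:

```python
import math

# Hypothetical coefficients (log odds ratios) for illustration only;
# these are NOT the published model's estimates.
intercept = -7.5
coefs = {
    "emergency_caesarean": 1.0,
    "pre_eclampsia": 0.8,
    "varicose_veins": 0.7,
    "postpartum_infection": 0.9,
}

def predicted_risk(patient):
    """Absolute risk = 1 / (1 + exp(-(intercept + sum of active coefficients)))."""
    lp = intercept + sum(coefs[k] for k, v in patient.items() if v)
    return 1 / (1 + math.exp(-lp))

low = predicted_risk({"emergency_caesarean": 0})
high = predicted_risk({"emergency_caesarean": 1, "postpartum_infection": 1})
print(f"baseline risk {low:.5f}, high-risk profile {high:.5f}")
```

A thromboprophylaxis decision rule would then compare the predicted absolute risk against a chosen treatment threshold, which is how the sensitivity comparisons with the national guidelines above are made.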

    The science of clinical practice: disease diagnosis or patient prognosis? Evidence about "what is likely to happen" should shape clinical practice.

    BACKGROUND: Diagnosis is the traditional basis for decision-making in clinical practice. Evidence is often lacking about future benefits and harms of these decisions for patients diagnosed with and without disease. We propose that a model of clinical practice focused on patient prognosis and predicting the likelihood of future outcomes may be more useful. DISCUSSION: Disease diagnosis can provide crucial information for clinical decisions that influence outcome in serious acute illness. However, the central role of diagnosis in clinical practice is challenged by evidence that it does not always benefit patients and that factors other than disease are important in determining patient outcome. The concept of disease as a dichotomous 'yes' or 'no' is challenged by the frequent use of diagnostic indicators with continuous distributions, such as blood sugar, which are better understood as contributing information about the probability of a patient's future outcome. Moreover, many illnesses, such as chronic fatigue, cannot usefully be labelled from a disease-diagnosis perspective. In such cases, a prognostic model provides an alternative framework for clinical practice that extends beyond disease and diagnosis and incorporates a wide range of information to predict future patient outcomes and to guide decisions to improve them. Such information embraces non-disease factors and genetic and other biomarkers which influence outcome. SUMMARY: Patient prognosis can provide the framework for modern clinical practice to integrate information from the expanding biological, social, and clinical database for more effective and efficient care.

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is available currently, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.

    Development and external validation of the eFalls tool: a multivariable prediction model for the risk of ED attendance or hospitalisation with a fall or fracture in older adults

    Background: Falls are common in older adults and can devastate personal independence through injury such as fracture and fear of future falls. Methods to identify people for falls prevention interventions are currently limited, with high risks of bias in published prediction models. We have developed and externally validated the eFalls prediction model using routinely collected primary care electronic health records (EHR) to predict risk of emergency department attendance/hospitalisation with fall or fracture within 1 year. Methods: Data comprised two independent, retrospective cohorts of adults aged ≥65 years: the population of Wales, from the Secure Anonymised Information Linkage Databank (model development); and the population of Bradford and Airedale, England, from Connected Bradford (external validation). Predictors included electronic frailty index components, supplemented with variables informed by literature reviews and clinical expertise. Fall/fracture risk was modelled using multivariable logistic regression with a Least Absolute Shrinkage and Selection Operator penalty. Predictive performance was assessed through calibration, discrimination, and clinical utility. Apparent, internal–external cross-validation, and external validation performance were assessed across general practices and in clinically relevant subgroups. Results: The model's discrimination performance (c-statistic) was 0.72 (95% confidence interval, CI: 0.68 to 0.76) on internal–external cross-validation and 0.82 (95% CI: 0.80 to 0.83) on external validation. Calibration was variable across practices, with some over-prediction in the validation population (calibration-in-the-large, −0.87; 95% CI: −0.96 to −0.78). Clinical utility on external validation was improved after recalibration. Conclusion: The eFalls prediction model shows good performance and could support proactive stratification for falls prevention services if appropriately embedded into primary care EHR systems.
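The c-statistic reported for eFalls can be computed as the probability that a randomly chosen event case receives a higher predicted risk than a randomly chosen non-event case, with ties counting half. A minimal sketch on made-up predictions and outcomes:

```python
# Made-up predicted risks and observed outcomes (1 = fall/fracture), for illustration
preds  = [0.05, 0.20, 0.10, 0.40, 0.15, 0.30, 0.08, 0.25]
events = [0,    1,    0,    1,    0,    1,    0,    0   ]

def c_statistic(preds, events):
    """Probability that a randomly chosen event case has a higher predicted
    risk than a randomly chosen non-event case (ties count as 0.5)."""
    pairs = [(p1, p0) for p1, e1 in zip(preds, events) if e1 == 1
                      for p0, e0 in zip(preds, events) if e0 == 0]
    score = sum(1.0 if p1 > p0 else 0.5 if p1 == p0 else 0.0 for p1, p0 in pairs)
    return score / len(pairs)

print(f"c statistic: {c_statistic(preds, events):.2f}")
# prints: c statistic: 0.93
```

Calibration-in-the-large, the other metric quoted above, instead compares average predicted risk with average observed risk on the log-odds scale; a negative value indicates over-prediction, as seen in the external validation cohort.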