Patient access to complex chronic disease records on the internet
Background: Access to medical records on the Internet has been reported to be acceptable and popular with patients, although most published evaluations have been of primary care or office-based practice. We tested the feasibility and acceptability of making unscreened results and data from a complex chronic disease pathway (renal medicine) available to patients over the Internet in a project involving more than half of renal units in the UK.
Methods: The content and presentation of the Renal PatientView (RPV) system were developed with patient groups. It was designed to receive information from multiple local information systems and to require minimal extra work in units. After piloting in 4 centres in 2005, it was made available more widely. Opinions were sought from patients who enrolled and from those who did not via a paper survey, and from staff via an electronic survey. Anonymous data on enrolments and usage were extracted from the web server.
Results: By mid-2011, over 17,000 patients from 47 of the 75 renal units in the UK had registered. Users spanned a wide age range (<10 to >90 years) but were younger and had more years of education than non-users. They were enthusiastic about the concept, found it easy to use, and 80% felt it gave them a better understanding of their disease. The most common reason for not enrolling was being unaware of the system. A minority of patients had security concerns, and these were reduced after enrolling.
Staff responses were also strongly positive. Staff reported that RPV aided patient concordance and disease management, and increased the quality of consultations, with a neutral effect on consultation length. Neither patient nor staff responses suggested that RPV led to an overall increase in patient anxiety or to an increased burden on renal units beyond the time required to enrol each patient.
Conclusions: Patient Internet access to secondary care records concerning a complex chronic disease is feasible and popular, providing an increased sense of empowerment and understanding, with no serious negative consequences identified. Security concerns were present but rarely prevented participation. These are powerful reasons to make this type of access more widely available.
Acute kidney disease and renal recovery: consensus report of the Acute Disease Quality Initiative (ADQI) 16 Workgroup
Consensus definitions have been reached for both acute kidney injury (AKI) and chronic kidney disease (CKD), and these definitions are now routinely used in research and clinical practice. The KDIGO guideline defines AKI as an abrupt decrease in kidney function occurring over 7 days or less, whereas CKD is defined by the persistence of kidney disease for a period of >90 days. AKI and CKD are increasingly recognized as related entities and in some instances probably represent a continuum of the disease process. For patients in whom pathophysiologic processes are ongoing, the term acute kidney disease (AKD) has been proposed to define the course of disease after AKI; however, definitions of AKD and strategies for the management of patients with AKD are not currently available. In this consensus statement, the Acute Disease Quality Initiative (ADQI) proposes definitions and staging criteria for AKD, and strategies for the management of affected patients. We also make recommendations for areas of future research, which aim to improve understanding of the underlying processes and improve outcomes for patients with AKD.
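The time windows above lend themselves to a simple illustration. Below is a minimal sketch assuming duration of dysfunction is the only criterion; real KDIGO/ADQI staging also relies on functional criteria (e.g. serum creatinine, urine output), not duration alone.

```python
# Minimal sketch of the AKI / AKD / CKD time windows described in the abstract.
# Assumes duration is the only criterion, which is a simplification.
def classify_kidney_disease(days_of_dysfunction: int) -> str:
    """Classify ongoing kidney dysfunction purely by its duration in days."""
    if days_of_dysfunction <= 7:
        return "AKI"  # abrupt decrease in function over 7 days or less
    if days_of_dysfunction <= 90:
        return "AKD"  # proposed term for the course of disease after AKI
    return "CKD"      # kidney disease persisting for more than 90 days

for d in (3, 30, 120):
    print(d, "days ->", classify_kidney_disease(d))
```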
The predictive value of early behavioural assessments in pet dogs: a longitudinal study from neonates to adults
Studies on behavioural development in domestic dogs are relevant for matching puppies with the right families, identifying predispositions for behavioural problems at an early stage, and predicting suitability for service dog work or police or military service. The literature is, however, inconsistent regarding the predictive value of tests performed during the socialisation period. Additionally, some practitioners use tests with neonates to complement later assessments when selecting puppies as working dogs, but these have not been validated. Here we present longitudinal data on a cohort of Border collies, followed from neonate age until adulthood. A neonate test was conducted with 99 Border collie puppies aged 2–10 days to assess activity, vocalisations when isolated, and sucking force. At the age of 40–50 days, 134 puppies (including 93 tested as neonates) were given a puppy test at their breeders' homes. All dogs were adopted as pet dogs, and 50 of them participated in a behavioural test with their owners at the age of 1.5 to 2 years. Linear mixed models revealed little correspondence between individuals' behaviour in the neonate, puppy, and adult tests. Exploratory activity was the only behaviour significantly correlated between the puppy and the adult test. We conclude that the predictive validity of early tests for specific behavioural traits in adult pet dogs is limited.
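As a hedged illustration of this kind of analysis (not the authors' code), one could fit a linear mixed model relating puppy-test scores to adult-test scores with litter as a random effect; the file and column names below are hypothetical.

```python
# Sketch of a linear mixed model testing whether puppy-test behaviour predicts
# adult-test behaviour, with a random intercept per litter. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("border_collie_tests.csv")  # hypothetical; columns:
# dog_id, litter, puppy_exploration, adult_exploration

model = smf.mixedlm(
    "adult_exploration ~ puppy_exploration",  # fixed effect: puppy-test score
    data=df,
    groups=df["litter"],                      # random intercept per litter
)
result = model.fit()
print(result.summary())  # a significant puppy_exploration coefficient would
                         # indicate predictive value from puppy to adult test
```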
The role of informal dimensions of safety in high-volume organisational routines: an ethnographic study of test results handling in UK general practice
Background: The handling of laboratory, imaging and other test results in UK general practice is a high-volume organisational routine that is both complex and high risk. Previous research in this area has focused on errors and harm, but a complementary approach is to better understand how safety is achieved in everyday practice. This paper ethnographically examines the role of informal dimensions of test results handling routines in the achievement of safety in UK general practice, and how these findings can best be developed for wider application by policymakers and practitioners.
Methods: Non-participant observation was conducted of high-volume organisational routines across eight UK general practices with diverse organisational characteristics. Sixty-two semi-structured interviews were also conducted with key practice staff, alongside the analysis of relevant documents.
Results: While formal results handling routines were described similarly across the eight study practices, the everyday structure of how the routine should be enacted was understood informally. Results handling safety took a range of local forms depending on how different aspects of safety were prioritised, with practices varying in how they balanced thoroughness (i.e. ensuring high-quality management of results by the most appropriate clinician) and efficiency (i.e. timely management of results) depending on a range of factors (e.g. practice history, team composition). Each approach created its own potential risks, with demands for thoroughness reducing productivity and demands for efficiency reducing handling quality. Irrespective of the practice-level approach adopted, staff also regularly varied what they did for individual patients depending on the specific context (e.g. type of result, patient circumstances).
Conclusions: General practices variably prioritised a legitimate range of results handling safety processes and outcomes, each with differing strengths and trade-offs. Future safety improvement interventions should focus on maximising practice-level knowledge and understanding of the range of context-specific approaches available, and of the safety benefits and risks inherent in each, within the context of wider complex system conditions and interactions. This in turn has the potential to inform new kinds of proactive, contextually appropriate approaches to intervention development and implementation, focused on enhanced deliberation about the safety of existing high-volume routines.
Genetic variation and exercise-induced muscle damage: implications for athletic performance, injury and ageing
Prolonged unaccustomed exercise involving muscle-lengthening (eccentric) actions can result in ultrastructural muscle disruption, impaired excitation-contraction coupling, inflammation, and muscle protein degradation. This process is associated with delayed onset muscle soreness and is referred to as exercise-induced muscle damage. Although a certain amount of muscle damage may be necessary for adaptation to occur, excessive damage or inadequate recovery from exercise-induced muscle damage can increase injury risk, particularly in older individuals, who experience more damage and require longer to recover from muscle-damaging exercise than younger adults. Furthermore, inter-individual variation exists in the response to exercise-induced muscle damage, and there is evidence that genetic variability may play a key role. Although this area of research is in its infancy, certain gene variations, or polymorphisms, have been associated with exercise-induced muscle damage (i.e. individuals with certain genotypes experience greater muscle damage, and require longer recovery, following strenuous exercise). These polymorphisms include ACTN3 (R577X, rs1815739), TNF (-308 G>A, rs1800629), IL6 (-174 G>C, rs1800795), and IGF2 (ApaI, 17200 G>A, rs680). Knowing how someone is likely to respond to a particular type of exercise could help coaches and practitioners individualise the exercise training of their athletes or patients, thus maximising recovery and adaptation while reducing overload-associated injury risk. The purpose of this review is to provide a critical analysis of the literature concerning gene polymorphisms associated with exercise-induced muscle damage, in both young and older individuals, and to highlight the potential mechanisms underpinning these associations, thus providing a better understanding of exercise-induced muscle damage.
Risk of aortic aneurysm or dissection following use of fluoroquinolones: a retrospective multinational network cohort study
Background:
Fluoroquinolones (FQs) are commonly used to treat urinary tract infections (UTIs), but some studies have suggested they may increase the risk of aortic aneurysm or dissection (AA/AD). However, no large-scale international study has thoroughly assessed this risk.
Methods:
A retrospective cohort study was conducted using a large, distributed network analysis across 14 databases from five countries (United States, South Korea, Japan, Taiwan, and Australia). The study included 13,588,837 patients aged 35 years or older who initiated systemic fluoroquinolones (FQs) or comparator antibiotics (trimethoprim with or without sulfamethoxazole [TMP] or cephalosporins [CPHs]) for UTI treatment in the outpatient setting between January 1, 2010 and December 31, 2019. Patients were included if, at the index date, they had at least 365 days of prior observation and had not been hospitalised for any reason on, or within 7 days prior to, the index date. The primary outcome was AA/AD occurrence within 60 days of exposure, with secondary outcomes examining AA and AD separately. Cox proportional hazards models with 1:1 propensity score (PS) matching were used to estimate the risk, with results calibrated using negative control outcomes (a simplified sketch of this design follows this entry). Analyses were subjected to pre-defined study diagnostics, and only those passing all diagnostics were reported. Hazard ratios (HRs) were pooled using Bayesian random-effects meta-analysis.
Findings:
Among the analyses that passed diagnostics, there were 1,954,798 and 1,195,962 propensity-matched pairs for the FQ versus TMP and FQ versus CPH comparisons, respectively. Over the 60-day follow-up there was no significant difference in the risk of AA/AD between FQ and TMP (absolute rate difference [ARD], 0.21 per 1000 person-years; calibrated HR, 0.91 [95% CI 0.73–1.10]), nor between FQ and CPH (ARD, 0.11 per 1000 person-years; calibrated HR, 1.01 [95% CI 0.82–1.25]).
Interpretation:
This large-scale study used a rigorous design with objective diagnostics to address bias and confounding. No increased risk of AA/AD was associated with FQ use compared with TMP or CPH in patients treated for UTI in the outpatient setting. As only FQs used to treat UTIs in the outpatient setting were examined, the results may not be generalisable to other indications with differing severity.
Funding:
Yonsei University College of Medicine; the Government-wide R&D Fund project for infectious disease research (GFID), Republic of Korea; the National Health and Medical Research Council (NHMRC), Australian Government; and the Department of Veterans Affairs (VA) Informatics and Computing Infrastructure (VINCI), Department of Veterans Affairs, United States Government
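The sketch referenced in the Methods above: a deliberately simplified, hypothetical illustration of the propensity-matched Cox design, not the study's distributed-network pipeline (which used large-scale regularized PS models, empirical calibration against negative control outcomes, and Bayesian meta-analytic pooling). All file, column, and covariate names are assumptions.

```python
# Simplified sketch: fit a propensity score, 1:1 match FQ to comparator
# initiators, then estimate the hazard of AA/AD within 60 days with a Cox model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

df = pd.read_csv("uti_cohort.csv")  # hypothetical: one row per eligible patient
covariates = ["age", "sex", "diabetes", "hypertension"]  # illustrative subset

# 1. Propensity score: probability of initiating FQ given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["fq"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. 1:1 nearest-neighbour matching on the PS (with replacement and no caliper
#    here, unlike a production implementation).
treated = df[df["fq"] == 1]
control = df[df["fq"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]], ignore_index=True)

# 3. Cox proportional hazards model on the matched cohort; follow-up censored
#    at 60 days, so days_to_aad is assumed capped at 60.
cph = CoxPHFitter()
cph.fit(matched[["days_to_aad", "aad_event", "fq"]],
        duration_col="days_to_aad", event_col="aad_event")
cph.print_summary()  # exp(coef) for "fq" is the (uncalibrated) hazard ratio
```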
Hospital discharge communications during care transitions for patients with acute kidney injury: a cross-sectional study
Predictors of diagnostic transition from major depressive disorder to bipolar disorder: a retrospective observational network study
Many patients with bipolar disorder (BD) are initially misdiagnosed with major depressive disorder (MDD) and are treated with antidepressants, whose potential iatrogenic effects are widely discussed. It is unknown whether MDD is a comorbidity of BD or an earlier stage of it, and no consensus exists on individual conversion predictors, delaying BD's timely recognition and treatment. We aimed to build a predictive model of MDD-to-BD conversion and to validate it across a multinational network of patient databases using the standardization afforded by the Observational Medical Outcomes Partnership (OMOP) common data model. Five US "training" databases were retrospectively analyzed: IBM MarketScan CCAE, MDCR, MDCD, Optum EHR, and Optum Claims. Cyclops regularized logistic regression models were developed to predict one-year MDD-to-BD conversion using all standard covariates from the HADES PatientLevelPrediction package. Kaplan-Meier time-to-conversion analysis was performed up to a decade after the MDD diagnosis, stratified by model-estimated risk. External validation of the final prediction model was performed across 9 patient record databases within the international Observational Health Data Sciences and Informatics (OHDSI) network. The model's area under the curve (AUC) ranged from 0.633 to 0.745 (µ = 0.689) across the five US training databases. Nine variables predicted one-year MDD-to-BD transition; factors that increased risk were younger age, severe depression, psychosis, anxiety, substance misuse, self-harm thoughts/actions, and prior mental disorder. AUCs in the validation datasets ranged from 0.570 to 0.785 (µ = 0.664). The resulting assessment algorithm for MDD-to-BD conversion distinguishes up to 100-fold risk differences among patients and validates well across multiple international data sources.
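For readers unfamiliar with this modeling approach, here is a minimal stand-in sketch using scikit-learn's L1-penalized logistic regression in place of the HADES Cyclops package. The features mirror the risk factors named in the abstract; the file and column names are hypothetical.

```python
# Stand-in for a Cyclops-style regularized logistic regression on one-year
# MDD-to-BD conversion. Feature and file names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("mdd_cohort.csv")  # hypothetical extract: one row per patient
features = ["age", "severe_depression", "psychosis", "anxiety",
            "substance_misuse", "self_harm", "prior_mental_disorder"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["converted_to_bd_1yr"], test_size=0.25, random_state=0)

# L1 (lasso) regularization approximates the Cyclops-style penalized fit.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")  # the paper reports 0.633-0.745 in training
```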
Computerized clinical decision support systems for therapeutic drug monitoring and dosing: A decision-maker-researcher partnership systematic review
Background: Some drugs have a narrow therapeutic range and require monitoring and dose adjustments to optimize their efficacy and safety. Computerized clinical decision support systems (CCDSSs) may improve the net benefit of these drugs. The objective of this review was to determine whether CCDSSs improve processes of care or patient outcomes for therapeutic drug monitoring and dosing.
Methods: We conducted a decision-maker-researcher partnership systematic review. Studies from our previous review were included, and new studies were sought until January 2010 in MEDLINE, EMBASE, Evidence-Based Medicine Reviews, and Inspec databases. Randomized controlled trials assessing the effect of a CCDSS on process of care or patient outcomes were selected by pairs of independent reviewers. A study was considered to have a positive effect (i.e., the CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive.
Results: Thirty-three randomized controlled trials were identified, assessing the effect of a CCDSS on management of vitamin K antagonists (14), insulin (6), theophylline/aminophylline (4), aminoglycosides (3), digoxin (2), lidocaine (1), or as part of a multifaceted approach (3). Cluster randomization was rarely used (18%), and CCDSSs were usually stand-alone systems (76%) primarily used by physicians (85%). Overall, 18 of 30 studies (60%) showed an improvement in the process of care and 4 of 19 (21%) an improvement in patient outcomes. All evaluable studies assessing insulin dosing for glycaemic control showed an improvement. In meta-analysis, CCDSSs for vitamin K antagonist dosing significantly improved time in therapeutic range.
Conclusions: CCDSSs have potential for improving process of care for therapeutic drug monitoring and dosing, specifically insulin and vitamin K antagonist dosing. However, studies were small and generally of modest quality, effects on patient outcomes were uncertain, and there was no convincing benefit in the largest studies. At present, no firm recommendation for specific systems can be given. More potent CCDSSs need to be developed and should be evaluated by independent researchers using cluster randomization, primarily assessing patient outcomes related to drug efficacy and safety.
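The review's "positive effect" rule is a simple vote-counting criterion; as a small illustration (the helper name is ours, not the review's, and one boolean flag per relevant outcome is assumed):

```python
# Vote-counting rule: a trial counts as "positive" when at least 50% of its
# relevant outcomes showed a statistically significant improvement.
def study_is_positive(outcomes_significant: list[bool]) -> bool:
    """One boolean per relevant study outcome (assumes a non-empty list)."""
    return sum(outcomes_significant) >= 0.5 * len(outcomes_significant)

print(study_is_positive([True, True, False]))   # True: 2 of 3 outcomes improved
print(study_is_positive([True, False, False]))  # False: 1 of 3 outcomes improved
```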
Can computerized clinical decision support systems improve practitioners' diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review
Background: Underuse and overuse of diagnostic tests have important implications for health outcomes and costs. Decision support technology purports to optimize the use of diagnostic tests in clinical practice. The objective of this review was to assess whether computerized clinical decision support systems (CCDSSs) are effective at improving the ordering of tests for diagnosis, monitoring of disease, or monitoring of treatment. The outcome of interest was the effect on practitioners' diagnostic test-ordering behavior.
Methods: We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews database, Inspec, and reference lists for eligible articles published up to January 2010. We included randomized controlled trials comparing the use of CCDSSs to usual practice or non-CCDSS controls in clinical care settings. Trials were eligible if at least one component of the CCDSS gave suggestions for ordering or performing a diagnostic procedure. We considered studies 'positive' if they showed a statistically significant improvement in at least 50% of test-ordering outcomes.
Results: Thirty-five studies were identified, with significantly higher methodological quality in those published after the year 2000 (p = 0.002). Thirty-three trials reported evaluable data on diagnostic test ordering, and 55% (18/33) of CCDSSs improved testing behavior overall, including 83% (5/6) for diagnosis, 63% (5/8) for treatment monitoring, 35% (6/17) for disease monitoring, and 100% (3/3) for other purposes. Four of the systems explicitly attempted to reduce test-ordering rates, and all succeeded. Factors of particular interest to decision makers, including costs, user satisfaction, and impact on workflow, were rarely investigated or reported.
Conclusions: Some CCDSSs can modify practitioner test-ordering behavior. To better inform development and implementation efforts, studies should describe in more detail potentially important factors such as system design, user interface, local context, and implementation strategy, and should evaluate impact on user satisfaction, workflow, costs, and unintended consequences.
