Liver transplantation in the critically ill: a multicenter Canadian retrospective cohort study
Introduction: Critically ill cirrhosis patients awaiting liver transplantation (LT) often receive prioritization for organ allocation. Identification of patients most likely to benefit is essential. The purpose of this study was to examine whether the Sequential Organ Failure Assessment (SOFA) score can predict 90-day mortality in critically ill recipients of LT and whether it can predict receipt of LT among critically ill cirrhosis patients listed and awaiting LT. Methods: We performed a multicenter retrospective cohort study consisting of two datasets: (a) all critically ill cirrhosis patients requiring intensive care unit (ICU) admission before LT at five transplant centers in Canada from 2000 through 2009 (one site, 1990 through 2009), and (b) critically ill cirrhosis patients receiving LT from the ICU (n = 115) and those listed but not receiving LT before death (n = 106) from two centers where complete data were available. Results: In the first dataset, 198 critically ill cirrhosis patients receiving LT (mean (SD) age 53 (10) years, 66% male, median (IQR) model for end-stage liver disease (MELD) score 34 (26-39)) were included. Mean (SD) SOFA scores at ICU admission, at 48 hours, and at LT were 12.5 (4), 13.0 (5), and 14.0 (4), respectively. Survival at 90 days was 84% (n = 166). In multivariable analysis, only older age was independently associated with reduced 90-day survival (odds ratio (OR), 1.07; 95% CI, 1.01 to 1.14; P = 0.013). SOFA score did not predict 90-day mortality at any time point. In the second dataset, 47.9% (n = 106) of cirrhosis patients listed for LT died in the ICU waiting for LT. In multivariable analysis, a higher SOFA score at 48 hours after admission was independently associated with a lower probability of receiving LT (OR, 0.89; 95% CI, 0.82 to 0.97; P = 0.006). When serum lactate and SOFA at 48 hours were included in the final model, elevated lactate at 48 hours was also significantly associated with a lower likelihood of receiving LT (OR, 0.32; 95% CI, 0.17 to 0.61; P = 0.001).
Conclusions: SOFA appears poor at predicting 90-day survival in critically ill cirrhosis patients after LT, but a higher SOFA score and elevated lactate 48 hours after ICU admission are associated with a lower probability of receiving LT. Older critically ill cirrhosis patients (older than 60 years) receiving LT have worse 90-day survival and should be considered for LT with caution.
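For intuition on the age effect: under a standard logistic model a per-year odds ratio compounds multiplicatively, so a quick sketch (assuming the reported OR of 1.07 per year of age for 90-day mortality behaves this way) shows how it scales over an age gap:

```python
# Sketch: compounding a per-year odds ratio over an age difference.
# Assumes the reported OR (1.07 per year of age, for 90-day mortality)
# comes from a logistic model, so odds scale multiplicatively per unit.
or_per_year = 1.07

def compounded_or(or_per_unit, units):
    """Odds ratio implied for a covariate difference of `units`."""
    return or_per_unit ** units

# A 10-year age difference roughly doubles the odds of 90-day mortality.
print(round(compounded_or(or_per_year, 10), 2))  # → 1.97
```

This is only an illustration of how a seemingly small per-year OR becomes clinically meaningful over a decade; the study itself reports only the per-year estimate.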
Screening programmes for the early detection and prevention of oral cancer
Background: Oral cancer is an important global healthcare problem: its incidence is increasing and late-stage presentation is common. Screening programmes have been introduced for a number of major cancers and have proved effective in their early detection. Given the high morbidity and mortality rates associated with oral cancer, there is a need to determine the effectiveness of a screening programme for this disease, either as a targeted, opportunistic or population-based measure. Evidence exists from modelled data that a visual oral examination of high-risk individuals may be a cost-effective screening strategy, and the development and use of adjunctive aids and biomarkers is becoming increasingly common. Objectives: To assess the effectiveness of current screening methods in decreasing oral cancer mortality. Search strategy: The following electronic databases were searched: the Cochrane Oral Health Group Trials Register (to 20 May 2010), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2010, Issue 2), MEDLINE via OVID (1950 to 20 May 2010), EMBASE via OVID (1980 to 20 May 2010) and CANCERLIT via PubMed (1950 to 20 May 2010). There were no restrictions regarding language or date of publication. Selection criteria: Randomised controlled trials (RCTs) of screening for oral cancer or potentially malignant disorders using visual examination, toluidine blue, fluorescence imaging or brush biopsy. Data collection and analysis: The original review identified 1389 citations and this update identified an additional 330 studies, highlighting 1719 studies for consideration. Only one study met the inclusion criteria; validity assessment, data extraction and statistical evaluation were undertaken by six independent review authors. Main results: One 9-year RCT has been included (n = 13 clusters: 191,873 participants).
There was no statistically significant difference in the age-standardised oral cancer mortality rates between the screened group (16.4/100,000 person-years) and the control group (20.7/100,000 person-years). A 43% reduction in mortality was reported between the intervention cohort (29.9/100,000 person-years) and the control arm (45.4/100,000 person-years) for high-risk individuals who used tobacco or alcohol or both, which was statistically significant. However, this study had a number of methodological weaknesses and the associated risk of bias was high. Authors' conclusions: Although there is evidence that a visual examination as part of a population-based screening programme reduced the mortality rate of oral cancer in high-risk individuals, whilst producing a stage shift and improvement in survival rates across the population as a whole, the evidence is limited to one study and is associated with a high risk of bias. This was compounded by the fact that the effect of cluster randomisation was not accounted for in the analysis. Furthermore, no robust evidence was identified to support the use of other adjunctive technologies like toluidine blue, brush biopsy or fluorescence imaging within a primary care environment. Further randomised controlled trials are recommended to assess the efficacy, effectiveness and cost-effectiveness of a visual examination as part of a population-based screening programme. This review is published as a Cochrane Review in the Cochrane Database of Systematic Reviews 2010, Issue 11. Cochrane Reviews are regularly updated as new evidence emerges and in response to comments and criticisms, and the Cochrane Database of Systematic Reviews should be consulted for the most recent version of the Review.
The victim-witness experience in the Special Court for Sierra Leone.
This paper reports the findings of an interview study of 144 victim-witnesses who testified in the Special Court for Sierra Leone (SCSL). Witnesses expressed satisfaction with the preparation they received for testifying from their lawyers, particularly appreciating emotional support, as well as practical preparations. Victim-witnesses generally evaluated their interactions with all court staff positively, and reported feeling well-treated by the Court. The experience of cross-examination was difficult for a large proportion of witnesses in the current study, but an even larger group of witnesses reported the experience to be positive. For some witnesses, the experience of successfully coping with the challenge of cross-examination may be empowering. The feelings reportedly experienced by witnesses during their testimony are similarly mixed: a large proportion reported painful feelings, but others reported feeling confident, relieved and happy when they testified. The importance of continued post-testimony
contact with witnesses is supported by the current study; witnesses expressed a strong desire for ongoing contact with the SCSL. According to witnesses' own evaluations, their security was not negatively affected by their involvement with the court. This indicates that the SCSL has been largely successful in its attempt to protect the identities of those who testify in its trials.
Lifelong Cerebrovascular Disease Burden Among CADASIL Patients: Analysis From a Global Health Research Network
INTRODUCTION: Data reporting on patients with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) within the United States population is limited. We sought to evaluate the overt cerebrovascular disease burden among patients with CADASIL.
METHODS: Harmonized electronic medical records were extracted from the TriNetX global health research network. CADASIL patients were identified using diagnostic codes and those with/without history of documented stroke sub-types (ischemic stroke [IS], intracerebral hemorrhage [ICH], subarachnoid hemorrhage [SAH] and transient ischemic attack [TIA]) were compared. Adjusted odds ratios (OR) and 95% confidence intervals (CI) of stroke incidence and mortality associated with sex were computed.
RESULTS: Between September 2018 and April 2020, 914 CADASIL patients were identified (median [IQR] age: 60 [50-69], 61.3% females); of whom 596 (65.2%) had documented cerebrovascular events (i.e., CADASIL-Stroke patients). Among CADASIL-Stroke patients, 89.4% experienced an IS, co-existing with TIAs in 27.7% and hemorrhagic strokes in 6.2%; initial stroke events occurred ≤65 years of age in 71% of patients. CADASIL-Stroke patients (vs. CADASIL-non-Stroke) had higher cardiovascular and neurological (migraines, cognitive impairment, epilepsy/seizures, mood disorders) burden. In age- and comorbidity-adjusted models, males had higher associated risk of stroke onset (OR: 1.37, CI: 1.01-1.86). Mortality risk was higher for males (OR: 2.72, CI: 1.53-4.84).
DISCUSSION: Early screening and targeted treatment strategies are warranted to help CADASIL patients with symptom management and risk mitigation.
Clinical assessment to screen for the detection of oral cavity cancer and potentially malignant disorders in apparently healthy adults
Background: The early detection and excision of potentially malignant disorders (PMD) of the lip and oral cavity that require intervention may reduce malignant transformations (though will not totally eliminate malignancy occurring), or, if malignancy is detected during surveillance, there is some evidence that appropriate treatment may improve survival rates. Objectives: To estimate the diagnostic accuracy of conventional oral examination (COE), vital rinsing, light-based detection, biomarkers and mouth self examination (MSE), used singly or in combination, for the early detection of PMD or cancer of the lip and oral cavity in apparently healthy adults. Search methods: We searched MEDLINE (OVID) (1946 to April 2013) and four other electronic databases (the Cochrane Diagnostic Test Accuracy Studies Register, the Cochrane Oral Health Group's Trials Register, EMBASE (OVID), and MEDION) from inception to April 2013. The electronic databases were searched on 30 April 2013. There were no restrictions on language in the searches of the electronic databases. We conducted citation searches, and screened reference lists of included studies for additional references. Selection criteria: We selected studies that reported the diagnostic test accuracy of any of the aforementioned tests in detecting PMD or cancer of the lip or oral cavity. Diagnosis of PMD or cancer was made by specialist clinicians or pathologists, or alternatively through follow-up. Data collection and analysis: Two review authors independently screened titles and abstracts for relevance. Eligibility, data extraction and quality assessment were carried out by at least two authors independently and in duplicate. Studies were assessed for methodological quality using QUADAS-2. We reported the sensitivity and specificity of the included studies. Main results: Thirteen studies, recruiting 68,362 participants, were included. These studies evaluated the diagnostic accuracy of COE (10 studies) and MSE (two studies).
One randomised controlled trial of test accuracy directly evaluated COE and vital rinsing. There were no eligible diagnostic accuracy studies evaluating light-based detection or blood or salivary sample analysis (which tests for the presence of biomarkers of PMD and oral cancer). Given the clinical heterogeneity of the included studies in terms of the participants recruited, setting, prevalence of the target condition, the application of the index test and reference standard, and the flow and timing of the process, the data could not be pooled. For COE (10 studies, 25,568 participants), prevalence in the diagnostic test accuracy sample ranged from 1% to 51%. For the eight studies with prevalence of 10% or lower, the sensitivity estimates were highly variable, ranging from 0.50 (95% confidence interval (CI) 0.07 to 0.93) to 0.99 (95% CI 0.97 to 1.00), with uniform specificity estimates around 0.98 (95% CI 0.97 to 1.00). Estimates of sensitivity and specificity were 0.95 (95% CI 0.92 to 0.97) and 0.81 (95% CI 0.79 to 0.83) for one study with prevalence of 22%, and 0.97 (95% CI 0.96 to 0.98) and 0.75 (95% CI 0.73 to 0.77) for one study with prevalence of 51%. Three studies were judged to be at low risk of bias overall; two were judged to be at high risk of bias resulting from the flow and timing domain; and for five studies the overall risk of bias was judged as unclear, resulting from insufficient information to form a judgement for at least one of the four quality assessment domains. Applicability was of low concern overall for two studies; of high concern overall for three studies due to a high-risk population; and of unclear overall applicability for five studies. Estimates of sensitivity for MSE (two studies, 34,819 participants) were 0.18 (95% CI 0.13 to 0.24) and 0.33 (95% CI 0.10 to 0.65); specificity for MSE was 1.00 (95% CI 1.00 to 1.00) and 0.54 (95% CI 0.37 to 0.69).
One study (7975 participants) directly compared COE with COE plus vital rinsing in a randomised controlled trial. This study found a higher detection rate for oral cavity cancer in the COE plus vital rinsing arm. Authors' conclusions: The prevalence of the target condition both between and within index tests varied considerably. For COE, estimates of sensitivity over the range of prevalence levels varied widely. Observed estimates of specificity were more homogeneous. At the prevalence reported in the population (between 1% and 5%), index tests were better at correctly classifying the absence of PMD or oral cavity cancer in disease-free individuals than at classifying its presence in diseased individuals. Incorrectly classifying disease-free individuals as having the disease would have clinical and financial implications following inappropriate referral; incorrectly classifying individuals with the disease as disease-free will mean PMD or oral cavity cancer will only be diagnosed later, when the disease will be more severe. General dental practitioners and dental care professionals should remain vigilant for signs of PMD and oral cancer whilst performing routine oral examinations in practice.
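The conclusion about low-prevalence screening can be made concrete with a back-of-the-envelope calculation. The sketch below uses hypothetical sensitivity and specificity values within the ranges reported for COE and an assumed 2% prevalence; the function name and population size are illustrative only:

```python
# Sketch: predictive values of a screening test at low prevalence.
# Sensitivity/specificity are hypothetical values within the range the
# review reports for COE; population size and prevalence are assumed.
def predictive_values(sensitivity, specificity, prevalence, n=10_000):
    diseased = n * prevalence
    healthy = n - diseased
    tp = sensitivity * diseased        # true positives
    fn = diseased - tp                 # false negatives
    tn = specificity * healthy         # true negatives
    fp = healthy - tn                  # false positives
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    return ppv, npv

# At 2% prevalence, even a quite accurate test refers many disease-free
# people (low PPV) while ruling out disease very reliably (high NPV).
ppv, npv = predictive_values(sensitivity=0.85, specificity=0.98, prevalence=0.02)
print(f"PPV={ppv:.2f}, NPV={npv:.3f}")  # → PPV=0.46, NPV=0.997
```

This mirrors the review's point: at population prevalence, a positive COE result is far less informative than a negative one, which is why false referrals dominate the cost side of the trade-off.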
Stroke severity mediates the effect of socioeconomic disadvantage on poor outcomes among patients with intracerebral hemorrhage
Background: Socioeconomic deprivation drives poor functional outcomes after intracerebral hemorrhage (ICH). Stroke severity and background cerebral small vessel disease (CSVD) burden have each been linked to socioeconomic status and independently contribute to worse outcomes after ICH, providing distinct, plausible pathways for the effects of deprivation. We investigate whether admission stroke severity or CSVD burden mediates the effect of socioeconomic deprivation on 90-day functional outcomes. Methods: Electronic medical record data, including demographics, treatments, comorbidities, and physiological data, were analyzed. CSVD burden was graded from 0 to 4, with severe CSVD categorized as ≥3. High deprivation was defined as being in the top 30% of state-level area deprivation index scores. Severe disability or death was defined as a 90-day modified Rankin Scale score of 4–6. Stroke severity (NIH Stroke Scale (NIHSS)) was classified as: none (0), minor (1–4), moderate (5–15), moderate–severe (16–20), and severe (21+). Univariable and multivariable associations with severe disability or death were determined, with mediation evaluated through structural equation modelling. Results: A total of 677 patients were included (46.8% female; 43.9% White, 27.0% Black, 20.7% Hispanic, 6.1% Asian, 2.4% Other). In univariable modelling, high deprivation (odds ratio: 1.54; 95% confidence interval: [1.06–2.23]; p = 0.024), severe CSVD (2.14 [1.42–3.21]; p < 0.001), moderate (8.03 [2.76–17.15]; p < 0.001), moderate–severe (32.79 [11.52–93.29]; p < 0.001), and severe stroke (104.19 [37.66–288.12]; p < 0.001) were associated with severe disability or death. In multivariable modelling, severe CSVD (3.42 [1.75–6.69]; p < 0.001) and moderate (5.84 [2.27–15.01], p < 0.001), moderate–severe (27.59 [7.34–103.69], p < 0.001), and severe stroke (36.41 [9.90–133.85]; p < 0.001) independently increased odds of severe disability or death; high deprivation did not.
Stroke severity mediated 94.1% of deprivation's effect on severe disability or death (p = 0.005), while CSVD accounted for 4.9% (p = 0.524). Conclusion: CSVD contributed to poor functional outcome independent of socioeconomic deprivation, while stroke severity mediated the effects of deprivation. Improving awareness and trust among disadvantaged communities may reduce admission stroke severity and improve outcomes.
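The proportion-mediated figure comes from the study's structural equation model. As a rough illustration of the underlying idea (not the authors' method), the sketch below simulates a simple linear mediation chain with made-up coefficients and estimates the proportion of the exposure→outcome effect carried through the mediator as (total − direct) / total:

```python
import numpy as np

# Sketch: proportion mediated in a linear mediation chain.
# All coefficients and the linear/continuous setup are assumptions for
# illustration; the study used logistic outcomes within an SEM.
rng = np.random.default_rng(0)
n = 20_000
a, b, c = 0.8, 0.9, 0.05          # exposure→mediator, mediator→outcome, direct path
x = rng.normal(size=n)            # stand-in for deprivation
m = a * x + rng.normal(size=n)    # stand-in for stroke severity (mediator)
y = c * x + b * m + rng.normal(size=n)  # stand-in for poor outcome

# Total effect: slope of y on x alone.
total = np.polyfit(x, y, 1)[0]
# Direct effect: coefficient of x when adjusting for the mediator.
X = np.column_stack([x, m, np.ones(n)])
direct = np.linalg.lstsq(X, y, rcond=None)[0][0]

prop_mediated = (total - direct) / total
print(round(prop_mediated, 2))    # close to the true value c is small vs a*b
```

With these made-up coefficients the true proportion mediated is a·b / (c + a·b) ≈ 0.94, so the mediator carries nearly the whole effect, qualitatively like the 94.1% reported for stroke severity.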
Decade-Long Nationwide Trends and Disparities in Use of Comfort Care Interventions for Patients With Ischemic Stroke
Background Stroke remains one of the leading causes of disability and death in the United States. We characterized 10-year nationwide trends in use of comfort care interventions (CCIs) among patients with ischemic stroke, particularly pertaining to acute thrombolytic therapy with intravenous tissue-type plasminogen activator and endovascular thrombectomy, and describe in-hospital outcomes and costs. Methods and Results We analyzed the National Inpatient Sample from 2006 to 2015 and identified adult patients with ischemic stroke with or without thrombolytic therapy and CCIs using validate
Preliminary neurocognitive findings from a multi-site study investigating the long-term neurological impact of COVID-19 using ultra-high field 7 Tesla MRI-based neuroimaging
Background: Globally, over six hundred million cases of SARS-CoV-2 infection have been confirmed. As the number of individuals in recovery rises, examining long-term neurological effects, including cognitive impairment and cerebral microstructural and microvascular changes, has become paramount. We present preliminary cognitive findings from an ongoing multi-site study investigating the long-term neurological impacts of COVID-19 using 7 Tesla MRI-based neuroimaging.
Methods: Across 3 US and 1 UK sites, we identified adult (≥18 years) COVID-19 survivors (CS) and healthy controls (HC) without significant pre-existing medical, neurological, or psychiatric illness. Using the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set (UDS-3) battery and Norms Calculator, 12 cognitive scores were adjusted for age, sex, and education and classified as unimpaired, or as mildly (<9th percentile), moderately (<2nd percentile), or severely impaired (<1st percentile). The observed frequency of impairment across the two groups is reported along with proportional differences (PD) and confidence intervals (CI). Illness severity and time since infection were evaluated as potential correlates of cognitive impairment.
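The percentile cut-offs above translate directly into a small classifier; a minimal sketch (function name assumed, cut-offs taken from the text, checked from most to least severe so the narrower bands win):

```python
def classify_impairment(percentile):
    """Map a demographically adjusted percentile (0-100) to an impairment
    level using the cut-offs in the text: <1st severe, <2nd moderate,
    <9th mild, otherwise unimpaired."""
    if percentile < 1:
        return "severe"
    if percentile < 2:
        return "moderate"
    if percentile < 9:
        return "mild"
    return "unimpaired"

print(classify_impairment(0.5))   # → severe
print(classify_impairment(5))     # → mild
print(classify_impairment(50))    # → unimpaired
```

Ordering the checks from the strictest threshold upward is what makes the nested cut-offs (<1st is also <2nd and <9th) resolve to the most severe applicable category.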
Results: Over a period of 11 months, we enrolled 108 participants. At the time of reporting, 80 (46.3% female; mean age: 60.3 ± 8.6; 61 CS, 19 HC) had completed cognitive assessments. Of the participants for whom we ascertained time since symptom onset and illness severity (n=51 and 43, respectively), 31.4% had their index COVID-19 infection within the past year, and 60.5% had a severe to critical infection (Table 1). Table 2 reports observed frequency of impairment for each metric. Aggregating all 12 cognitive metrics, we found 45 (73.8%) of CS had at least one impairment [vs HC: 10 (52.6%)]. A significantly greater proportion of CS had at least one moderate to severe or severe impairment (Figure 1). CS also had significantly higher frequencies of presenting with two or more mild to severe impairments [PD 0.33 (0.13, 0.54)]. Illness severity and time since infection were not significantly associated with cognitive impairment.
Conclusion: Our preliminary results are consistent with potentially sustained COVID-associated cognitive impairment in a subset of participants. Enrollment in the multi-site cohort is ongoing, and updated results will be presented along with ultra-high field MRI-based neuroimaging correlates.
