Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.
Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was to prospectively develop a pragmatic prognostic model to stratify patients according to their risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors into the model, and internal validation was carried out by bootstrap validation. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death with each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
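As a rough illustration of the internal validation step described above, the sketch below fits a logistic model on synthetic data and applies a bootstrap optimism correction to the c-statistic. The predictor names mirror the six variables listed in the abstract, but all data, coefficients and the correction routine are illustrative assumptions, not the published model.

```python
# A minimal sketch of bootstrap-validated model discrimination, on synthetic
# data; none of the numbers below come from the study itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4544

# Synthetic stand-ins for the six predictors named in the abstract.
X = np.column_stack([
    rng.normal(65, 12, n),     # age (years)
    rng.integers(0, 2, n),     # sex (0 = female, 1 = male)
    rng.integers(1, 5, n),     # ASA grade (1-4)
    rng.normal(80, 20, n),     # preoperative eGFR (ml/min/1.73 m2)
    rng.integers(0, 2, n),     # planned open surgery (0/1)
    rng.integers(0, 2, n),     # preoperative ACE inhibitor or ARB (0/1)
])
# Illustrative outcome with roughly the 14 per cent AKI rate reported.
logit = -2.7 + 0.02 * X[:, 0] - 0.01 * X[:, 3] + 0.4 * X[:, 4] + 0.3 * X[:, 5]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism correction, in outline: refit on each resample and
# compare the resample AUC with the refit model's AUC on the original data.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    test = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(boot - test)

print(f"apparent c-statistic {apparent:.3f}, "
      f"optimism-corrected {apparent - np.mean(optimism):.3f}")
```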
Models of epidemics: when contact repetition and clustering should be included
Background
The spread of infectious disease is determined by biological factors, e.g. the duration of the infectious period, and social factors, e.g. the arrangement of potentially contagious contacts. Repetitiveness and clustering of contacts are known to be relevant factors influencing the transmission of droplet- or contact-transmitted diseases. However, it is not yet fully understood under what conditions repetitiveness and clustering must be included to model disease spread realistically.
Methods
We compare two types of individual-based models: one assumes random mixing without repetition of contacts, whereas the other assumes that the same contacts repeat day by day. The latter exists in two variants, with and without clustering. We systematically test and compare how the total size of an outbreak differs between these model types depending on the key parameters: transmission probability, number of contacts per day, duration of the infectious period, level of clustering and proportion of repetitive contacts.
Results
The simulation runs under different parameter constellations provide the following results: the difference between the two model types is greatest for low numbers of contacts per day and low transmission probabilities. The number of contacts and the transmission probability have a greater influence on this difference than the duration of the infectious period. Even when only a minor proportion of the daily contacts is repetitive and clustered, there can be relevant differences compared with a purely random mixing model.
Conclusion
We show that random mixing models provide acceptable estimates of the total outbreak size if the number of contacts per day is high or if the per-contact transmission probability is high, as seen in typical childhood diseases such as measles. In the case of very short infectious periods, for instance as in Norovirus, models assuming repeated contacts will also behave similarly to random mixing models. If the number of daily contacts or the transmission probability is low, as assumed for MRSA or Ebola, particular consideration should be given to the actual structure of potentially contagious contacts when designing the model.
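A minimal sketch of the comparison described in this abstract, assuming a toy SIR-style individual-based model: under random mixing each infectious individual draws fresh contacts every day, whereas the repeated-contacts variant keeps a fixed set of partners (here a simple ring lattice, one of many possible structures; clustering is omitted). All parameter values are illustrative.

```python
# Toy comparison of total outbreak size: random mixing vs. repeated contacts.
import random

def outbreak_size(n=1000, k=4, p=0.05, infectious_days=6,
                  repeated=False, seed=1):
    rng = random.Random(seed)
    if repeated:
        # Fixed partners: each node keeps the same k neighbours (ring lattice).
        partners = {i: [(i + d) % n for d in range(1, k // 2 + 1)]
                       + [(i - d) % n for d in range(1, k // 2 + 1)]
                    for i in range(n)}
    state = {i: "S" for i in range(n)}        # S, I or R
    clock = {0: infectious_days}              # days of infectiousness left
    state[0] = "I"
    while clock:
        new_inf = []
        for i in list(clock):
            contacts = (partners[i] if repeated
                        else rng.sample(range(n), k))  # fresh contacts daily
            for j in contacts:
                if state[j] == "S" and rng.random() < p:
                    new_inf.append(j)
            clock[i] -= 1
            if clock[i] == 0:                 # end of infectious period
                del clock[i]
                state[i] = "R"
        for j in new_inf:                     # apply the day's infections
            if state[j] == "S":
                state[j] = "I"
                clock[j] = infectious_days
    return sum(1 for s in state.values() if s != "S")

for rep in (False, True):
    sizes = [outbreak_size(repeated=rep, seed=s) for s in range(20)]
    print("repeated" if rep else "random  ", sum(sizes) / len(sizes))
```

With these low-contact, low-probability settings the repeated-contact variant should yield noticeably smaller outbreaks, consistent with the conclusion above.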
Extracting key information from historical data to quantify the transmission dynamics of smallpox
Background: Quantification of the transmission dynamics of smallpox is crucial for optimizing intervention strategies in the event of a bioterrorist attack. This article reviews basic methods and findings in mathematical and statistical studies of smallpox which estimate key transmission parameters from historical data. Main findings: First, critically important aspects in extracting key information from historical data are briefly summarized. We mention different sources of heterogeneity and potential pitfalls in utilizing historical records. Second, we discuss how smallpox spreads in the absence of interventions and how the optimal timing of quarantine and isolation measures can be determined. Case studies demonstrate the following. (1) The upper confidence limit of the 99th percentile of the incubation period is 22.2 days, suggesting that quarantine should last 23 days. (2) The highest frequency (61.8%) of secondary transmissions occurs 3–5 days after onset of fever, so that infected individuals should be isolated before the appearance of rash. (3) The U-shaped age-specific case fatality implies a vulnerability of infants and the elderly among non-immune individuals. Estimates of the transmission potential are subsequently reviewed, followed by an assessment of vaccination effects and of the expected effectiveness of interventions. Conclusion: Current debates on bioterrorism preparedness indicate that public health decision making must account for the complex interplay and balance between vaccination strategies and other public health measures (e.g. case isolation and contact tracing), taking into account the frequency of adverse events to vaccination. In this review, we summarize what has already been clarified and point out the need to analyse previous smallpox outbreaks systematically.
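The quarantine-length logic in case study (1) can be sketched as follows: estimate the 99th percentile of the incubation period distribution and take the upper limit of its bootstrap confidence interval, rounding up to whole days. The lognormal parameters below are synthetic placeholders, not the historical smallpox estimates.

```python
# Sketch: upper confidence limit of the 99th percentile of an incubation
# period, from synthetic data, used to set a quarantine length.
import math
import numpy as np

rng = np.random.default_rng(0)
# Synthetic incubation periods (days); lognormal with median ~12 days.
data = rng.lognormal(mean=math.log(12), sigma=0.25, size=200)

def p99(sample):
    # Parametric estimate: refit the lognormal, read off its 99th percentile
    # (z for the 99th percentile of a normal is ~2.326).
    logs = np.log(sample)
    return float(np.exp(logs.mean() + 2.326 * logs.std(ddof=1)))

point = p99(data)
boots = [p99(rng.choice(data, size=data.size, replace=True))
         for _ in range(2000)]
upper = float(np.percentile(boots, 97.5))

print(f"99th percentile {point:.1f} d, 95% upper limit {upper:.1f} d "
      f"-> quarantine {math.ceil(upper)} days")
```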
Why Pleiotropic Interventions are Needed for Alzheimer's Disease
Alzheimer's disease (AD) involves a complex pathological cascade thought to be initially triggered by the accumulation of β-amyloid (Aβ) peptide aggregates or aberrant amyloid precursor protein (APP) processing. Much is known of the factors initiating the disease process decades prior to the onset of cognitive deficits, but an incomplete understanding of the events immediately preceding and precipitating cognitive decline remains a major factor limiting the rapid development of adequate prevention and treatment strategies. Multiple pathways are known to contribute to cognitive deficits by disrupting the neuronal signal transduction pathways involved in memory; these pathways are altered by aberrant signaling, inflammation, oxidative damage, tau pathology, neuron loss, and synapse loss. We need to develop stage-specific interventions that not only block causal events in pathogenesis (aberrant tau phosphorylation, Aβ production and accumulation, and oxidative damage), but also address damage from these pathways that will not be reversed by targeting prodromal pathways. This approach would not only focus on blocking early events in pathogenesis, but also adequately correct for the loss of synapses, loss of substrates for neuroprotective pathways (e.g., docosahexaenoic acid), defects in energy metabolism, and the adverse consequences of inappropriate compensatory responses (aberrant sprouting). Monotherapy targeting single early steps in this complicated cascade may explain the disappointments in trials of agents inhibiting production, clearance, or aggregation of the initiating Aβ peptide or its aggregates. Both plaque and tangle pathogenesis have already reached AD levels in the more vulnerable brain regions during the “prodromal” period prior to conversion to “mild cognitive impairment” (MCI). Furthermore, many of the pathological events are no longer proceeding in series, but are going on in parallel. By the MCI stage, we stand a greater chance of success by considering pleiotropic drugs or cocktails that can independently limit the parallel steps of the AD cascade at all stages, but that do not completely inhibit the constitutive normal functions of these pathways. Based on this hypothesis, efforts in our laboratories have focused on the pleiotropic activities of omega-3 fatty acids and the anti-inflammatory, antioxidant, and anti-amyloid activity of curcumin in multiple models that cover many steps of the AD pathogenic cascade (Cole and Frautschy, Alzheimers Dement 2:284–286, 2006).
Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy
Background: The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods: In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results: Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89⋅6 per cent) than in countries with a middle (753 of 1242, 60⋅6 per cent; odds ratio (OR) 0⋅17, 95 per cent c.i. 0⋅14 to 0⋅21, P < 0⋅001) or low (363 of 860, 42⋅2 per cent; OR 0⋅08, 0⋅07 to 0⋅10, P < 0⋅001) HDI. Checklist use was less common for elective surgery than for emergency laparotomy in high-HDI countries (risk difference −9⋅4 (95 per cent c.i. −11⋅9 to −6⋅9) per cent; P < 0⋅001), but the relationship was reversed in low-HDI countries (+12⋅1 (+7⋅0 to +17⋅3) per cent; P < 0⋅001). In multivariable models, checklist use was associated with lower 30-day perioperative mortality (OR 0⋅60, 0⋅50 to 0⋅73; P < 0⋅001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion: Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
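A minimal sketch of the bootstrapped risk-difference comparison quoted above (e.g. checklist use for elective versus emergency surgery), using synthetic binary samples; the group sizes and rates below are illustrative only, not the study data.

```python
# Bootstrap percentile confidence interval for a risk difference between
# two groups of binary observations (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.random(2741) < 0.80    # e.g. elective surgery, checklist used
group_b = rng.random(2741) < 0.896   # e.g. emergency laparotomy, checklist used

def risk_diff(a, b):
    return a.mean() - b.mean()

point = risk_diff(group_a, group_b)
boots = [risk_diff(rng.choice(group_a, group_a.size),
                   rng.choice(group_b, group_b.size))
         for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"risk difference {100*point:+.1f}% "
      f"(95% CI {100*lo:+.1f} to {100*hi:+.1f})")
```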
The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study
AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after the treatment decision, in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only, and a sensitivity analysis explored the impact of longer delays. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male and more comorbid, and to have a higher body mass index, rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.
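As a sketch of the adjustment step reported above, the snippet below computes an odds ratio for delay against complete resection from a logistic model with illustrative covariates, assuming statsmodels; the data and variable set are synthetic placeholders, not the study's model.

```python
# Adjusted odds ratio with 95% CI from a logistic model (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 4304
delay = rng.integers(0, 2, n)                   # delayed > 4 weeks (0/1)
age = rng.normal(68, 11, n)                     # illustrative covariate
emergency = rng.random(n) < np.where(delay, 0.045, 0.225)
# Synthetic outcome near the ~92-94% complete-resection rates quoted.
p = 1 / (1 + np.exp(-(2.4 + 0.1 * delay - 0.8 * emergency)))
resected = rng.random(n) < p

X = sm.add_constant(np.column_stack([delay, age, emergency]))
fit = sm.Logit(resected.astype(float), X).fit(disp=0)
or_delay = np.exp(fit.params[1])                # coefficient for delay
ci = np.exp(fit.conf_int()[1])
print(f"adjusted OR for delay {or_delay:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```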
Can informal social distancing interventions minimize demand for antiviral treatment during a severe pandemic?
Diagnostic and treatment modalities for patients with cervical lymph node metastases of unknown primary site – current status and challenges
A comparative study of two types of latex surgical gloves in elective orthopaedic surgery
