Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.
Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was to prospectively develop a pragmatic prognostic model to stratify patients according to their risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors for the model, and internal validation was carried out by bootstrapping. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death at each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery, and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
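As an illustration of the internal-validation approach this abstract describes (a logistic regression prognostic model whose c-statistic is bootstrap-corrected for optimism), the following Python sketch fits a model on simulated data. The six predictors mirror the variables named above, but the data, coefficients and coding are entirely hypothetical, not the study's dataset or published model.

```python
# Sketch of logistic-regression model development with bootstrap internal
# validation (Harrell's optimism correction of the c-statistic / AUC).
# All variable names and the simulated data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4544  # cohort size reported in the abstract

# Simulated predictors standing in for the six selected variables
X = np.column_stack([
    rng.normal(65, 12, n),   # age
    rng.integers(0, 2, n),   # sex
    rng.integers(1, 5, n),   # ASA grade
    rng.normal(80, 20, n),   # preoperative eGFR
    rng.integers(0, 2, n),   # planned open surgery
    rng.integers(0, 2, n),   # ACE inhibitor / ARB use
])
# Simulated outcome loosely reflecting the reported 14 per cent AKI rate
logit = -2.9 + 0.02 * (X[:, 0] - 65) + 0.4 * X[:, 2] - 0.01 * (X[:, 3] - 80)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on each resample, then compare the resample
# AUC with the refitted model's AUC on the original data.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - np.mean(optimism)
print(f"apparent c-statistic {apparent_auc:.3f}, "
      f"optimism-corrected {corrected_auc:.3f}")
```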
Gender Differences in Relations of Smoking Status, Depression, and Suicidality in Korea: Findings from the Korea National Health and Nutrition Examination Survey 2008-2012
OBJECTIVE: As mental health problems may play an important role in initiating and maintaining cigarette smoking in females, and the number of female smokers is increasing, we evaluated the relationship between smoking status and mental health problems, including depression and suicidal ideation, among women in Korea. METHODS: We analyzed 5-year cumulative data (participants aged 19 years or older, n=32,184) from the Korea National Health and Nutrition Examination Survey (KNHANES) conducted from 2008 to 2012. Logistic regression analyses were used to evaluate associations between cigarette smoking status and mental health parameters while controlling for potentially confounding variables. RESULTS: Among current smokers, females showed a higher lifetime prevalence of depressive episodes, doctor-diagnosed major depression, current diagnosis of depression, and treatment for depression than males. In addition, females were more likely than males to report a depressive episode, suicidal ideation and attempts, and psychiatric counselling within the previous year. Female former smokers showed intermediate values on measures of mental health status within the previous year, ranking between lifetime non-smokers and current smokers. CONCLUSION: Identifying the factors related to mental health status among current smokers can create opportunities for early intervention, helping to reduce the prevalence of smoking and to increase smoking cessation rates, particularly among females. Developing adaptive coping strategies other than smoking in female youth is potentially important in reducing smoking initiation.
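The following Python sketch illustrates the type of analysis described in the Methods: a logistic regression relating smoking status to a mental-health outcome while adjusting for potential confounders, reported as an adjusted odds ratio with a 95% confidence interval. The data frame, variable names and covariates are hypothetical stand-ins, not KNHANES variables.

```python
# Confounder-adjusted logistic regression with odds-ratio reporting.
# Simulated data; variable names are illustrative, not survey items.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
smoker = rng.integers(0, 2, n)
age = rng.normal(45, 15, n)
# Simulated outcome with a built-in smoking effect (true OR ~ e^0.5 ~ 1.6)
p = 1 / (1 + np.exp(-(-1.5 + 0.5 * smoker + 0.01 * (age - 45))))
df = pd.DataFrame({
    "depression": (rng.random(n) < p).astype(int),
    "current_smoker": smoker,
    "age": age,
    "income_quintile": rng.integers(1, 6, n),
    "education_years": rng.integers(6, 19, n),
})

model = smf.logit(
    "depression ~ current_smoker + age + C(income_quintile) + education_years",
    data=df,
).fit(disp=False)

# Adjusted odds ratio and 95% CI for current smoking
or_smk = np.exp(model.params["current_smoker"])
lo, hi = np.exp(model.conf_int().loc["current_smoker"])
print(f"adjusted OR {or_smk:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```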
The stepped wedge trial design: a systematic review
BACKGROUND: Stepped wedge randomised trial designs involve sequential roll-out of an intervention to participants (individuals or clusters) over a number of time periods. By the end of the study, all participants will have received the intervention, although the order in which participants receive it is determined at random. The design is particularly relevant where it is predicted that the intervention will do more good than harm (making a parallel design, in which certain participants do not receive the intervention, unethical) and/or where, for logistical, practical or financial reasons, it is impossible to deliver the intervention simultaneously to all participants. Stepped wedge designs also offer a number of opportunities for data analysis, particularly for modelling the effect of time on the effectiveness of an intervention. This paper presents a review of 12 studies (or protocols) that use (or plan to use) a stepped wedge design. One aim of the review is to highlight the potential of the stepped wedge design, given its infrequent use to date. METHODS: Comprehensive literature review of studies or protocols using a stepped wedge design. Data were extracted from the studies in three categories for subsequent consideration: study information (epidemiology, intervention, number of participants), reasons for using a stepped wedge design, and methods of data analysis. RESULTS: The 12 studies included in this review describe evaluations of a wide range of interventions, across different diseases in different settings. However, the stepped wedge design appears to have found a niche for evaluating interventions in developing countries, specifically those concerned with HIV. There were few consistent motivations for employing a stepped wedge design, and methods of data analysis also varied across studies. The methodological descriptions of stepped wedge studies, including methods of randomisation, sample size calculations and methods of analysis, are not always complete. CONCLUSION: While the stepped wedge design offers a number of opportunities for use in future evaluations, a more consistent approach to reporting and data analysis is required.
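To make the design concrete, the sketch below simulates a small stepped wedge trial in Python: clusters cross from control to intervention at randomly ordered steps, and the intervention effect is estimated with a mixed model that adjusts for period effects. Cluster counts, step timing and effect sizes are illustrative assumptions, not drawn from any of the reviewed studies.

```python
# Minimal stepped wedge simulation and analysis. Each cluster switches to
# the intervention at a randomly assigned step; by the final period every
# cluster is treated. A random-intercept model with fixed period effects
# is one common analysis; all numbers here are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_clusters, n_periods, per_cell = 6, 7, 20

# Random ordering: cluster with rank k switches at step k + 1
order = rng.permutation(n_clusters)
rows = []
for c in range(n_clusters):
    step = order[c] + 1          # first intervention period for cluster c
    u = rng.normal(0, 0.5)       # random cluster effect
    for t in range(n_periods):
        treated = int(t >= step)
        for _ in range(per_cell):
            y = 1.0 + 0.1 * t + 0.8 * treated + u + rng.normal(0, 1)
            rows.append({"cluster": c, "period": t,
                         "treated": treated, "y": y})
df = pd.DataFrame(rows)

# Linear mixed model: fixed treatment and period effects,
# random intercept per cluster
m = smf.mixedlm("y ~ treated + C(period)", df, groups=df["cluster"]).fit()
print(f"treatment effect {m.params['treated']:.2f} "
      f"(SE {m.bse['treated']:.2f}); simulated true effect 0.8")
```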
Symptom Remission and Brain Cortical Networks at First Clinical Presentation of Psychosis: The OPTiMiSE Study
Individuals with psychoses have brain alterations, particularly in frontal and temporal cortices, that may be especially prominent, already at illness onset, in those more likely to have poorer symptom remission following treatment with the first antipsychotic. The identification of strong neuroanatomical markers of symptom remission could thus facilitate stratification and individualized treatment of patients with schizophrenia. We used magnetic resonance imaging at baseline to examine brain regional and network correlates of subsequent symptomatic remission in 167 medication-naïve or minimally treated patients with first-episode schizophrenia, schizophreniform disorder, or schizoaffective disorder entering a three-phase trial at seven sites. Patients in remission at the end of each phase were randomized to treatment as usual, with or without an adjunctive psychosocial intervention for medication adherence. The final follow-up visit was at 74 weeks. A total of 108 patients (70%) were in remission at Week 4, 85 (55%) at Week 22, and 97 (63%) at Week 74. We found no baseline differences in regional volumes, cortical thickness, surface area, or local gyrification between patients who did and did not achieve remission at any time point. However, patients not in remission at Week 74 showed, at baseline, reduced structural connectivity across frontal, anterior cingulate, and insular cortices. A similar pattern was evident in patients not in remission at Week 4 and Week 22, although it did not reach significance. Lack of symptom remission in first-episode psychosis is thus not associated with regional brain alterations at illness onset. Instead, as the illness becomes a stable entity, its association with altered organization of cortical gyrification becomes more defined.
Neurodevelopmental milestones and associated behaviours are similar among healthy children across diverse geographical locations
It is unclear whether early child development is, like skeletal growth, similar across diverse regions when health and nutrition are adequate. We prospectively assessed 1307 healthy, well-nourished 2-year-old children of educated mothers, enrolled in early pregnancy from urban areas without major socioeconomic or environmental constraints, in Brazil, India, Italy, Kenya and the UK. We used a specially developed psychometric tool, WHO motor milestones and visual tests. Similarities across sites were measured using variance components analysis and standardised site differences (SSD). In 14 of the 16 domains, the percentage of total variance explained by between-site differences ranged from 1.3% (cognitive score) to 9.2% (behaviour score). Of the 80 SSD comparisons, only six were >±0.50 units of the pooled SD for the corresponding item. The sequence and timing of attainment of neurodevelopmental milestones and associated behaviours in early childhood are therefore likely to be innate and universal, provided that nutritional and health needs are met.
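The variance-components idea in this abstract can be sketched as follows: fit a random-intercept model with site as the grouping factor, report the share of total variance attributable to between-site differences, and compute standardised site differences (SSD). The simulated scores and effect sizes below are illustrative only, not the study's data.

```python
# Variance components via a random-intercept model: the between-site share
# of total variance is site_var / (site_var + residual_var). Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
sites = ["Brazil", "India", "Italy", "Kenya", "UK"]
rows = []
for s in sites:
    site_effect = rng.normal(0, 0.2)        # small between-site shift
    for _ in range(260):                     # ~1307 children overall
        rows.append({"site": s,
                     "score": 10 + site_effect + rng.normal(0, 1)})
df = pd.DataFrame(rows)

m = smf.mixedlm("score ~ 1", df, groups=df["site"]).fit()
site_var = float(m.cov_re.iloc[0, 0])        # between-site variance
resid_var = m.scale                          # residual variance
pct_between = 100 * site_var / (site_var + resid_var)
print(f"{pct_between:.1f}% of total variance lies between sites")

# Standardised site differences: (site mean - pooled mean) / pooled SD
ssd = ((df.groupby("site")["score"].mean() - df["score"].mean())
       / df["score"].std())
print(ssd.round(2))
```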
Prospective, randomized, double-blind, multi-center, Phase III clinical study on transarterial chemoembolization (TACE) combined with Sorafenib® versus TACE plus placebo in patients with hepatocellular cancer before liver transplantation – HeiLivCa [ISRCTN24081794]
Background: Disease progression of hepatocellular cancer (HCC) in patients eligible for liver transplantation (LTx) occurs in up to 50% of cases, resulting in withdrawal from the LTx waiting list. Transarterial chemoembolization (TACE) is used as a bridging therapy, with highly variable response rates. The oral multikinase inhibitor sorafenib significantly increases overall survival and time-to-progression in patients with advanced hepatocellular cancer. Design: The HeiLivCa study is a double-blinded, controlled, prospective, randomized, multi-centre phase III trial. Patients in study arm A will be treated with transarterial chemoembolization plus sorafenib 400 mg bid; patients in study arm B will be treated with transarterial chemoembolization plus placebo. A total of 208 patients with hepatocellular carcinoma confirmed histologically or diagnosed according to EASL criteria will be enrolled. An interim analysis will be performed after 60 events. Evaluation of the primary endpoint, time-to-progression (TTP), will be performed at 120 events. Secondary endpoints are the number of patients reaching LTx, disease control rates, overall survival, progression-free survival, quality of life, toxicity and safety. Discussion: As TACE is the most widely used primary treatment of HCC before LTx, and sorafenib is the only systemic treatment of proven efficacy for advanced HCC, there is a strong rationale for combining both treatment modalities. This study is designed to reveal potential superiority of combined TACE plus sorafenib treatment over TACE alone and to explore a new neo-adjuvant treatment concept in HCC before LTx.
Metformin: historical overview
Metformin (dimethylbiguanide) has become the preferred first-line oral blood glucose-lowering agent for managing type 2 diabetes. Its history is linked to Galega officinalis (also known as goat's rue), a traditional herbal medicine in Europe found to be rich in guanidine, which, in 1918, was shown to lower blood glucose. Guanidine derivatives, including metformin, were synthesised and some (not metformin) were used to treat diabetes in the 1920s and 1930s, but they were discontinued owing to toxicity and the increased availability of insulin. Metformin was rediscovered in the search for antimalarial agents in the 1940s and, during clinical tests, proved useful in treating influenza, when it was noted sometimes to lower blood glucose. This property was pursued by the French physician Jean Sterne, who first reported the use of metformin to treat diabetes in 1957. However, metformin received limited attention, as it was less potent than other glucose-lowering biguanides (phenformin and buformin), which were generally discontinued in the late 1970s because of a high risk of lactic acidosis. Metformin's future was precarious, its reputation tarnished by association with the other biguanides despite evident differences. The ability of metformin to counter insulin resistance and address adult-onset hyperglycaemia without weight gain or increased risk of hypoglycaemia gradually gathered credence in Europe, and after intensive scrutiny metformin was introduced into the USA in 1995. Long-term cardiovascular benefits of metformin were identified by the UK Prospective Diabetes Study (UKPDS) in 1998, providing a new rationale for adopting metformin as initial therapy to manage hyperglycaemia in type 2 diabetes. Sixty years after its introduction in diabetes treatment, metformin has become the most prescribed glucose-lowering medicine worldwide, with the potential for further therapeutic applications.
An initial application of computerized adaptive testing (CAT) for measuring disability in patients with low back pain
Background: Recent approaches to outcome measurement involving computerized adaptive testing (CAT) offer a way of measuring disability in low back pain (LBP) that can reduce the burden on patient and professional. The aim of this study was to explore the potential of CAT in LBP for measuring disability as defined in the International Classification of Functioning, Disability and Health (ICF), which encompasses impairments, activity limitation and participation restriction. Methods: 266 patients with low back pain answered questions from a range of widely used questionnaires. An exploratory factor analysis (EFA) was used to identify disability dimensions, which were then subjected to Rasch analysis. Reliability was tested by internal consistency and the person separation index (PSI). Discriminant validity of disability levels was evaluated by the Spearman correlation coefficient (r), the intraclass correlation coefficient [ICC(2,1)] and the Bland-Altman approach. A CAT was developed for each dimension, and the results were checked against simulated and real applications from a further 133 patients. Results: Factor analytic techniques identified two dimensions, named "body functions" and "activity-participation". After deletion of some items for failure to fit the Rasch model, the remaining items were mostly free of differential item functioning (DIF) for age and gender. Reliability exceeded 0.90 for both dimensions. The disability levels generated using all items and those obtained from the real CAT application were highly correlated (>0.97 for both dimensions). On average, 19 and 14 items were needed to estimate precise disability levels with the initial CAT for the first and second dimensions, respectively. However, permitting a marginal increase in the standard error of the estimate across successive iterations substantially reduced the number of items required. Conclusion: Using a combined approach of EFA and Rasch analysis, this study has shown that it is possible to calibrate items onto a single metric in a way that can provide the basis of a CAT application. There is thus an opportunity to obtain a wide variety of information with which to evaluate the biopsychosocial model in its more complex forms, without necessarily increasing the burden of information collection for patients.
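The CAT logic described above can be sketched for a Rasch (1PL) item bank: repeatedly administer the most informative remaining item at the current ability estimate, re-estimate ability, and stop once the standard error falls below a threshold. The item difficulties, stopping rule and grid-search estimator below are simplifying assumptions for illustration, not the study's implementation.

```python
# Minimal CAT loop for a Rasch (1PL) model. Item information p(1-p) peaks
# where item difficulty matches ability, so the most informative unused
# item is the one whose difficulty is closest to the current estimate.
# Difficulties and responses are simulated, not calibrated study items.
import numpy as np

rng = np.random.default_rng(4)
difficulties = rng.normal(0, 1.2, 40)   # hypothetical calibrated item bank
true_theta = 0.7                        # simulated respondent ability

def p_correct(theta, b):
    return 1 / (1 + np.exp(-(theta - b)))

grid = np.linspace(-4, 4, 161)          # grid for the ML ability estimate
administered, responses = [], []
theta_hat, se = 0.0, np.inf

while se > 0.5 and len(administered) < len(difficulties):
    # Select the unused item best targeted at the current estimate
    unused = [i for i in range(len(difficulties)) if i not in administered]
    item = min(unused, key=lambda i: abs(difficulties[i] - theta_hat))
    administered.append(item)
    responses.append(rng.random() < p_correct(true_theta, difficulties[item]))

    # Grid-search maximum likelihood estimate of ability
    b = difficulties[administered]
    r = np.array(responses, dtype=float)
    p = p_correct(grid[:, None], b[None, :])
    loglik = (r * np.log(p) + (1 - r) * np.log(1 - p)).sum(axis=1)
    theta_hat = grid[np.argmax(loglik)]

    # Standard error from the test information at the estimate
    info = (p_correct(theta_hat, b) * (1 - p_correct(theta_hat, b))).sum()
    se = 1 / np.sqrt(info)

print(f"{len(administered)} items, theta={theta_hat:.2f}, SE={se:.2f}")
```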
