Risk factors for methamphetamine use in youth: a systematic review
Background: Methamphetamine (MA) is a potent, readily available stimulant. Its effects are similar to those of cocaine, but the drug has a profile associated with increased acute and chronic toxicities. The objective of this systematic review was to identify and synthesize the literature on risk factors associated with MA use among youth. Methods: More than 40 electronic databases, websites, and key journals/meeting abstracts were searched. We included studies that compared children and adolescents (≤ 18 years) who used MA to those who did not. One reviewer extracted the data and a second checked for completeness and accuracy. For discrete risk factors, odds ratios (OR) were calculated and, when appropriate, a pooled OR with 95% confidence interval (95% CI) was calculated. For continuous risk factors, mean differences and 95% CIs were calculated and, when appropriate, a weighted mean difference (WMD) and 95% CI was calculated. Results were presented separately by comparison group: low-risk children (no previous drug abuse) and high-risk children (reported previous drug abuse or were recruited from a juvenile detention center). Results: Twelve studies were included. Among low-risk youth, factors associated with MA use were: history of heroin/opiate use (OR = 29.3; 95% CI: 9.8–87.8), family history of drug use (OR = 4.7; 95% CI: 2.8–7.9), risky sexual behavior (OR = 2.79; 95% CI: 2.25–3.46), and some psychiatric disorders. Histories of alcohol use and smoking were also significantly associated with MA use. Among high-risk youth, factors associated with MA use were: family history of crime (OR = 2.0; 95% CI: 1.2–3.3), family history of drug use (OR = 4.7; 95% CI: 2.8–7.9), family history of alcohol abuse (OR = 3.2; 95% CI: 1.8–5.6), and psychiatric treatment (OR = 6.8; 95% CI: 3.6–12.9). Female sex was also significantly associated with MA use. Conclusion: Among low-risk youth, a history of engaging in a variety of risky behaviors was significantly associated with MA use. A history of a psychiatric disorder was a risk factor for MA use among both low- and high-risk youth. Family environment was also associated with MA use. Many of the included studies were cross-sectional, making it difficult to assess causation. Future research should use prospective designs so that temporal relationships between risk factors and MA use can be established.
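The odds-ratio arithmetic described in the methods follows directly from 2×2 counts. A minimal sketch in Python; the counts are illustrative, not data from the review, and the fixed-effect inverse-variance pooling shown is one standard approach, not necessarily the one the authors used:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a = MA users with the risk factor,    b = MA users without it,
    c = non-users with the risk factor,   d = non-users without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

def pooled_or(tables, z=1.96):
    """Fixed-effect (inverse-variance) pooled OR across several 2x2 tables."""
    ws, wlogs = 0.0, 0.0
    for a, b, c, d in tables:
        w = 1.0 / (1/a + 1/b + 1/c + 1/d)   # weight = 1 / Var(log OR)
        ws, wlogs = ws + w, wlogs + w * math.log((a * d) / (b * c))
    log_or, se = wlogs / ws, math.sqrt(1.0 / ws)
    return math.exp(log_or), math.exp(log_or - z * se), math.exp(log_or + z * se)

# Illustrative: two hypothetical studies of the same risk factor.
print(odds_ratio_ci(30, 10, 50, 110))
print(pooled_or([(30, 10, 50, 110), (22, 14, 40, 90)]))
```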
The significance of the sense of coherence for various coping resources in stress situations used by police officers in on-the-beat service
Background: Police officers meet many stressors as part of their occupation. The psychological resource "sense of coherence" (SOC) protects against ill-health, but its impact on coping resources for stress situations has not been studied in the population of police officers. Different approaches to investigating the significance of SOC for different outcomes have been identified in the literature, leading to some difficulty in interpreting and generalizing results. The aim was therefore to explore SOC and coping resources, and to examine the significance of SOC for various coping resources for stress, using different models, in a sample of Swedish police officers providing on-the-beat service. Materials and Methods: One hundred and one police officers (age: mean = 33 years, SD = 8; 29 females) were included, and the Orientation to Life Questionnaire (SOC-29) and the Coping Resources Inventory (CRI) were used. The dependent variable in each regression analysis was one of the coping resources: cognitive, social, emotional, spiritual/philosophical, physical, or a global resource. Global SOC-29 and/or its components (comprehensibility, manageability, and meaningfulness) were investigated as independent variables. Results: All CRI and SOC-29 scores except that for spiritual/philosophical resources were higher than those of reference groups. Manageability was the most important component of SOC for the various coping resources in stress situations used by police officers. Conclusion: A deeper study of manageability would give useful information, because this component of SOC is particularly significant in explaining the variation in the resources police officers use to cope with stress. Salutogenesis, the origin of well-being, should be a greater focus of future research on workplaces with high levels of occupational stress.
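The per-resource regression design described above is straightforward to reproduce. A minimal sketch assuming pandas/statsmodels; the file name and column names are hypothetical stand-ins for the study's data and coding:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("police_soc_cri.csv")  # hypothetical file: one row per officer

# One OLS model per coping resource (the dependent variable), with the three
# SOC-29 components as independent variables.
resources = ["cognitive", "social", "emotional",
             "spiritual_philosophical", "physical", "global_resource"]
for outcome in resources:
    fit = smf.ols(f"{outcome} ~ comprehensibility + manageability + meaningfulness",
                  data=df).fit()
    # Inspect the manageability coefficient, the component the study highlights.
    print(outcome, round(fit.params["manageability"], 3),
          round(fit.pvalues["manageability"], 3))
```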
Perivascular Fat and the Microcirculation: Relevance to Insulin Resistance, Diabetes, and Cardiovascular Disease
Type 2 diabetes and its major risk factor, obesity, are a growing burden for public health. The mechanisms that connect obesity and its related disorders, such as insulin resistance, type 2 diabetes, and hypertension, are still undefined. Microvascular dysfunction may be a pathophysiologic link between insulin resistance and hypertension in obesity. Many studies have shown that adipose tissue-derived substances (adipokines) interact with (micro)vascular function and influence insulin sensitivity. In the past, research focused on adipokines from perivascular adipose tissue (PVAT). In this review, we focus on the interactions between adipokines, predominantly those from PVAT, and microvascular function in relation to the development of insulin resistance, diabetes, and cardiovascular disease.
Pitfalls in computer housekeeping by doctors and nurses in KwaZulu-Natal: No malicious intent
Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification
Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d’ or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
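The contrast between the two measures is easy to make concrete. A sketch under one particular theory, the equal-variance Gaussian signal-detection model (just one of the theories the article considers); the ROC points are hypothetical, and eyewitness ROCs are typically partial, so researchers often report partial AUC rather than the full-range trapezoid used here:

```python
from math import sqrt
from statistics import NormalDist

def empirical_auc(points):
    """Trapezoid-rule area under an ROC given (false ID rate, correct ID rate)
    points, anchored at (0, 0) and (1, 1)."""
    pts = [(0.0, 0.0)] + sorted(points) + [(1.0, 1.0)]
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def theoretical_auc(d_prime):
    """Under the equal-variance Gaussian model, AUC = Phi(d' / sqrt(2))."""
    return NormalDist().cdf(d_prime / sqrt(2))

# Hypothetical lineup ROC points swept out by confidence criteria.
print(empirical_auc([(0.05, 0.40), (0.10, 0.55), (0.20, 0.65)]))  # ~0.75
print(theoretical_auc(1.5))                                       # ~0.86
```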
"You feel dirty a lot of the time" : policing 'dirty work', contamination and purification rituals
Following the controversial adoption of spit-hoods by some UK police forces, most recently by the London Metropolitan Police in February 2019, this article contributes to and extends debates on physical and symbolic contamination by drawing on established considerations of ‘dirty work’. The article argues that, for police officers, cleansing rituals are personal and subjective. As members of a relatively high-prestige occupation, police officers occupy a unique position in that they are protected by a status shield. Reflections from this ethnographic study suggest that the police uniform can be used as a vehicle for contamination and that staff employ purification rituals and methods of taint management.
Effects of pre-operative isolation on postoperative pulmonary complications after elective surgery: an international prospective cohort study
We aimed to determine the impact of pre-operative isolation on postoperative pulmonary complications after elective surgery during the global SARS-CoV-2 pandemic. We performed an international prospective cohort study including patients undergoing elective surgery in October 2020. Isolation was defined as the period before surgery during which patients did not leave their house or receive visitors from outside their household. The primary outcome was postoperative pulmonary complications, adjusted in multivariable models for measured confounders. Pre-defined sub-group analyses were performed for the primary outcome. A total of 96,454 patients from 114 countries were included and overall, 26,948 (27.9%) patients isolated before surgery. Postoperative pulmonary complications were recorded in 1947 (2.0%) patients, of which 227 (11.7%) were associated with SARS-CoV-2 infection. Patients who isolated pre-operatively were older, had more respiratory comorbidities and were more commonly from areas of high SARS-CoV-2 incidence and from high-income countries. Although the overall rates of postoperative pulmonary complications were similar in those who isolated and those who did not (2.1% vs 2.0%, respectively), isolation was associated with higher rates of postoperative pulmonary complications after adjustment (adjusted OR 1.20, 95%CI 1.05-1.36, p = 0.005). Sensitivity analyses revealed no further differences when patients were categorised by: pre-operative testing; use of COVID-19-free pathways; or community SARS-CoV-2 prevalence. The rate of postoperative pulmonary complications increased with periods of isolation longer than 3 days, with ORs (95%CI) at 4-7 days and ≥ 8 days of 1.25 (1.04-1.48), p = 0.015, and 1.31 (1.11-1.55), p = 0.001, respectively. Isolation before elective surgery might be associated with a small but clinically important increased risk of postoperative pulmonary complications. Longer periods of isolation showed no reduction in the risk of postoperative pulmonary complications. These findings have significant implications for the global provision of elective surgical care.
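An adjusted odds ratio like the 1.20 reported above typically comes from a logistic model. A minimal sketch assuming statsmodels; the file and variable names are hypothetical, and the study's full confounder set is not reproduced here:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical: one row per patient

# Logistic regression: postoperative pulmonary complication (0/1) on
# pre-operative isolation (0/1), adjusted for measured confounders.
fit = smf.logit("pulm_complication ~ isolated + age + resp_comorbidity"
                " + high_incidence_area + high_income_country", data=df).fit()

adj_or = np.exp(fit.params["isolated"])          # adjusted OR
lo, hi = np.exp(fit.conf_int().loc["isolated"])  # 95% CI on the OR scale
print(f"adjusted OR {adj_or:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```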
JPN Guidelines for the management of acute pancreatitis: epidemiology, etiology, natural history, and outcome predictors in acute pancreatitis
Acute pancreatitis is a common disease with an annual incidence of between 5 and 80 people per 100 000 of the population. The two major etiological factors responsible for acute pancreatitis are alcohol and cholelithiasis (gallstones). The proportion of patients with pancreatitis caused by alcohol or gallstones varies markedly between countries and regions. The incidence of acute alcoholic pancreatitis is considered to be associated with high alcohol consumption. Although the incidence of alcoholic pancreatitis is much higher in men than in women, there is no difference between the sexes in risk after adjusting for alcohol intake. Other risk factors include endoscopic retrograde cholangiopancreatography, surgery, therapeutic drugs, HIV infection, hyperlipidemia, and biliary tract anomalies. Idiopathic acute pancreatitis is defined as acute pancreatitis in which the etiological factor cannot be specified; however, several studies have suggested that this entity includes cases caused by other specific disorders, such as microlithiasis. Acute pancreatitis is a potentially fatal disease, with an overall mortality of 2.1%–7.8%. The outcome of acute pancreatitis is determined by two factors that reflect the severity of the illness: organ failure and pancreatic necrosis. About half of the deaths in patients with acute pancreatitis occur within the first 1–2 weeks and are mainly attributable to multiple organ dysfunction syndrome (MODS). Depending on patient selection, necrotizing pancreatitis develops in approximately 10%–20% of patients, and mortality among these patients is high, ranging from 14% to 25%. Infected pancreatic necrosis develops in 30%–40% of patients with necrotizing pancreatitis, and the incidence of MODS in such patients is high. The recurrence rate of acute pancreatitis is relatively high: almost half of the patients with acute alcoholic pancreatitis experience a recurrence. When gallstones are not treated, the risk of recurrence in gallstone pancreatitis ranges from 32% to 61%. After recovering from acute pancreatitis, about one-third to one-half of patients develop functional disorders, such as diabetes mellitus and fatty stool; the incidence of chronic pancreatitis after acute pancreatitis ranges from 3% to 13%. Nevertheless, many reports have shown that most patients who recover from acute pancreatitis regain good general health and return to their usual daily routine. Some authors have emphasized that endocrine function disorders are a common complication after severe acute pancreatitis has been treated by pancreatic resection.
The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study
AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after the treatment decision in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only. The impact of longer delays was explored in a sensitivity analysis. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male, and comorbid, to have a higher body mass index, and to have rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.
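The unadjusted comparison of complete-resection rates (93.7% vs. 91.9%, P = 0.032) can be approximately reconstructed from the group sizes given above. A sketch assuming SciPy; the cell counts are back-calculated from rounded percentages, so treat the output as illustrative rather than a reproduction of the study's analysis:

```python
from scipy.stats import chi2_contingency

delayed_n = 1744
non_delayed_n = 4304 - 1744                    # operated, no neoadjuvant, not delayed
delayed_r0 = round(0.937 * delayed_n)          # complete (R0) resections, delayed
non_delayed_r0 = round(0.919 * non_delayed_n)  # complete resections, not delayed

table = [[delayed_r0, delayed_n - delayed_r0],
         [non_delayed_r0, non_delayed_n - non_delayed_r0]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")  # lands near the reported P = 0.032
```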
