
    Middleborns disadvantaged? Testing birth-order effects on fitness in pre-industrial Finns

    Parental investment is a limited resource for which offspring compete in order to increase their own survival and reproductive success. However, parents might be selected to influence the outcome of sibling competition through differential investment. While evidence for this is widespread in egg-laying species, whether or not this may also be the case in viviparous species is more difficult to determine. We use pre-industrial Finns as our model system and an equal investment model as our null hypothesis, which predicts that (all else being equal) middleborns should be disadvantaged through competition. We found no overall evidence to suggest that middleborns in a family are disadvantaged in terms of their survival, age at first reproduction or lifetime reproductive success. However, when considering birth order only among same-sexed siblings, first-, middle- and lastborn sons significantly differed in the number of offspring they were able to rear to adulthood, although there was no similar effect among females. Middleborn sons appeared to produce significantly fewer offspring than first- or lastborn sons, but they did not significantly differ from lastborn sons in the number of offspring reared to adulthood. Our results thus show that taking sex differences into account is important when modelling birth-order effects. We found clear evidence of firstborn sons being advantaged over other sons in the family, and over firstborn daughters. Therefore, our results suggest that parents invest differentially in their offspring in order either to preferentially favour particular offspring or to reduce offspring inequalities arising from sibling competition.
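
    A minimal sketch of the kind of same-sex comparison described above, assuming a hypothetical data set with one row per individual and columns sex, birth_order and offspring_reared (none of these names come from the paper); the authors' actual models are not reproduced here.

```python
# Sketch only: Poisson GLM of offspring reared to adulthood on birth-order
# category, fitted separately for each sex (hypothetical column names).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("finns_lrs.csv")  # hypothetical file: one row per individual

for sex, grp in df.groupby("sex"):
    fit = smf.glm(
        "offspring_reared ~ C(birth_order, Treatment(reference='first'))",
        data=grp,
        family=sm.families.Poisson(),
    ).fit()
    print(sex)
    print(fit.summary())
```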

    Determinants of serum concentrations of organochlorine compounds in Swedish pregnant women: a cross-sectional study

    BACKGROUND: We performed a cross-sectional study of associations between personal characteristics and lipid-adjusted serum concentrations of certain PCB congeners and chlorinated pesticides/metabolites among 323 pregnant primiparous women from Uppsala County (age 18–41 years) sampled in 1996–1999. METHODS: Extensive personal interviews and questionnaires about personal characteristics were administered both during and after pregnancy. Concentrations of organochlorine compounds in serum lipids in late pregnancy were analysed by gas chromatography. Associations between personal characteristics and serum levels of organochlorine compounds were analysed by multiple linear regression. RESULTS: The participation rate was 82% (325 of 395 women). Serum concentrations of PCB congeners IUPAC no. 28, 52, 101, 105 and 167, and o,p'-DDT and -DDE, p,p'-DDT and -DDD, oxychlordane, and γ- and α-HCH were in many cases below the limit of quantification (LOQ). No statistical analysis of associations with personal characteristics could be performed for these substances. Concentrations of PCB congeners IUPAC no. 118, 138, 153, 156 and 180, HCB, β-HCH, trans-nonachlor and p,p'-DDE increased with age and were highest in women sampled early during the 4-year study period. This shows that older women and women sampled early in the study had experienced the highest lifetime exposure levels, probably mainly during childhood and adolescence. The importance of early exposures was supported by lower PCB concentrations and higher β-HCH and p,p'-DDE concentrations among women born in non-Nordic countries. Moreover, serum concentrations of certain PCBs and pesticides/metabolites were positively associated with consumption of fatty fish during adolescence, and concentrations of CB 156, CB 180 and p,p'-DDE increased significantly with the number of months women had been breast-fed during infancy. Short-term changes in bodily constitution may, however, also influence serum concentrations, as suggested by negative associations between concentrations of organochlorine compounds and BMI before pregnancy and weight change during pregnancy. CONCLUSION: Although some of the associations could be caused by unknown personal characteristics confounding the results, our findings suggest that exposures to organochlorine compounds during childhood and adolescence influence the body burdens of these compounds during pregnancy.
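
    The regression step could look roughly like the sketch below, assuming hypothetical column names (cb153_ng_g_lipid, bmi_before_pregnancy, etc.) and a log transform that the abstract does not specify.

```python
# Sketch only: multiple linear regression of a log-transformed, lipid-adjusted
# organochlorine concentration on personal characteristics (hypothetical names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("uppsala_serum.csv")  # hypothetical file

# Serum organochlorine concentrations are typically right-skewed, so a log
# transform is used here; the abstract does not state which transform was used.
df["log_cb153"] = np.log(df["cb153_ng_g_lipid"])

model = smf.ols(
    "log_cb153 ~ age + sampling_year + bmi_before_pregnancy"
    " + weight_change_pregnancy + fatty_fish_adolescence + months_breastfed",
    data=df,
).fit()
print(model.summary())
```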

    Update of complications and functional outcome of the ileo-pouch anal anastomosis: overview of evidence and meta-analysis of 96 observational studies

    OBJECTIVE: The objective of this study is to provide a comprehensive update of the outcome of the ileo-pouch anal anastomosis (IPAA). DATA SOURCES: An extensive search in PubMed, EMBASE, and The Cochrane Library was conducted. STUDY SELECTION AND DATA EXTRACTION: All studies published after 2000 reporting on complications or functional outcome after a primary open IPAA procedure for ulcerative colitis (UC) or familial adenomatous polyposis (FAP) were selected. Study characteristics, functional outcome, and complications were extracted. DATA SYNTHESIS: A review with similar methodology conducted 10 years earlier was used to evaluate developments in outcome over time. Pooled estimates were compared using a random-effects logistic meta-analysis technique. Analyses focusing on the effect of time of study conduct, centralization, and variation in surgical techniques were performed. RESULTS: Fifty-three studies including 14,966 patients were included. Pooled rates of pouch failure and pelvic sepsis were 4.3% (95% CI, 3.5-6.3) and 7.5% (95% CI, 6.1-9.1), respectively. Compared with studies published before 2000, a reduction of 2.5% was observed in the pouch failure rate (p = 0.0038). Analysis of the effect of time of study conduct confirmed a decline in pouch failure. Functional outcome remained stable over time, with a 24-h defecation frequency of 5.9 (95% CI, 5.0-6.9). Technical surgical aspects did not have an important effect on outcome. CONCLUSION: This review provides up-to-date outcome estimates of the IPAA procedure that can be useful as reference values for practice and research. It also shows a reduction in pouch failure over time.
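
    As an illustration of pooling complication proportions across studies, the sketch below uses a DerSimonian-Laird random-effects model on the logit scale with made-up study counts; it approximates, but is not identical to, the random-effects logistic technique named in the abstract.

```python
# Sketch only: DerSimonian-Laird random-effects pooling of a complication
# proportion on the logit scale; study counts below are made up.
import numpy as np

events = np.array([12, 30, 8, 21])       # e.g. pouch failures per study
totals = np.array([310, 640, 150, 480])  # patients per study

p = events / totals
y = np.log(p / (1 - p))                  # logit-transformed proportions
v = 1 / events + 1 / (totals - events)   # approximate variance of each logit

w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                    # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

print("pooled rate:", inv_logit(pooled))
print("95% CI:", inv_logit(pooled - 1.96 * se), inv_logit(pooled + 1.96 * se))
```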

    Seasonal variations in pore water and sediment geochemistry of littoral lake sediments (Asylum Lake, MI, USA)

    BACKGROUND: Seasonal changes in pore water and sediment redox geochemistry have been observed in many near-surface sediments. Such changes have the potential to strongly influence trace metal distribution and thus create seasonal fluctuations in metal mobility and bioavailability. RESULTS: Seasonal trends in pore water and sediment geochemistry are assessed in the upper 50 cm of littoral kettle lake sediments. Pore waters are always redox stratified, with the least compressed redox stratification observed during fall and the most compressed redox stratification observed during summer. A 2-step sequential sediment extraction yields much more Fe in the first step, targeted at amorphous Fe(III) (hydr)oxides (AEF), than in the second step, which targets Fe(II) monosulfides. Fe extracted in the second step is relatively invariant with depth or season. In contrast, AEF decreases with sediment depth, and is seasonally variable, in agreement with changes in redox stratification inferred from pore water profiles. A 5-step Tessier extraction scheme was used to assess metal association with operationally-defined exchangeable, carbonate, iron and manganese oxide (FMO), organic/sulfide and microwave-digestible residual fractions in cores collected during winter and spring. Distribution of metals in these two seasons is similar. Co, As, Cd, and U concentrations approach detection limits. Fe, Cu and Pb are mostly associated with the organics/sulfides fraction. Cr and Zn are mostly associated with FMO. Mn is primarily associated with carbonates, and Co is nearly equally distributed between the FMO and organics/sulfide fractions. CONCLUSION: This study clearly demonstrates that near-surface lake sediment pore water redox stratification and associated solid phase geochemistry vary significantly with season. This has important ramifications for seasonal changes in the bioavailability and mobility of trace elements. Without rate measurements, it is not possible to quantify the contribution of various processes to natural organic matter degradation. However, the pore water and solid phase data suggest that iron reduction and sulfate reduction are the dominant pathways in the upper 50 cm of these sediments.
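
    Statements such as "Mn is primarily associated with carbonates" amount to expressing each metal's sequential-extraction results as percentages of the total recovered across fractions; the sketch below illustrates that arithmetic with made-up values.

```python
# Sketch only: per cent of each metal recovered in each operationally defined
# Tessier fraction; the concentrations below are illustrative, not measured data.
import pandas as pd

fractions = ["exchangeable", "carbonate", "FMO", "organic/sulfide", "residual"]
extracted = pd.DataFrame(
    {"Fe": [2.0, 15.0, 120.0, 310.0, 80.0],   # mg/kg recovered per step
     "Mn": [5.0, 60.0, 25.0, 10.0, 8.0],
     "Zn": [1.0, 4.0, 30.0, 12.0, 9.0]},
    index=fractions,
)
percent = 100 * extracted / extracted.sum()
print(percent.round(1))  # e.g. Mn dominated by the carbonate fraction
```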

    Global burden of 87 risk factors in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background: Rigorous analysis of levels and trends in exposure to leading risk factors and quantification of their effect on human health are important to identify where public health is making progress and in which cases current efforts are inadequate. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 provides a standardised and comprehensive assessment of the magnitude of risk factor exposure, relative risk, and attributable burden of disease. Methods: GBD 2019 estimated attributable mortality, years of life lost (YLLs), years of life lived with disability (YLDs), and disability-adjusted life-years (DALYs) for 87 risk factors and combinations of risk factors, at the global level, regionally, and for 204 countries and territories. GBD uses a hierarchical list of risk factors so that specific risk factors (eg, sodium intake), and related aggregates (eg, diet quality), are both evaluated. This method has six analytical steps. (1) We included 560 risk–outcome pairs that met criteria for convincing or probable evidence on the basis of research studies. 12 risk–outcome pairs included in GBD 2017 no longer met inclusion criteria and 47 risk–outcome pairs for risks already included in GBD 2017 were added based on new evidence. (2) Relative risks were estimated as a function of exposure based on published systematic reviews, 81 systematic reviews done for GBD 2019, and meta-regression. (3) Levels of exposure in each age-sex-location-year included in the study were estimated based on all available data sources using spatiotemporal Gaussian process regression, DisMod-MR 2.1, a Bayesian meta-regression method, or alternative methods. (4) We determined, from published trials or cohort studies, the level of exposure associated with minimum risk, called the theoretical minimum risk exposure level. (5) Attributable deaths, YLLs, YLDs, and DALYs were computed by multiplying population attributable fractions (PAFs) by the relevant outcome quantity for each age-sex-location-year. (6) PAFs and attributable burden for combinations of risk factors were estimated taking into account mediation of different risk factors through other risk factors. Across all six analytical steps, 30 652 distinct data sources were used in the analysis. Uncertainty in each step of the analysis was propagated into the final estimates of attributable burden. Exposure levels for dichotomous, polytomous, and continuous risk factors were summarised with use of the summary exposure value to facilitate comparisons over time, across location, and across risks. Because the entire time series from 1990 to 2019 has been re-estimated with use of consistent data and methods, these results supersede previously published GBD estimates of attributable burden. Findings: The largest declines in risk exposure from 2010 to 2019 were among a set of risks that are strongly linked to social and economic development, including household air pollution; unsafe water, sanitation, and handwashing; and child growth failure. Global declines also occurred for tobacco smoking and lead exposure. The largest increases in risk exposure were for ambient particulate matter pollution, drug use, high fasting plasma glucose, and high body-mass index. 
In 2019, the leading Level 2 risk factor globally for attributable deaths was high systolic blood pressure, which accounted for 10·8 million (95% uncertainty interval [UI] 9·51–12·1) deaths (19·2% [16·9–21·3] of all deaths in 2019), followed by tobacco (smoked, second-hand, and chewing), which accounted for 8·71 million (8·12–9·31) deaths (15·4% [14·6–16·2] of all deaths in 2019). The leading Level 2 risk factor for attributable DALYs globally in 2019 was child and maternal malnutrition, which largely affects health in the youngest age groups and accounted for 295 million (253–350) DALYs (11·6% [10·3–13·1] of all global DALYs that year). The risk factor burden varied considerably in 2019 between age groups and locations. Among children aged 0–9 years, the three leading detailed risk factors for attributable DALYs were all related to malnutrition. Iron deficiency was the leading risk factor for those aged 10–24 years, alcohol use for those aged 25–49 years, and high systolic blood pressure for those aged 50–74 years and 75 years and older. Interpretation: Overall, the record for reducing exposure to harmful risks over the past three decades is poor. Success with reducing smoking and lead exposure through regulatory policy might point the way for a stronger role for public policy on other risks in addition to continued efforts to provide information on risk factor harm to the general public
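
    Step (5) of the methods above, attributable burden computed as the population attributable fraction (PAF) multiplied by the outcome quantity, can be illustrated with a categorical-exposure sketch using made-up prevalences and relative risks; the GBD pipeline itself also handles continuous exposures, mediation and uncertainty propagation.

```python
# Sketch only: categorical-exposure PAF with the theoretical minimum risk
# exposure level as the reference category, then
# attributable deaths = PAF x deaths from the linked outcome. Numbers are made up.
import numpy as np

prevalence = np.array([0.60, 0.25, 0.15])  # exposure-category prevalences (sum to 1)
rel_risk = np.array([1.00, 1.40, 2.10])    # relative risks vs the reference category

weighted_rr = np.sum(prevalence * rel_risk)
paf = (weighted_rr - 1.0) / weighted_rr

outcome_deaths = 50_000                    # deaths from the linked outcome
print(f"PAF = {paf:.3f}, attributable deaths = {paf * outcome_deaths:.0f}")
```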

    Measuring universal health coverage based on an index of effective coverage of health services in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background Achieving universal health coverage (UHC) involves all people receiving the health services they need, of high quality, without experiencing financial hardship. Making progress towards UHC is a policy priority for both countries and global institutions, as highlighted by the agenda of the UN Sustainable Development Goals (SDGs) and WHO's Thirteenth General Programme of Work (GPW13). Measuring effective coverage at the health-system level is important for understanding whether health services are aligned with countries' health profiles and are of sufficient quality to produce health gains for populations of all ages. Methods Based on the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, we assessed UHC effective coverage for 204 countries and territories from 1990 to 2019. Drawing from a measurement framework developed through WHO's GPW13 consultation, we mapped 23 effective coverage indicators to a matrix representing health service types (eg, promotion, prevention, and treatment) and five population-age groups spanning from reproductive and newborn to older adults (≥65 years). Effective coverage indicators were based on intervention coverage or outcome-based measures such as mortality-to-incidence ratios to approximate access to quality care; outcome-based measures were transformed to values on a scale of 0–100 based on the 2·5th and 97·5th percentile of location-year values. We constructed the UHC effective coverage index by weighting each effective coverage indicator relative to its associated potential health gains, as measured by disability-adjusted life-years for each location-year and population-age group. For three tests of validity (content, known-groups, and convergent), UHC effective coverage index performance was generally better than that of other UHC service coverage indices from WHO (ie, the current metric for SDG indicator 3.8.1 on UHC service coverage), the World Bank, and GBD 2017. We quantified frontiers of UHC effective coverage performance on the basis of pooled health spending per capita, representing UHC effective coverage index levels achieved in 2019 relative to country-level government health spending, prepaid private expenditures, and development assistance for health. To assess current trajectories towards the GPW13 UHC billion target—1 billion more people benefiting from UHC by 2023—we estimated additional population equivalents with UHC effective coverage from 2018 to 2023. Findings Globally, performance on the UHC effective coverage index improved from 45·8 (95% uncertainty interval 44·2–47·5) in 1990 to 60·3 (58·7–61·9) in 2019, yet country-level UHC effective coverage in 2019 still spanned from 95 or higher in Japan and Iceland to lower than 25 in Somalia and the Central African Republic. Since 2010, sub-Saharan Africa showed accelerated gains on the UHC effective coverage index (at an average increase of 2·6% [1·9–3·3] per year up to 2019); by contrast, most other GBD super-regions had slowed rates of progress in 2010–2019 relative to 1990–2010. Many countries showed lagging performance on effective coverage indicators for non-communicable diseases relative to those for communicable diseases and maternal and child health, despite non-communicable diseases accounting for a greater proportion of potential health gains in 2019, suggesting that many health systems are not keeping pace with the rising non-communicable disease burden and associated population health needs. 
    In 2019, the UHC effective coverage index was associated with pooled health spending per capita (r=0·79), although countries across the development spectrum had much lower UHC effective coverage than is potentially achievable relative to their health spending. Under maximum efficiency of translating health spending into UHC effective coverage performance, countries would need to reach $1398 pooled health spending per capita (US$ adjusted for purchasing power parity) in order to achieve 80 on the UHC effective coverage index. From 2018 to 2023, an estimated 388·9 million (358·6–421·3) more population equivalents would have UHC effective coverage, falling well short of the GPW13 target of 1 billion more people benefiting from UHC during this time. Current projections point to an estimated 3·1 billion (3·0–3·2) population equivalents still lacking UHC effective coverage in 2023, with nearly a third (968·1 million [903·5–1040·3]) residing in south Asia. Interpretation The present study demonstrates the utility of measuring effective coverage and its role in supporting improved health outcomes for all people—the ultimate goal of UHC and its achievement. Global ambitions to accelerate progress on UHC service coverage are increasingly unlikely unless concerted action on non-communicable diseases occurs and countries can better translate health spending into improved performance. Focusing on effective coverage and accounting for the world's evolving health needs lays the groundwork for better understanding how close—or how far—all populations are in benefiting from UHC.
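
    The two core calculations described in the methods above, rescaling outcome-based measures to 0-100 using the 2.5th and 97.5th percentiles of location-year values and weighting indicators by DALY-based potential health gains, can be sketched as follows with made-up numbers.

```python
# Sketch only: (1) rescale an outcome-based measure (here a mortality-to-incidence
# ratio, where lower is better) to 0-100 using the 2.5th/97.5th percentiles of
# location-year values; (2) combine indicator scores with DALY-based weights.
import numpy as np

def to_score(values, higher_is_better=False):
    lo, hi = np.percentile(values, [2.5, 97.5])
    scaled = 100 * (values - lo) / (hi - lo)
    if not higher_is_better:
        scaled = 100 - scaled
    return np.clip(scaled, 0, 100)

mir = np.array([0.12, 0.30, 0.55, 0.80])          # made-up location-year MIRs
print(to_score(mir))                              # 0-100, higher = better coverage

indicator_scores = np.array([[80.0, 62.0, 45.0],  # rows = locations
                             [95.0, 88.0, 70.0]]) # cols = indicators (0-100)
dalys = np.array([1200.0, 800.0, 2000.0])         # potential health gains (made up)
weights = dalys / dalys.sum()
print(indicator_scores @ weights)                 # weighted coverage index
```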

    Global burden of 369 diseases and injuries in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019


    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background: The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods: In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results: Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89·6 per cent) compared with that in countries with a middle (753 of 1242, 60·6 per cent; odds ratio (OR) 0·17, 95 per cent c.i. 0·14 to 0·21, P < 0·001) or low (363 of 860, 42·2 per cent; OR 0·08, 0·07 to 0·10, P < 0·001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference −9·4 (95 per cent c.i. −11·9 to −6·9) per cent; P < 0·001), but the relationship was reversed in low-HDI countries (+12·1 (+7·0 to +17·3) per cent; P < 0·001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0·60, 0·50 to 0·73; P < 0·001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion: Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
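
    A rough sketch of the kind of adjusted analysis described above: multivariable logistic regression of 30-day mortality on checklist use, with a simple bootstrap confidence interval for the odds ratio. Column names, the adjustment set and the resampling scheme are hypothetical, not taken from the study.

```python
# Sketch only: adjusted association between checklist use and 30-day mortality,
# with a bootstrap CI for the odds ratio. Column names and covariates are
# hypothetical placeholders, not the study's actual adjustment set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("laparotomy.csv")  # hypothetical file; checklist_used coded 0/1
formula = "mortality_30d ~ checklist_used + age + asa_grade + C(hdi_group)"

def odds_ratio(data):
    fit = smf.logit(formula, data=data).fit(disp=0)
    return np.exp(fit.params["checklist_used"])

boot = [odds_ratio(df.sample(len(df), replace=True, random_state=i))
        for i in range(500)]
print("adjusted OR:", odds_ratio(df))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```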

    Global, regional, and national burden of suicide, 1990–2021: a systematic analysis for the Global Burden of Disease Study 2021
