
    Identifying and Seeing beyond Multiple Sequence Alignment Errors Using Intra-Molecular Protein Covariation

    BACKGROUND: There is currently no way to verify the quality of a multiple sequence alignment that is independent of the assumptions used to build it. Sequence alignments are typically evaluated by a number of established criteria: sequence conservation, the number of aligned residues, the frequency of gaps, and the probable correct gap placement. Covariation analysis is used to find putatively important residue pairs in a sequence alignment. Different alignments of the same protein family give different results demonstrating that covariation depends on the quality of the sequence alignment. We thus hypothesized that current criteria are insufficient to build alignments for use with covariation analyses. METHODOLOGY/PRINCIPAL FINDINGS: We show that current criteria are insufficient to build alignments for use with covariation analyses as systematic sequence alignment errors are present even in hand-curated structure-based alignment datasets like those from the Conserved Domain Database. We show that current non-parametric covariation statistics are sensitive to sequence misalignments and that this sensitivity can be used to identify systematic alignment errors. We demonstrate that removing alignment errors due to 1) improper structure alignment, 2) the presence of paralogous sequences, and 3) partial or otherwise erroneous sequences, improves contact prediction by covariation analysis. Finally we describe two non-parametric covariation statistics that are less sensitive to sequence alignment errors than those described previously in the literature. CONCLUSIONS/SIGNIFICANCE: Protein alignments with errors lead to false positive and false negative conclusions (incorrect assignment of covariation and conservation, respectively). Covariation analysis can provide a verification step, independent of traditional criteria, to identify systematic misalignments in protein alignments. 
Two non-parametric statistics are shown to be comparatively insensitive to misalignment errors, providing increased confidence in contact prediction when analyzing alignments with erroneous regions, because they emphasize pairwise covariation over group covariation.
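The abstract does not reproduce the specific covariation statistics it introduces; as background, a common non-parametric covariation statistic for a pair of alignment columns is their mutual information. A minimal illustrative sketch (the function name and the column-as-string encoding are assumptions, not the paper's method):

```python
from collections import Counter
from math import log2

def column_mi(col_i: str, col_j: str) -> float:
    """Mutual information (bits) between two alignment columns.

    col_i, col_j: one residue character per aligned sequence,
    in the same sequence order for both columns.
    """
    n = len(col_i)
    assert n == len(col_j) and n > 0
    count_i = Counter(col_i)            # marginal counts, column i
    count_j = Counter(col_j)            # marginal counts, column j
    count_ij = Counter(zip(col_i, col_j))  # joint counts for residue pairs
    mi = 0.0
    for (a, b), c_ab in count_ij.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
        mi += (c_ab / n) * log2(c_ab * n / (count_i[a] * count_j[b]))
    return mi
```

Perfectly covarying columns (e.g. `"AABB"` vs `"CCDD"`) give 1 bit, while independent columns give 0; a misaligned block shuffles the joint counts and drives such scores toward spurious values, which is the sensitivity the paper exploits to flag alignment errors.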

SANITARY PROFILE OF FAMILY FARM UNITS PRODUCING RAW MILK AND COMPLIANCE WITH CURRENT LEGISLATION

Abstract: The northern region of Minas Gerais is one of the regions of Brazil in which milk is among the main sources of income generated by family farming. For the milk produced to be competitive in the market and command greater added value, family farms must adapt to meet the legal parameters in force. The objective was to characterize the general production management adopted on family farm units in the municipalities of Bocaiúva, Francisco Sá, and Montes Claros, in northern Minas Gerais, identifying the obstacles to producing milk within the parameters established by current legislation. The production system was evaluated by collecting data through questionnaires and checklists in the different environments involved in production. Microbiological quality was assessed by enumerating indicator microorganisms (mesophilic aerobes, psychrotrophs, Staphylococcus sp., and faecal coliforms) in raw milk, refrigerated raw milk, water, and the utensils used in production. The results of the microbiological analysis, considered together with the management practices adopted, revealed that the main cause of milk contamination was the inadequate application or absence of good hygiene practices in the production system, this being the main obstacle to meeting the legal requirements for the product

    Global mortality associated with 33 bacterial pathogens in 2019: a systematic analysis for the Global Burden of Disease Study 2019

Background: Reducing the burden of death due to infection is an urgent global public health priority. Previous studies have estimated the number of deaths associated with drug-resistant infections and sepsis and found that infections remain a leading cause of death globally. Understanding the global burden of common bacterial pathogens (both susceptible and resistant to antimicrobials) is essential to identify the greatest threats to public health. To our knowledge, this is the first study to present global comprehensive estimates of deaths associated with 33 bacterial pathogens across 11 major infectious syndromes. Methods: We estimated deaths associated with 33 bacterial genera or species across 11 infectious syndromes in 2019 using methods from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, in addition to a subset of the input data described in the Global Burden of Antimicrobial Resistance 2019 study. This study included 343 million individual records or isolates covering 11 361 study-location-years. We used three modelling steps to estimate the number of deaths associated with each pathogen: deaths in which infection had a role, the fraction of deaths due to infection that are attributable to a given infectious syndrome, and the fraction of deaths due to an infectious syndrome that are attributable to a given pathogen. Estimates were produced for all ages and for males and females across 204 countries and territories in 2019.
95% uncertainty intervals (UIs) were calculated for final estimates of deaths and infections associated with the 33 bacterial pathogens following standard GBD methods by taking the 2.5th and 97.5th percentiles across 1000 posterior draws for each quantity of interest. Findings: From an estimated 13.7 million (95% UI 10.9-17.1) infection-related deaths in 2019, there were 7.7 million deaths (5.7-10.2) associated with the 33 bacterial pathogens (both resistant and susceptible to antimicrobials) across the 11 infectious syndromes estimated in this study. We estimated deaths associated with the 33 bacterial pathogens to comprise 13.6% (10.2-18.1) of all global deaths and 56.2% (52.1-60.1) of all sepsis-related deaths in 2019. Five leading pathogens (Staphylococcus aureus, Escherichia coli, Streptococcus pneumoniae, Klebsiella pneumoniae, and Pseudomonas aeruginosa) were responsible for 54.9% (52.9-56.9) of deaths among the investigated bacteria. The deadliest infectious syndromes and pathogens varied by location and age. The age-standardised mortality rate associated with these bacterial pathogens was highest in the sub-Saharan Africa super-region, with 230 deaths (185-285) per 100 000 population, and lowest in the high-income super-region, with 52.2 deaths (37.4-71.5) per 100 000 population. S aureus was the leading bacterial cause of death in 135 countries and was also associated with the most deaths in individuals older than 15 years, globally. Among children younger than 5 years, S pneumoniae was the pathogen associated with the most deaths.
In 2019, more than 6 million deaths occurred as a result of three bacterial infectious syndromes, with lower respiratory infections and bloodstream infections each causing more than 2 million deaths and peritoneal and intra-abdominal infections causing more than 1 million deaths. Interpretation: The 33 bacterial pathogens that we investigated in this study are a substantial source of health loss globally, with considerable variation in their distribution across infectious syndromes and locations. Compared with GBD Level 3 underlying causes of death, deaths associated with these bacteria would rank as the second leading cause of death globally in 2019; hence, they should be considered an urgent priority for intervention within the global health community. Strategies to address the burden of bacterial infections include infection prevention, optimised use of antibiotics, improved capacity for microbiological analysis, vaccine development, and improved and more pervasive use of available vaccines. These estimates can be used to help set priorities for vaccine need, demand, and development. Copyright (c) 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license
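The abstract describes its 95% uncertainty intervals as the 2.5th and 97.5th percentiles across 1000 posterior draws. That percentile step can be sketched as follows (the function name is an assumption, and linear interpolation between closest ranks is one of several common percentile conventions, not necessarily the exact GBD implementation):

```python
def uncertainty_interval(draws, lo=2.5, hi=97.5):
    """Return (lower, upper) bounds of an uncertainty interval as
    percentiles of posterior draws, using linear interpolation
    between the closest ranks."""
    s = sorted(draws)
    n = len(s)

    def percentile(p):
        k = (n - 1) * p / 100.0   # fractional rank of the p-th percentile
        f = int(k)                # lower neighbouring rank
        c = min(f + 1, n - 1)     # upper neighbouring rank
        return s[f] + (s[c] - s[f]) * (k - f)

    return percentile(lo), percentile(hi)
```

Applied per quantity of interest (e.g. deaths for one pathogen in one location-age-sex stratum) across its 1000 draws, this yields the interval bounds reported in parentheses throughout the Findings.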

    Improving topological cluster reconstruction using calorimeter cell timing in ATLAS

Clusters of topologically connected calorimeter cells around cells with large absolute signal-to-noise ratio (topo-clusters) are the basis for calorimeter signal reconstruction in the ATLAS experiment. Topological cell clustering has proven performant in LHC Runs 1 and 2. It is, however, susceptible to out-of-time pile-up of signals from soft collisions outside the 25 ns proton-bunch-crossing window associated with the event’s hard collision. To reduce this effect, a calorimeter-cell timing criterion was added to the signal-to-noise ratio requirement in the clustering algorithm. Multiple versions of this criterion were tested by reconstructing hadronic signals in simulated events and Run 2 ATLAS data. The preferred version is found to reduce the out-of-time pile-up jet multiplicity by ∼50% for jet pT ∼ 20 GeV and by ∼80% for jet pT ∼ 50 GeV, while not disrupting the reconstruction of hadronic signals of interest, and improving the jet energy resolution by up to 5% for 20 < pT < 30 GeV. Pile-up is also suppressed for other physics objects based on topo-clusters (electrons, photons, τ-leptons), reducing the overall event size on disk by about 6% in early Run 3 pile-up conditions. Offline reconstruction for Run 3 includes the timing requirement

    Software Performance of the ATLAS Track Reconstruction for LHC Run 3

Charged particle reconstruction in the presence of many simultaneous proton–proton (pp) collisions in the LHC is a challenging task for the ATLAS experiment’s reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to reconstruct high-activity collisions with an average of 50 or more simultaneous pp interactions per bunch crossing (pile-up) promptly using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 pp collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, without significant reduction in reconstruction efficiency and while reducing the rate of combinatorial fake tracks by more than a factor of two

    Measurement and interpretation of same-sign W boson pair production in association with two jets in pp collisions at √s = 13 TeV with the ATLAS detector

This paper presents the measurement of fiducial and differential cross sections for both the inclusive and electroweak production of a same-sign W-boson pair in association with two jets (W±W±jj) using 139 fb−1 of proton-proton collision data recorded at a centre-of-mass energy of √s = 13 TeV by the ATLAS detector at the Large Hadron Collider. The analysis is performed by selecting two same-charge leptons, electron or muon, and at least two jets with large invariant mass and a large rapidity difference. The measured fiducial cross sections for electroweak and inclusive W±W±jj production are 2.92 ± 0.22 (stat.) ± 0.19 (syst.) fb and 3.38 ± 0.22 (stat.) ± 0.19 (syst.) fb, respectively, in agreement with Standard Model predictions. The measurements are used to constrain anomalous quartic gauge couplings by extracting 95% confidence level intervals on dimension-8 operators. A search for doubly charged Higgs bosons H±± that are produced in vector-boson fusion processes and decay into a same-sign W boson pair is performed. The largest deviation from the Standard Model occurs for an H±± mass near 450 GeV, with a global significance of 2.5 standard deviations

    Performance and calibration of quark/gluon-jet taggers using 140 fb⁻¹ of pp collisions at √s=13 TeV with the ATLAS detector

The identification of jets originating from quarks and gluons, often referred to as quark/gluon tagging, plays an important role in various analyses performed at the Large Hadron Collider, as Standard Model measurements and searches for new particles decaying to quarks often rely on suppressing a large gluon-induced background. This paper describes the measurement of the efficiencies of quark/gluon taggers developed within the ATLAS Collaboration, using √s=13 TeV proton–proton collision data with an integrated luminosity of 140 fb⁻¹ collected by the ATLAS experiment. Two taggers with high performance in rejecting gluon-initiated jets relative to quark-initiated jets are studied: one tagger is based on requirements on the number of inner-detector tracks associated with the jet, and the other combines several jet substructure observables using a boosted decision tree. A method is established to determine the quark/gluon fraction in data, by using quark/gluon-enriched subsamples defined by the jet pseudorapidity. Differences in tagging efficiency between data and simulation are provided for jets with transverse momentum between 500 GeV and 2 TeV and for multiple tagger working points

    Accuracy versus precision in boosted top tagging with the ATLAS detector

    The identification of top quark decays where the top quark has a large momentum transverse to the beam axis, known as top tagging, is a crucial component in many measurements of Standard Model processes and searches for beyond the Standard Model physics at the Large Hadron Collider. Machine learning techniques have improved the performance of top tagging algorithms, but the size of the systematic uncertainties for all proposed algorithms has not been systematically studied. This paper presents the performance of several machine learning based top tagging algorithms on a dataset constructed from simulated proton-proton collision events measured with the ATLAS detector at √s = 13 TeV. The systematic uncertainties associated with these algorithms are estimated through an approximate procedure that is not meant to be used in a physics analysis, but is appropriate for the level of precision required for this study. The most performant algorithms are found to have the largest uncertainties, motivating the development of methods to reduce these uncertainties without compromising performance. To enable such efforts in the wider scientific community, the datasets used in this paper are made publicly available

    Search for pair-produced higgsinos decaying via Higgs or Z bosons to final states containing a pair of photons and a pair of b-jets with the ATLAS detector
