    Bolbophorus damnificus n. sp. (Digenea: Bolbophoridae) from the Channel Catfish Ictalurus punctatus and American White Pelican Pelecanus erythrorhynchos in the USA Based on Life-Cycle and Molecular Data

    The common pathogenic prodiplostomulum metacercaria in the flesh, mostly near the skin, of pond-produced channel catfish Ictalurus punctatus has been demonstrated to be Bolbophorus damnificus Overstreet & Curran n. sp. The catfish acquires the infection from the snail Planorbella trivolvis, the only known first intermediate host, and the species is perpetuated through the American white pelican Pelecanus erythrorhynchos, as confirmed by experimental infections with nestling and dewormed adult pelican specimens in conjunction with molecular data. It differs from the cryptic species Bolbophorus sp., also found concurrently in the American white pelican, by having eggs 123–129 μm rather than 100–112 μm long and by consistently low values of nucleotide percentage sequence similarity when comparing COI, ITS 1/2, 18S rRNA and 28S rRNA fragments. Bolbophorus sp. is comparable but most likely distinct from B. confusus (Kraus, 1914), which occurs in Europe and has eggs 90–102 μm long; its intermediate hosts were not demonstrated. The adults of neither of the confirmed North American species of Bolbophorus were encountered in any bird other than a pelican, although several shore birds feed on infected catfish, and B. damnificus can survive but not mature when protected in the mouse abdominal cavity. B. ictaluri (Haderlie, 1953) Overstreet & Curran n. comb., a species different from B. damnificus, is considered a species inquirenda.

    Measurement of the fractional radiation length of a pixel module for the CMS Phase-2 upgrade via the multiple scattering of positrons

    Evaluation of planar silicon pixel sensors with the RD53A readout chip for the Phase-2 Upgrade of the CMS Inner Tracker

    Identification and reconstruction of low-energy electrons in the ProtoDUNE-SP detector

    Measurements of electrons from ν_e interactions are crucial for the Deep Underground Neutrino Experiment (DUNE) neutrino oscillation program, as well as for searches for physics beyond the standard model, supernova neutrino detection, and solar neutrino measurements. This article describes the selection and reconstruction of low-energy (Michel) electrons in the ProtoDUNE-SP detector. ProtoDUNE-SP is one of the prototypes for the DUNE far detector, built and operated at CERN as a charged-particle test-beam experiment. A sample of low-energy electrons produced by the decay of cosmic muons is selected with a purity of 95%. This sample is used to calibrate the low-energy electron energy scale with two techniques. The first, based on a cosmic-ray muon sample, uses calibration constants derived from measured and simulated cosmic-ray muon events. The second uses the theoretically well-understood Michel electron energy spectrum to convert reconstructed charge to electron energy. In addition, the effects of the detector response on the low-energy electron energy scale and its resolution, including readout-electronics threshold effects, are quantified. Finally, the relation between the theoretical and reconstructed low-energy electron energy spectra is derived and the energy resolution is characterized. The low-energy electron selection presented here accounts for about 75% of the total electron deposited energy. After the addition of lost energy using a Monte Carlo simulation, the energy resolution improves from about 40% to 25% at 50 MeV. These results are used to validate the expected capabilities of the DUNE far detector to reconstruct low-energy electrons.
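The "theoretically well-understood Michel electron energy spectrum" used for the second calibration has a simple closed form at leading order. A minimal sketch of that spectrum (the formula and endpoint are textbook values, not taken from the article):

```python
# Leading-order Michel spectrum for electrons from muon decay at rest:
# dN/dx ∝ x^2 (3 - 2x), with x = E / E_max and endpoint E_max ≈ 52.8 MeV.
E_MAX = 52.8  # MeV, Michel endpoint (approximately half the muon mass)

def michel_pdf(e_mev):
    """Normalized spectrum dN/dE in 1/MeV; zero outside [0, E_MAX]."""
    if not 0.0 <= e_mev <= E_MAX:
        return 0.0
    x = e_mev / E_MAX
    # Integral of x^2 (3 - 2x) over [0, 1] is 1/2, so normalization is 2 / E_MAX.
    return 2.0 / E_MAX * x * x * (3.0 - 2.0 * x)

# Numerical cross-check: unit normalization and mean energy 0.7 * E_MAX.
de = E_MAX / 100000
total = sum(michel_pdf(i * de) for i in range(100001)) * de          # ≈ 1.0
mean = sum(i * de * michel_pdf(i * de) for i in range(100001)) * de  # ≈ 36.96 MeV
```

Comparing the reconstructed-charge spectrum against this known shape is what fixes the absolute energy scale in that calibration technique.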

    Impact of cross-section uncertainties on supernova neutrino spectral parameter fitting in the Deep Underground Neutrino Experiment

    A primary goal of the upcoming Deep Underground Neutrino Experiment (DUNE) is to measure the O(10) MeV neutrinos produced by a Galactic core-collapse supernova if one should occur during the lifetime of the experiment. The liquid-argon-based detectors planned for DUNE are expected to be uniquely sensitive to the ν_e component of the supernova flux, enabling a wide variety of physics and astrophysics measurements. A key requirement for a correct interpretation of these measurements is a good understanding of the energy-dependent total cross section σ(E_ν) for charged-current ν_e absorption on argon. In the context of a simulated extraction of supernova ν_e spectral parameters from a toy analysis, we investigate the impact of σ(E_ν) modeling uncertainties on DUNE's supernova neutrino physics sensitivity for the first time. We find that the currently large theoretical uncertainties on σ(E_ν) must be substantially reduced before the ν_e flux parameters can be extracted reliably: in the absence of external constraints, a measurement of the integrated neutrino luminosity with less than 10% bias with DUNE requires σ(E_ν) to be known to about 5%. The neutrino spectral shape parameters can be known to better than 10% for a 20% uncertainty on the cross-section scale, although they will be sensitive to uncertainties on the shape of σ(E_ν). A direct measurement of low-energy ν_e-argon scattering would be invaluable for improving the theoretical precision to the needed level.
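The near one-to-one coupling between the cross-section normalization and the inferred luminosity can be seen in a toy version of such a fit. A minimal sketch using an assumed pinched-thermal spectrum and a purely illustrative cross-section shape (none of the parameter values below come from the article):

```python
import math

def pinched_spectrum(e, e_mean=9.5, alpha=2.3):
    """Pinched-thermal ("alpha fit") supernova nu_e spectrum, unnormalized:
    phi(E) ∝ (E/<E>)^alpha * exp(-(alpha + 1) E / <E>). Parameters assumed."""
    return (e / e_mean) ** alpha * math.exp(-(alpha + 1.0) * e / e_mean)

def toy_xsec(e, scale=1.0):
    """Toy nu_e-Ar charged-current cross section: quadratic above a 5 MeV
    threshold. Illustrative shape only, not a physics model."""
    return scale * max(e - 5.0, 0.0) ** 2

def expected_events(luminosity, xsec_scale):
    """Yield ∝ luminosity * ∫ phi(E) sigma(E) dE (Riemann sum over 0-100 MeV)."""
    de = 0.01
    return luminosity * sum(
        pinched_spectrum(i * de) * toy_xsec(i * de, xsec_scale) * de
        for i in range(1, 10001))

n_obs = expected_events(1.0, 1.0)                # "data" with the true cross section
fitted_lum = n_obs / expected_events(1.0, 1.10)  # fit assumes sigma 10% too high
# The luminosity estimate absorbs the full bias: fitted_lum ≈ 1 / 1.10 ≈ 0.91
```

With the spectral shape held fixed, an overall cross-section scale error propagates directly into the fitted luminosity, which is why the abstract ties a sub-10% luminosity bias to knowing σ(E_ν) at the 5% level.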

    Measurement of Energy Correlators inside Jets and Determination of the Strong Coupling α_S(m_Z)

    Energy correlators that describe energy-weighted distances between two or three particles in a hadronic jet are measured using an event sample of √s = 13 TeV proton-proton collisions collected by the CMS experiment and corresponding to an integrated luminosity of 36.3 fb⁻¹. The measured distributions are consistent with the trends in the simulation that reveal two key features of the strong interaction: confinement and asymptotic freedom. By comparing the ratio of the measured three- and two-particle energy correlator distributions with theoretical calculations that resum collinear emissions at approximate next-to-next-to-leading-logarithmic accuracy matched to a next-to-leading-order calculation, the strong coupling is determined at the Z boson mass: α_S(m_Z) = 0.1229 (+0.0040/−0.0050), the most precise α_S(m_Z) value obtained using jet substructure observables.
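The two-particle correlator measured here is, schematically, an energy-weighted histogram of pairwise angular distances inside a jet. A minimal sketch of that definition (the binning choices and the toy jet are invented for illustration):

```python
import math

def two_point_eec(particles, n_bins=20, r_max=0.4):
    """Projected two-point energy correlator of a jet: a histogram in the
    pairwise angular distance Delta R, where each ordered particle pair
    contributes weight E_i * E_j / E_jet^2. particles: (energy, eta, phi)."""
    e_jet = sum(p[0] for p in particles)
    hist = [0.0] * n_bins
    for e_i, eta_i, phi_i in particles:
        for e_j, eta_j, phi_j in particles:
            # Wrap the azimuthal difference into (-pi, pi].
            dphi = math.atan2(math.sin(phi_i - phi_j), math.cos(phi_i - phi_j))
            dr = math.hypot(eta_i - eta_j, dphi)
            b = min(int(dr / r_max * n_bins), n_bins - 1)
            hist[b] += e_i * e_j / e_jet ** 2
    return hist

# Toy three-particle "jet"; the weights sum to (sum E)^2 / E_jet^2 = 1 exactly.
jet = [(10.0, 0.00, 0.00), (5.0, 0.10, 0.05), (3.0, 0.20, -0.10)]
eec = two_point_eec(jet)
```

The small-ΔR and large-ΔR limits of such distributions are what expose asymptotic freedom and confinement, respectively, in the measurement described above.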

    Observation of four top quark production in proton-proton collisions at √s = 13 TeV

    Search for an exotic decay of the Higgs boson into a Z boson and a pseudoscalar particle in proton-proton collisions at √s = 13 TeV

    Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service

    Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables high coprocessor utilization and the portability to run workflows on different types of coprocessors.
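The core idea of the as-a-service pattern described above can be sketched generically. This is an illustrative sketch only, not the actual SONIC or inference-server client API: the CPU-side workflow submits inference requests asynchronously and overlaps them with local work, so coprocessor latency is hidden rather than added to the per-event processing time.

```python
# Hedged sketch of "inference as a service": all function names and the
# fake latency below are invented for illustration.
from concurrent.futures import ThreadPoolExecutor
import time

def remote_inference(batch):
    """Stand-in for a network call to a GPU inference server (hypothetical)."""
    time.sleep(0.01)                 # pretend network + coprocessor latency
    return [x * 2.0 for x in batch]  # pretend model output

def cpu_reconstruction(event):
    """Stand-in for the CPU-side part of the workflow."""
    return event + 1.0

def process_events(events, pool):
    results = []
    for ev in events:
        future = pool.submit(remote_inference, [ev])  # offload, don't block
        local = cpu_reconstruction(ev)                # overlap CPU work
        results.append((local, future.result()[0]))   # join when needed
    return results

with ThreadPoolExecutor(max_workers=4) as pool:
    out = process_events([1.0, 2.0, 3.0], pool)
```

Because the client side only sends requests and collects responses, the same workflow can be pointed at GPUs, other coprocessors, or a local CPU fallback, which is the portability the abstract emphasizes.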

    Performance of the CMS high-level trigger during LHC Run 2

    The CERN LHC provided proton and heavy ion collisions during its Run 2 operation period from 2015 to 2018. Proton-proton collisions reached a peak instantaneous luminosity of 2.1 × 10³⁴ cm⁻²s⁻¹, twice the initial design value, at √s = 13 TeV. The CMS experiment records a subset of the collisions for further processing as part of its online selection of data for physics analyses, using a two-level trigger system: the Level-1 trigger, implemented in custom-designed electronics, and the high-level trigger, a streamlined version of the offline reconstruction software running on a large computer farm. This paper presents the performance of the CMS high-level trigger system during LHC Run 2 for physics objects, such as leptons, jets, and missing transverse momentum, which meet the broad needs of the CMS physics program and the challenge of the evolving LHC and detector conditions. Sophisticated algorithms that were originally used in offline reconstruction were deployed online. Highlights include a machine-learning b tagging algorithm and a reconstruction algorithm for tau leptons that decay hadronically.
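The quoted peak luminosity translates into the pileup conditions the trigger had to cope with via simple arithmetic. A back-of-envelope sketch, where the inelastic cross section and the number of colliding bunch pairs are assumptions, not values from the abstract:

```python
# Mean pileup = (interaction rate) / (bunch-crossing rate).
PEAK_LUMI = 2.1e34   # cm^-2 s^-1, peak instantaneous luminosity from the abstract
SIGMA_INEL = 80e-27  # cm^2 (~80 mb inelastic pp cross section; assumed)
N_BUNCHES = 2500     # colliding bunch pairs (assumed, order of magnitude)
F_REV = 11245.0      # Hz, LHC revolution frequency

interaction_rate = PEAK_LUMI * SIGMA_INEL      # ~1.7e9 inelastic interactions/s
crossing_rate = N_BUNCHES * F_REV              # ~28 MHz of colliding crossings
mean_pileup = interaction_rate / crossing_rate # ~60 interactions per crossing
```

Tens of simultaneous interactions per crossing is the regime in which the streamlined offline-style reconstruction in the high-level trigger has to run within its timing budget.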