    qLUE: A Quantum Clustering Algorithm for Multi-Dimensional Datasets

    Clustering algorithms underpin many technological applications and are fueling the development of rapidly evolving fields such as machine learning. In the recent past, however, it has become apparent that they face challenges stemming from datasets that span many spatial dimensions. In fact, the best-performing clustering algorithms scale linearly in the number of points, but quadratically with respect to the local density of points. In this work, we introduce qLUE, a quantum clustering algorithm that scales linearly in both the number of points and their density. qLUE is inspired by CLUE, an algorithm developed to address the challenging time and memory budgets of Event Reconstruction (ER) in future High-Energy Physics experiments. As such, qLUE marries decades of development with the quadratic speedup provided by quantum computers. We numerically test qLUE in several scenarios, demonstrating its effectiveness and proving it to be a promising route to handle complex data analysis tasks, especially in high-dimensional datasets with high densities of points.
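
    As a rough illustration of where the density dependence comes from, the sketch below (plain C++, with illustrative names and parameters that are not taken from the paper) computes a CLUE-style local density by looping over all neighbours within a critical distance d_c; it is this per-point neighbourhood search whose cost grows with the local density and which qLUE aims to accelerate quadratically.

        // Schematic classical sketch, not the actual CLUE/qLUE implementation.
        #include <cmath>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        struct Point { float x, y, weight; };

        int main() {
            std::vector<Point> points = {{0.0f, 0.0f, 1.0f}, {0.1f, 0.0f, 1.0f}, {2.0f, 2.0f, 1.0f}};
            const float d_c = 0.5f;  // illustrative critical distance for the density estimate

            // Local density: weighted count of neighbours closer than d_c.
            std::vector<float> density(points.size(), 0.0f);
            for (std::size_t i = 0; i < points.size(); ++i) {
                for (std::size_t j = 0; j < points.size(); ++j) {
                    const float dx = points[i].x - points[j].x;
                    const float dy = points[i].y - points[j].y;
                    if (std::sqrt(dx * dx + dy * dy) < d_c) density[i] += points[j].weight;
                }
            }
            for (std::size_t i = 0; i < points.size(); ++i)
                std::printf("density[%zu] = %.1f\n", i, density[i]);
        }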

    Evaluating Performance Portability with the CMS Heterogeneous Pixel Reconstruction code

    In recent years, the landscape of tools for expressing parallel algorithms in a portable way across various compute accelerators has continued to evolve significantly. Many technologies provide portability between CPUs, GPUs from several vendors, and in some cases even FPGAs. These technologies include C++ libraries such as Alpaka and Kokkos, compiler directives such as OpenMP, the SYCL open specification that can be implemented as a library or in a compiler, and standard C++ where the compiler is solely responsible for the offloading. Given this developing landscape, users have to choose the technology that best fits their applications and constraints. For example, in the CMS experiment the experience so far with heterogeneous reconstruction algorithms suggests that the full application contains a large number of relatively short computational kernels and memory transfer operations. In this work we use a stand-alone version of the CMS heterogeneous pixel reconstruction code as a realistic use case of HEP reconstruction software that is capable of leveraging GPUs effectively. We summarize the experience of porting this code base from CUDA to Alpaka, Kokkos, SYCL, std::par, and OpenMP offloading. We compare the event processing throughput achieved by each version on NVIDIA and AMD GPUs as well as on a CPU, and compare those to what a native version of the code achieves on each platform.
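
    As a minimal illustration of the "standard C++" route mentioned above, the sketch below uses std::transform with a parallel execution policy; it is not CMS code, and the calibration kernel and constants are invented for the example, but the same single-source call can be compiled for CPU threads or, with a suitable compiler, offloaded to a GPU.

        // Minimal std::par-style sketch: one kernel source, the execution policy
        // (plus the compiler backend) decides where it runs.
        #include <algorithm>
        #include <cstdio>
        #include <execution>
        #include <vector>

        int main() {
            std::vector<float> energy(1 << 20, 1.0f);
            std::vector<float> calibrated(energy.size());

            // Hypothetical calibration constant; the lambda stands in for one of the
            // many short computational kernels mentioned in the abstract.
            const float gain = 0.98f;

            std::transform(std::execution::par_unseq,
                           energy.begin(), energy.end(),
                           calibrated.begin(),
                           [gain](float e) { return gain * e; });

            std::printf("calibrated[0] = %f\n", calibrated[0]);
        }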

    Determination of the strong coupling and its running from measurements of inclusive jet production

    The value of the strong coupling α_S is determined in a comprehensive analysis at next-to-next-to-leading order accuracy in quantum chromodynamics. The analysis uses double-differential cross section measurements from the CMS Collaboration at the CERN LHC of inclusive jet production in proton-proton collisions at centre-of-mass energies of 2.76, 7, 8, and 13 TeV, combined with inclusive deep-inelastic data from HERA. The value α_S(m_Z) = 0.1176^{+0.0014}_{-0.0016} is obtained at the scale of the Z boson mass. By using the measurements in different intervals of jet transverse momentum, the running of α_S is probed for energies between 100 and 1600 GeV.
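
    For context, the "running" probed in the analysis refers to the renormalization-group evolution of α_S; a minimal one-loop form from standard QCD (shown only as a reminder, the paper itself works at NNLO accuracy) reads:

        \mu^2 \frac{\mathrm{d}\alpha_S(\mu)}{\mathrm{d}\mu^2}
          = -\beta_0\,\alpha_S^2(\mu) + \mathcal{O}(\alpha_S^3),
        \qquad
        \beta_0 = \frac{33 - 2 n_f}{12\pi},

        \alpha_S(\mu) = \frac{\alpha_S(m_Z)}{1 + \beta_0\,\alpha_S(m_Z)\,\ln(\mu^2/m_Z^2)}.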

    Observation of γγ → ττ in proton-proton collisions and limits on the anomalous electromagnetic moments of the τ lepton

    The production of a pair of τ leptons via photon–photon fusion, γγ → ττ, is observed for the first time in proton–proton collisions, with a significance of 5.3 standard deviations. This observation is based on a data set recorded with the CMS detector at the LHC at a center-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 138 fb^{−1}. Events with a pair of τ leptons produced via photon–photon fusion are selected by requiring them to be back-to-back in the azimuthal direction and to have a minimum number of charged hadrons associated with their production vertex. The τ leptons are reconstructed in their leptonic and hadronic decay modes. The measured fiducial cross section of γγ → ττ is σ_obs^fid = 12.4^{+3.8}_{−3.1} fb. Constraints are set on the contributions to the anomalous magnetic moment (a_τ) and electric dipole moment (d_τ) of the τ lepton originating from potential effects of new physics on the γττ vertex: a_τ = 0.0009^{+0.0032}_{−0.0031} and |d_τ| < 2.9 × 10^{−17} e·cm (95% confidence level), consistent with the standard model.
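
    For context, a_τ and d_τ are conventionally defined through the form factors of the γττ vertex; in the usual parametrization (quoted from standard conventions, not from the paper):

        \bar{u}(p')\,\Gamma^\mu\,u(p), \qquad
        \Gamma^\mu = \gamma^\mu F_1(q^2)
          + \frac{i\,\sigma^{\mu\nu} q_\nu}{2 m_\tau}\,F_2(q^2)
          + \frac{\sigma^{\mu\nu} q_\nu \gamma_5}{2 m_\tau}\,F_3(q^2),

        a_\tau = F_2(0), \qquad d_\tau = \frac{e}{2 m_\tau}\,F_3(0).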

    Use of time information in the High Granularity Calorimeter at the CMS experiment

    The High-Luminosity phase of the Large Hadron Collider (HL-LHC), starting in 2029, poses unprecedented challenges in terms of data acquisition and event reconstruction. Significant upgrades are planned for both detectors and software to tackle these challenges. Among the strategies adopted by the Compact Muon Solenoid (CMS) experiment is the incorporation of time-related information from sub-detectors, facilitated by advancements in technology and faster electronics. The forthcoming High Granularity Calorimeter (HGCAL) is set to replace the current electromagnetic and hadronic calorimeters in the endcaps. Apart from its exceptional spatial resolution, HGCAL will introduce precise time measurements for high-energy deposits, allowing for a comprehensive 5D reconstruction (x, y, z, t, E) of particle showers. The front-end electronics will measure the time of arrival of pulses above a charge threshold, achieving a resolution as fine as 25 ps for large individual energy deposits. This research highlights the integration of timing information from the High Granularity Calorimeter into event reconstruction and its use in combination with the information coming from the dedicated timing layer, the MIP Timing Detector, heading towards an enhanced global event interpretation in the high-pileup environment of the HL-LHC.
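
    As a purely illustrative sketch (not CMSSW code, with invented names and thresholds), the snippet below shows one simple way the per-cell times of high-energy deposits could be combined into a single cluster time, using only cells above an energy threshold and weighting each by its assumed time resolution.

        // Illustrative only: inverse-variance-weighted cluster time from timed hits.
        #include <cstdio>
        #include <vector>

        struct Hit { float energy; float time; float sigma_t; };  // GeV, ns, ns

        float clusterTime(const std::vector<Hit>& hits, float minEnergy) {
            double num = 0.0, den = 0.0;
            for (const auto& h : hits) {
                if (h.energy < minEnergy) continue;        // no usable time for small deposits
                const double w = 1.0 / (h.sigma_t * h.sigma_t);
                num += w * h.time;
                den += w;
            }
            return den > 0.0 ? static_cast<float>(num / den) : -99.0f;  // sentinel if no timed hits
        }

        int main() {
            std::vector<Hit> hits = {{5.0f, 0.102f, 0.025f}, {0.1f, 0.300f, 0.500f}, {3.0f, 0.098f, 0.030f}};
            std::printf("cluster time = %.3f ns\n", clusterTime(hits, 0.5f));
        }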

    Performance portability for the CMS Reconstruction with Alpaka

    For CMS, heterogeneous computing is a powerful tool to face the computational challenges posed by the upgrades of the LHC, and it will be used in production at the High Level Trigger during Run 3. In principle, offloading the computational work to non-CPU resources while retaining their performance requires different implementations of the same code. This would introduce code duplication, which is not sustainable in terms of maintainability and testability of the software. Performance portability libraries allow code to be written once and run on different architectures with close-to-native performance. The CMS experiment is evaluating performance portability libraries for the near-term future.
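
    The single-source idea can be sketched schematically as follows; this is not the actual Alpaka or Kokkos API, just an illustration of writing the kernel body once as a functor and letting a backend-specific launcher decide where it runs.

        // Schematic illustration of the "write once" pattern behind portability libraries.
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // The kernel is written once; it only describes the per-element work.
        struct ScaleKernel {
            template <typename Index>
            void operator()(Index i, const float* in, float* out, float factor) const {
                out[i] = factor * in[i];
            }
        };

        // A stand-in "CPU serial backend"; a portability library would provide
        // equivalent launchers for CUDA, HIP, SYCL, ... without touching the kernel.
        template <typename Kernel, typename... Args>
        void launchOnCpu(std::size_t n, Kernel kernel, Args... args) {
            for (std::size_t i = 0; i < n; ++i) kernel(i, args...);
        }

        int main() {
            std::vector<float> in(8, 2.0f), out(8, 0.0f);
            launchOnCpu(in.size(), ScaleKernel{}, in.data(), out.data(), 1.5f);
            std::printf("out[0] = %.1f\n", out[0]);
        }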