78 research outputs found

    Comparison of Queueing Data-Structures for Kinetic Monte Carlo Simulations of Heterogeneous Catalysts

    On-lattice Kinetic Monte Carlo (KMC) is a computational method used to simulate (among others) physico-chemical processes on catalytic surfaces. The KMC algorithm propagates the system through discrete configurations by selecting (with the use of random numbers) the next elementary process to be simulated, e.g. adsorption, desorption, diffusion or reaction. One implementation of such a selection procedure is the first-reaction method, in which all realizable elementary processes are identified and assigned a random occurrence time based on their rate constants. The next event to be executed is then the one with the minimum inter-arrival time. Thus, a fast and efficient algorithm for selecting the most imminent process, and for performing all the necessary updates on the list of realizable processes after execution, is of great importance. In the current work, we implement five data structures to handle the elementary-process queue during a KMC run: an unsorted list, a binary heap, a pairing heap, a 1-way skip list, and finally, a novel 2-way skip list with a mapping array specialized for KMC simulations. We also investigate the effect of compiler optimizations on the performance of these data structures on three benchmark models, capturing CO oxidation, a simplified water-gas shift mechanism, and a temperature-programmed desorption run. Excluding the unsorted list, which is the least efficient and impractical for large problems, we observe a 3× speedup of the binary and pairing heaps (the most efficient) over the 1-way skip list (the least efficient of the remaining structures). Compiler optimizations deliver a speedup of up to 1.8×. These benchmarks provide valuable insight into the importance of often-overlooked, implementation-related aspects of KMC simulations, such as the queueing data structures. Our results could be particularly useful in guiding the choice of data structures and algorithms that would minimize the computational cost of large-scale simulations.
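    As a minimal sketch of the first-reaction method described above, the toy snippet below draws an exponential inter-arrival time for each realizable process and pops the most imminent one from a binary heap (Python's `heapq`). The process names and rate constants are illustrative placeholders, not the paper's benchmark models.

```python
import heapq
import random

def first_reaction_step(processes, t):
    """One step of the first-reaction method: draw an occurrence time for
    each realizable process and return (time, name) of the most imminent one.
    `processes` maps process names to rate constants."""
    heap = []
    for name, rate in processes.items():
        # Exponentially distributed inter-arrival time with parameter `rate`.
        dt = random.expovariate(rate)
        heapq.heappush(heap, (t + dt, name))
    return heapq.heappop(heap)  # minimum occurrence time wins

random.seed(0)
t_next, event = first_reaction_step(
    {"adsorption": 1.0, "desorption": 0.5, "diffusion": 2.0}, t=0.0)
print(event, t_next)
```

    In a real KMC run, executing the event changes which processes are realizable, so the paper's comparison hinges on how cheaply each data structure supports the post-execution insertions, deletions, and updates, not just the minimum extraction shown here.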

    Exact distributed kinetic Monte Carlo simulations for on-lattice chemical kinetics: lessons learnt from medium- and large-scale benchmarks

    Kinetic Monte Carlo (KMC) simulations have been instrumental in multiscale catalysis studies, enabling the elucidation of the complex dynamics of heterogeneous catalysts and the prediction of macroscopic performance metrics, such as activity and selectivity. However, the accessible length- and time-scales have been a limiting factor in such simulations. For instance, handling lattices containing millions of sites with “traditional” sequential KMC implementations is prohibitive owing to large memory requirements and long simulation times. We have recently established an approach for exact, distributed, lattice-based simulations of catalytic kinetics which couples the Time-Warp algorithm with the Graph-Theoretical KMC framework, enabling the handling of complex adsorbate lateral interactions and reaction events within large lattices. In this work, we develop a lattice-based variant of the Brusselator system, a prototype chemical oscillator pioneered by Prigogine and Lefever in the late 1960s, to benchmark and demonstrate our approach. This system can form spiral wave patterns, which would be computationally intractable with sequential KMC, while our distributed KMC approach can simulate such patterns 16 and 36 times faster with 625 and 1600 processors, respectively. The medium- and large-scale benchmarks thus conducted demonstrate the robustness of the approach, and reveal computational bottlenecks that could be targeted in further development efforts.
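    For readers unfamiliar with the Brusselator, the toy snippet below runs a standard well-mixed Gillespie simulation of its four reactions (∅→X, X→Y, 2X+Y→3X, X→∅); the paper's lattice-based variant and Time-Warp parallelization are far more involved. The rate constants and initial counts here are illustrative assumptions, not the paper's parameters.

```python
import random

def brusselator_ssa(x, y, a=5.0, b=10.0, c=5e-5, t_end=1.0, seed=1):
    """Well-mixed stochastic Brusselator via Gillespie's direct method.
    Returns the trajectory as a list of (time, X count, Y count)."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, x, y)]
    while t < t_end:
        # Propensities of the four Brusselator reactions.
        props = [
            a,                    # 0 -> X        (feed)
            b * x,                # X -> Y        (conversion)
            c * x * (x - 1) * y,  # 2X + Y -> 3X  (autocatalysis)
            x,                    # X -> 0        (decay)
        ]
        total = sum(props)
        if total == 0.0:
            break
        t += rng.expovariate(total)  # waiting time to the next event
        r = rng.random() * total     # choose which reaction fires
        if r < props[0]:
            x += 1
        elif r < props[0] + props[1]:
            x, y = x - 1, y + 1
        elif r < props[0] + props[1] + props[2]:
            x, y = x + 1, y - 1
        else:
            x -= 1
        traj.append((t, x, y))
    return traj

traj = brusselator_ssa(x=100, y=100)
```

    With suitable parameters the X and Y counts oscillate; on a lattice, local copies of this mechanism coupled by diffusion are what produce the spiral wave patterns mentioned above.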

    Coupling the time-warp algorithm with the graph-theoretical kinetic Monte Carlo framework for distributed simulations of heterogeneous catalysts

    Despite the successful and ever widening adoption of kinetic Monte Carlo (KMC) simulations in the area of surface science and heterogeneous catalysis, the accessible length scales are still limited by the inherently sequential nature of the KMC framework. Simulating long-range surface phenomena, such as catalytic reconstruction and pattern formation, requires consideration of large surfaces/lattices, at the μm scale and beyond. However, handling such lattices with the sequential KMC framework is extremely challenging due to the heavy memory footprint and computational demand. The Time-Warp algorithm proposed by Jefferson [ACM Trans. Program. Lang. Syst., 1985, 7: 404-425] offers a way to enable distributed parallelization of discrete event simulations. Thus, to enable high-fidelity simulations of challenging systems in heterogeneous catalysis, we have coupled the Time-Warp algorithm with the Graph-Theoretical KMC framework [J. Chem. Phys., 134(21): 214115; J. Chem. Phys., 139(22): 224706] and implemented the approach in the general-purpose KMC code Zacros. We have further developed a “parallel-emulation” serial algorithm, which produces identical results to those obtained from the distributed runs (with the Time-Warp algorithm), thereby validating the correctness of our implementation. These advancements make Zacros the first-of-its-kind general-purpose KMC code with distributed computing capabilities, thereby opening up opportunities for detailed meso-scale studies of heterogeneous catalysts and closer-than-ever comparisons of theory with experiments.

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta. (10 pages plus author list, 21 pages total, 8 figures, 1 table; final version published in the European Physical Journal.)

    Soil surface temperatures reveal moderation of the urban heat island effect by trees and shrubs

    Urban areas are major contributors to air pollution and climate change, causing impacts on human health that are amplified by the microclimatological effects of buildings and grey infrastructure through the urban heat island (UHI) effect. Urban greenspaces may be important in reducing surface temperature extremes, but their effects have not been investigated at a city-wide scale. Across a midsized UK city we buried temperature loggers at the surface of greenspace soils at 100 sites, stratified by proximity to the city centre, vegetation cover and land-use. Mean daily soil surface temperature over 11 months increased by 0.6 °C over the 5 km from the city outskirts to the centre. Trees and shrubs in non-domestic greenspace reduced mean maximum daily soil surface temperatures in the summer by 5.7 °C compared to herbaceous vegetation, but tended to maintain slightly higher temperatures in winter. Trees in domestic gardens, which tend to be smaller, were less effective at reducing summer soil surface temperatures. Our findings reveal that the UHI affects soil temperatures at a city-wide scale, and that, by moderating urban soil surface temperature extremes, trees and shrubs may help to reduce the adverse impacts of urbanization on microclimate, soil processes and human health.

    Amyloid imaging in the differential diagnosis of dementia: review and potential clinical applications

    In the past decade, positron emission tomography (PET) with carbon-11-labeled Pittsburgh Compound B (PIB) has revolutionized the neuroimaging of aging and dementia by enabling in vivo detection of amyloid plaques, a core pathologic feature of Alzheimer's disease (AD). Studies suggest that PIB-PET is sensitive for AD pathology, can distinguish AD from non-AD dementia (for example, frontotemporal lobar degeneration), and can help determine whether mild cognitive impairment is due to AD. Although the short half-life of the carbon-11 radiolabel has thus far limited the use of PIB to research, a second generation of tracers labeled with fluorine-18 has made it possible for amyloid PET to enter the clinical era. In the present review, we summarize the literature on amyloid imaging in a range of neurodegenerative conditions. We focus on potential clinical applications of amyloid PET and its role in the differential diagnosis of dementia. We suggest that amyloid imaging will be particularly useful in the evaluation of mildly affected, clinically atypical or early age-at-onset patients, and illustrate this with case vignettes from our practice. We emphasize that amyloid imaging should supplement (not replace) a detailed clinical evaluation. We caution against screening asymptomatic individuals, and discuss the limited positive predictive value in older populations. Finally, we review limitations and unresolved questions related to this exciting new technique.

    A century of trends in adult human height


    The CMS Statistical Analysis and Combination Tool: Combine

    This paper describes the Combine software package used for statistical analyses by the CMS Collaboration. The package, originally designed to perform searches for a Higgs boson and the combined analysis of those searches, has evolved to become the statistical analysis tool presently used in the majority of measurements and searches performed by the CMS Collaboration. It is not specific to the CMS experiment, and this paper is intended to serve as a reference for users outside of the CMS Collaboration, providing an outline of the most salient features and capabilities. Readers are provided with the possibility to run Combine and reproduce examples provided in this paper using a publicly available container image. Since the package is constantly evolving to meet the demands of ever-increasing data sets and analysis sophistication, this paper cannot cover all details of Combine. However, the online documentation referenced within this paper provides an up-to-date and complete user guide. Funding: CERN (European Organization for Nuclear Research); STFC (United Kingdom); the Marie Curie programme, the European Research Council and Horizon 2020 (grant/contract Nos. 675440, 724704, 752730, 758316, 765710, 824093, 101115353, 101002207) and COST Action CA16108 (European Union); the Leventis Foundation; the Alfred P. Sloan Foundation.

    Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service

    A preprint version of the article is available at arXiv:2402.15366v2 [physics.ins-det], https://arxiv.org/abs/2402.15366 (replaced with the published version; journal reference and DOI added). All figures and tables can be found at https://cms-results.web.cern.ch/cms-results/public-results/publications/MLG-23-001 (CMS Public Pages). Report numbers: CMS-MLG-23-001, CERN-EP-2023-303. Data availability: no datasets were generated or analyzed during the current study. Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables high coprocessor utilization and the portability to run workflows on different types of coprocessors. SCOAP3. Open access funding provided by CERN (European Organization for Nuclear Research).
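    The as-a-service pattern described above can be illustrated with a toy sketch: the "main workflow" keeps doing CPU-side work while inference requests are served asynchronously by separate workers, standing in for a remote coprocessor server. The `fake_inference` function and all names here are hypothetical placeholders, not SONIC's actual API or protocol.

```python
from concurrent.futures import ThreadPoolExecutor

def fake_inference(event):
    # Placeholder for a GPU-accelerated ML model evaluation.
    return sum(event) / len(event)

def process_events(events):
    """Run a toy workflow: offload 'inference' to worker threads (the
    stand-in 'service'), overlap it with CPU-side processing, then join."""
    results = []
    with ThreadPoolExecutor(max_workers=2) as server:
        # Submit all inference requests without blocking the main loop.
        futures = [server.submit(fake_inference, ev) for ev in events]
        for ev, fut in zip(events, futures):
            cpu_part = max(ev)  # CPU-side work proceeds in the meantime.
            results.append((cpu_part, fut.result()))
    return results

print(process_events([[1, 2, 3], [4, 5, 6]]))  # → [(3, 2.0), (6, 5.0)]
```

    The point of the pattern, as in the paper, is that the client workflow is agnostic to where and on what hardware the inference actually runs: the same code works whether the "server" is a local thread pool, a local GPU, or a remote inference service.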

    Performance of the ATLAS Trigger System in 2010

    Proton-proton collisions at sqrt{s} = 7 TeV and heavy ion collisions at sqrt{s_NN} = 2.76 TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch crossing rate of 40 MHz, and the ATLAS trigger system is designed to record approximately 200 of these per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010, and the performance of the trigger system components and selections based on the 2010 collision data are presented. A brief outline of plans for the trigger system in 2011 is presented.