18 research outputs found

    Multi-objective and multi-fidelity Bayesian optimization of laser-plasma acceleration

    Beam parameter optimization in accelerators involves multiple, sometimes competing objectives. Condensing these objectives into a single one inevitably biases the result towards particular outcomes that do not necessarily represent the best possible outcome for the operator. A more versatile approach is multi-objective optimization, which establishes the trade-off curve, or Pareto front, between the objectives. Here we present first results on multi-objective Bayesian optimization of a simulated laser-plasma accelerator. We find that multi-objective optimization is equal or even superior in performance to its single-objective counterparts, and that it is more resilient to different statistical descriptions of the objectives. As a second major result, we significantly reduce the computational cost of the optimization by choosing the resolution and box size of the simulations dynamically. This is relevant because, even with Bayesian statistics, such optimizations over a multi-dimensional search space may require hundreds or thousands of simulations. Our algorithm translates information gained from fast, low-resolution runs of lower fidelity to the high-resolution setting, thus requiring fewer simulations at the highest computational cost. The techniques demonstrated in this paper can be translated to many different use cases, both computational and experimental.
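    The dynamic choice of resolution and box size can be thought of as a continuous fidelity parameter that the optimizer selects together with the physics parameters. As a purely illustrative sketch (the function name, scalings and cost model below are assumptions, not taken from the paper), such a mapping might look like this:

```python
import numpy as np

def configure_simulation(x, fidelity):
    """Map a normalized fidelity in [0, 1] to concrete simulation settings.

    Low fidelity -> coarse grid and small box; high fidelity -> fine grid and
    full-size box. All scalings here are illustrative placeholders.
    """
    cells_per_dim = int(64 + fidelity * (512 - 64))   # grid resolution
    box_size_um = 20.0 + fidelity * (80.0 - 20.0)     # simulation box size
    # For an explicit PIC-like code, cost grows roughly with the number of
    # cells times the number of time steps (itself ~ resolution).
    relative_cost = (cells_per_dim / 64) ** 4
    return {"params": x, "cells": cells_per_dim,
            "box_um": box_size_um, "cost": relative_cost}

# Example: the optimizer proposes design parameters x together with a fidelity.
print(configure_simulation(x=np.array([0.3, 0.7]), fidelity=0.25))
```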

    Leveraging trust for joint multi-objective and multi-fidelity optimization

    In the pursuit of efficient optimization of expensive-to-evaluate systems, this paper investigates a novel approach to Bayesian multi-objective and multi-fidelity (MOMF) optimization. Traditional optimization methods, while effective, often incur prohibitively high costs in multi-dimensional optimizations of one or more objectives. Multi-fidelity approaches offer a potential remedy by utilizing multiple, less costly information sources, such as low-resolution approximations in numerical simulations. However, integrating these two strategies presents a significant challenge. We propose the use of a trust metric to facilitate the joint optimization of multiple objectives and data sources. Our methodology introduces a modified multi-objective (MO) optimization policy that incorporates the trust gain per evaluation cost as one of the objectives of a Pareto optimization problem. This modification enables simultaneous MOMF optimization, which proves effective in establishing the Pareto set and front at a fraction of the cost. Two specific methods of MOMF optimization are presented and compared: a holistic approach that selects the input parameters and the fidelity parameter jointly, and a sequential approach for benchmarking. Through benchmarks on synthetic test functions, our approach is shown to yield significant cost reductions of up to an order of magnitude compared to pure MO optimization. Furthermore, we find that joint optimization of the trust and objective domains outperforms addressing them sequentially. We validate our findings with the specific use case of optimizing particle-in-cell simulations of laser-plasma acceleration, highlighting the practical potential of our method in the Pareto optimization of highly expensive black-box functions. Implementation of the methods in existing Bayesian optimization frameworks is straightforward, and immediate extensions, e.g. to batch optimization, are possible. Given their ability to handle various continuous or discrete fidelity dimensions, these techniques are widely applicable to simulation challenges across scientific computing fields such as plasma physics and fluid dynamics.
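    The central mechanism described above is to treat the trust gained per unit of evaluation cost as an additional objective next to the physical ones, so that an ordinary Pareto-based acquisition policy trades it off automatically. A minimal sketch of that bookkeeping (the names and the trust proxy are illustrative assumptions, not the paper's code):

```python
import numpy as np

def augmented_objectives(physics_objectives, trust_gain, evaluation_cost):
    """Append trust gain per unit cost as an extra objective.

    physics_objectives : array of the actual objectives at a candidate point
    trust_gain         : scalar proxy for the expected improvement of the
                         surrogate model from this evaluation (illustrative)
    evaluation_cost    : cost of the simulation at the chosen fidelity
    """
    trust_per_cost = trust_gain / evaluation_cost
    return np.append(physics_objectives, trust_per_cost)

# Example: two physical objectives plus the trust objective form a
# three-dimensional vector that a Pareto-based policy can optimize jointly.
y = augmented_objectives(np.array([1.8, 0.42]), trust_gain=0.6, evaluation_cost=3.0)
print(y)  # -> [1.8, 0.42, 0.2]
```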

    Data-driven Science and Machine Learning Methods in Laser-Plasma Physics

    Laser-plasma physics has developed rapidly over the past few decades as high-power lasers have become both increasingly powerful and more widely available. Early experimental and numerical research in this field was restricted to single-shot experiments with limited parameter exploration. However, recent technological improvements make it possible to gather an increasing amount of data, both in experiments and in simulations. This has sparked interest in using advanced techniques from mathematics, statistics and computer science to deal with, and benefit from, big data. At the same time, sophisticated modeling techniques also provide new ways for researchers to deal effectively with situations in which only sparse amounts of data are available. This paper aims to present an overview of relevant machine learning methods with a focus on their applicability to laser-plasma physics, including its important sub-fields of laser-plasma acceleration and inertial confinement fusion.

    Tango Controls and data pipeline for petawatt laser experiments

    The Centre for Advanced Laser Applications in Garching, Germany, is home to the ATLAS-3000 multi-petawatt laser, dedicated to research on laser particle acceleration and its applications. A control system based on Tango Controls is implemented for both the laser and four experimental areas. The device server approach features high modularity which, in addition to the hardware control, enables a quick extension of the system and allows for automated acquisition of the laser parameters and experimental data for each laser shot. In this paper we present an overview of our implementation of the control system, as well as our advances in terms of experimental operation, online supervision and data processing. We also give an outlook on advanced experimental supervision and online data evaluation, where the data can be processed in a pipeline, which is being developed on the basis of this infrastructure.
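    To give a flavor of the device server approach (this is not the CALA implementation; the class, attribute and command names are hypothetical), a minimal Tango device server written with PyTango might look like this:

```python
from tango.server import Device, attribute, command, run

class ShotArchiver(Device):
    """Hypothetical device server that records laser parameters per shot."""

    def init_device(self):
        super().init_device()
        self._shots = []           # in-memory record of (shot_id, energy) pairs
        self._last_energy = 0.0

    @attribute(dtype=float, unit="J", doc="Pulse energy of the most recent shot")
    def last_pulse_energy(self):
        return self._last_energy

    @command(dtype_in=float)
    def record_shot(self, energy):
        """Called by the acquisition pipeline after every laser shot."""
        self._last_energy = energy
        self._shots.append((len(self._shots) + 1, energy))

if __name__ == "__main__":
    run((ShotArchiver,))
```

    In this style, each piece of hardware or diagnostic gets its own device server, which is what makes the system modular and quick to extend.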

    Efficacy Trials and Progress of HIV Vaccines


    Expected hypervolume improvement for simultaneous multi-objective and multi-fidelity optimization

    Bayesian optimization has proven to be an efficient method to optimize expensive-to-evaluate systems. However, depending on the cost of single observations, multi-dimensional optimizations of one or more objectives may still be prohibitively expensive. Multi-fidelity optimization remedies this issue by including multiple, cheaper information sources such as low-resolution approximations in numerical simulations. Acquisition functions for multi-fidelity optimization are typically based on exploration-heavy algorithms that are difficult to combine with optimization towards multiple objectives. Here we show that the expected hypervolume improvement policy can in many situations act as a suitable substitute. We incorporate the evaluation cost either via a two-step evaluation or within a single acquisition function with an additional fidelity-related objective. This permits simultaneous multi-objective and multi-fidelity optimization, which makes it possible to accurately establish the Pareto set and front at a fraction of the cost. Benchmarks show a cost reduction of an order of magnitude or more. Our method thus allows for Pareto optimization of extremely expensive black-box functions. The presented methods are simple and straightforward to implement in existing, optimized Bayesian optimization frameworks and can immediately be extended to batch optimization. The techniques can also be used to combine different continuous and/or discrete fidelity dimensions, which makes them particularly relevant for simulation problems in plasma physics, fluid dynamics and many other branches of scientific computing.
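    As a rough sketch of the single-acquisition-function variant, the snippet below appends a fidelity-related objective (here simply the negative cost) to two toy objectives and uses BoTorch's qExpectedHypervolumeImprovement to propose the next point. The toy functions, the cost objective and the specific calls assume a recent BoTorch version and are illustrative, not the authors' code:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.multi_objective import qExpectedHypervolumeImprovement
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    NondominatedPartitioning,
)
from botorch.optim import optimize_acqf

# Toy data: 2 design variables + 1 fidelity variable, two physical objectives
# plus a fidelity-related objective (negative cost), all to be maximized.
train_X = torch.rand(10, 3, dtype=torch.double)
f1 = torch.sin(3.0 * train_X[:, 0]) * train_X[:, 2]
f2 = torch.cos(2.0 * train_X[:, 1]) * train_X[:, 2]
neg_cost = -train_X[:, 2]                # cheaper evaluations score higher
train_Y = torch.stack([f1, f2, neg_cost], dim=-1)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

ref_point = torch.tensor([-1.0, -1.0, -1.0], dtype=torch.double)
acqf = qExpectedHypervolumeImprovement(
    model=model,
    ref_point=ref_point.tolist(),
    partitioning=NondominatedPartitioning(ref_point=ref_point, Y=train_Y),
)

bounds = torch.stack([torch.zeros(3), torch.ones(3)]).to(torch.double)
candidate, _ = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=5, raw_samples=64)
print(candidate)  # next (design, fidelity) point to evaluate
```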

    SS-Drop: A Novel Message Drop Policy to Enhance Buffer Management in Delay Tolerant Networks

    A challenged network is one in which traditional assumptions such as low data-transfer error rates, end-to-end connectivity, or short transmission delays do not hold. A wide range of application scenarios are associated with such networks. Delay-tolerant networking (DTN) is an approach that seeks to address the problems that hinder communication in disrupted networks. DTN works on a store-carry-and-forward mechanism: a node may store a message for a comparatively long time and carry it until a suitable forwarding opportunity appears. Storing messages over long delays requires a proper buffer management scheme to select a message for dropping upon buffer overflow. Every dropped message wastes the resources it has already consumed. The proposed solution is a size-based policy that determines an inception size for selecting the message to delete when the buffer overflows. The basic idea behind this scheme is that, by determining the exact buffer space requirement, one can select a message of an appropriate size to be discarded. Doing so avoids unnecessary message drops and removes bias towards messages of a particular size. The proposed scheme, Spontaneous Size Drop (SS-Drop), uses a simple but intelligent mechanism to determine the inception size for dropping a message upon buffer overflow. In simulations with the ONE (Opportunistic Network Environment) simulator, SS-Drop outperforms competing drop policies in terms of delivery ratio, achieving a delivery probability of 66.3%, and minimizes the overhead ratio up to 41.25%. SS-Drop also shows a prominent reduction in message drops and in the average buffer time.
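    The abstract does not spell out the exact selection rule, but a size-based drop policy in this spirit can be sketched as follows, assuming the smallest buffered message that frees enough space is chosen (a hypothetical reading, not the published SS-Drop algorithm):

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    size: int  # bytes

def select_message_to_drop(buffer, incoming_size, free_space):
    """Pick a message to drop when an incoming message does not fit.

    Illustrative size-based rule: drop the smallest buffered message that,
    once removed, frees enough space for the incoming message; fall back to
    the largest message if no single message is big enough.
    """
    needed = incoming_size - free_space
    if needed <= 0:
        return None                      # incoming message already fits
    big_enough = [m for m in buffer if m.size >= needed]
    if big_enough:
        return min(big_enough, key=lambda m: m.size)
    return max(buffer, key=lambda m: m.size)

# Example: three buffered messages, 200 bytes free, a 900-byte message arrives.
buffer = [Message("a", 300), Message("b", 800), Message("c", 1500)]
victim = select_message_to_drop(buffer, incoming_size=900, free_space=200)
print(victim.msg_id)  # -> "b", the smallest message that frees >= 700 bytes
```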