
    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

    We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in continuous domains. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20, 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time. Comment: Genetic and Evolutionary Computation Conference (GECCO'2014), 2014.
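    The limited-memory sampling step is the heart of the method. Below is a minimal Python sketch of that idea, assuming a set of stored direction vectors and illustrative constants (the paper's exact learning rates and update weights are not reproduced here):

```python
import numpy as np

def sample_candidate(mean, sigma, directions, a=0.9, rng=None):
    """Sketch of limited-memory sampling in the spirit of LM-CMA-ES.

    Rather than storing an n x n covariance matrix, only m direction
    vectors are kept; a product of rank-one Cholesky-factor updates is
    applied to an isotropic Gaussian sample, giving O(m*n) time and
    memory. The constant `a` and the weight `b` are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(mean.shape[0])  # isotropic Gaussian sample
    for v in directions:                    # m stored direction vectors
        v = v / np.linalg.norm(v)
        b = np.sqrt(1.0 - a ** 2)           # keeps the factor well conditioned
        z = a * z + b * v * (v @ z)         # rank-one Cholesky-factor update
    return mean + sigma * z
```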

    Noisy Optimization: Convergence with a Fixed Number of Resamplings

    It is known that evolution strategies in continuous domains might not converge in the presence of noise. It is also known that, under mild assumptions, and using an increasing number of resamplings, one can mitigate the effect of additive noise and recover convergence. We show new sufficient conditions for the convergence of an evolutionary algorithm with a constant number of resamplings; in particular, we get fast rates (log-linear convergence) provided that the variance decreases around the optimum slightly faster than in the so-called multiplicative noise model. Keywords: Noisy optimization, evolutionary algorithm, theory. Comment: EvoStar, 2014.
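    The resampling scheme itself is simple to state in code. The sketch below shows a constant-resampling evaluation together with a toy noisy sphere whose noise standard deviation decays slightly faster than the function value, mimicking the paper's "slightly faster than multiplicative" regime (the exponent 1.1 is an illustrative choice, not the paper's condition):

```python
import numpy as np

def resampled_value(f_noisy, x, k=10):
    """Average k noisy evaluations of f at x (a fixed number of resamplings).

    Averaging reduces the noise standard deviation by sqrt(k); the paper's
    point is that a *constant* k already yields log-linear convergence
    when the noise shrinks fast enough near the optimum.
    """
    return np.mean([f_noisy(x) for _ in range(k)])

def noisy_sphere(x, rng=np.random.default_rng(0)):
    """Toy noisy sphere: noise std ~ f(x)^1.1, i.e. decaying slightly
    faster than in the multiplicative model (std ~ f(x))."""
    fx = float(np.dot(x, x))
    return fx + rng.normal(scale=fx ** 1.1)
```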

    Analysis of Different Types of Regret in Continuous Noisy Optimization

    The performance measure of an algorithm is a crucial part of its analysis. Performance can be assessed by studying the convergence rate of the algorithm in question: one studies some (hopefully convergent) sequence that measures how "good" the approximated optimum is compared to the real optimum. The concept of Regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept is also used in the framework of optimization algorithms, sometimes under other names or without a specific name, and the numerical evaluation of the convergence rate of noisy optimization algorithms often involves approximations of regret. We discuss here two types of approximations of Simple Regret used in practice for the evaluation of algorithms for noisy optimization, using specific algorithms of different nature and the noisy sphere function to show the following results. The approximation of Simple Regret used in some optimization testbeds, termed here Approximate Simple Regret, fails to estimate the Simple Regret convergence rate. We also discuss a recent new approximation of Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages. Comment: Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States.
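    The distinction between the two regret measures is easy to make concrete. A minimal sketch, assuming access to the noise-free objective for the true Simple Regret (function names are ours, not the paper's; the Robust Simple Regret variant is not shown):

```python
import numpy as np

def simple_regret(x_rec, f_true, f_opt=0.0):
    """True Simple Regret: noise-free value of the recommended point,
    minus the optimal value."""
    return f_true(x_rec) - f_opt

def approximate_simple_regret(noisy_values_at_rec):
    """Approximate Simple Regret as used in some testbeds: noisy
    evaluations at the recommended point stand in for the true value,
    so the residual noise can mask the actual convergence rate."""
    return float(np.mean(noisy_values_at_rec))
```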

    Annealing schedule from population dynamics

    We introduce a dynamical annealing schedule for population-based optimization algorithms with mutation. On the basis of a statistical mechanics formulation of the population dynamics, the mutation rate adapts to a value maximizing expected rewards at each time step. The mutation rate is thereby eliminated as a free parameter of the algorithm. Comment: 6 pages RevTeX, 4 figures PostScript; to be published in Phys. Rev.
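    A toy version of such a schedule can be written down directly: at each step, estimate the expected fitness gain for a grid of candidate mutation rates and keep the best. This is a hedged sketch of the general idea, not the paper's statistical-mechanics derivation; the rate grid, Gaussian mutation, and Monte-Carlo estimate are illustrative choices:

```python
import numpy as np

def adapt_mutation_rate(population, fitness, rates, n_trials=200,
                        rng=np.random.default_rng(1)):
    """Pick the mutation rate maximizing the estimated expected fitness
    gain of a mutated individual. `population` is an array of shape
    (pop_size, n); `fitness` maps a vector to a scalar to be maximized."""
    def expected_gain(rate):
        gains = []
        for _ in range(n_trials):
            i = rng.integers(len(population))
            child = population[i] + rate * rng.standard_normal(population.shape[1])
            gains.append(fitness(child) - fitness(population[i]))
        return np.mean(gains)
    return max(rates, key=expected_gain)
```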

    Analysis of the Hydrogen-rich Magnetic White Dwarfs in the SDSS

    We have calculated optical spectra of hydrogen-rich (DA) white dwarfs with magnetic field strengths between 1 MG and 1000 MG for temperatures between 7000 K and 50000 K. Through a least-squares minimization scheme with an evolutionary algorithm, we have analyzed the spectra of 114 magnetic DAs from the SDSS (95 previously published, 14 newly discovered within the SDSS, and five discovered by SEGUE). Since we were limited to a single spectrum for each object, we used only centered magnetic dipoles or dipoles shifted along the magnetic dipole axis. We also statistically investigated the distribution of magnetic-field strengths and geometries of our sample. Comment: to appear in the proceedings of the 16th European Workshop on White Dwarfs, Barcelona, 2008.

    Optimizing the Stark-decelerator beamline for the trapping of cold molecules using evolutionary strategies

    We demonstrate feedback-control optimization for the Stark deceleration and trapping of neutral polar molecules using evolutionary strategies. In a Stark-decelerator beamline, pulsed electric fields are used to decelerate OH radicals and subsequently store them in an electrostatic trap. The efficiency of the deceleration and trapping process is determined by the exact timings of the applied electric field pulses. Automated optimization of these timings yields a 40% increase in the number of trapped OH radicals. Comment: 7 pages, 4 figures (RevTeX); (v2) minor corrections; (v3) no changes to manuscript, but fix author list in arXiv abstract.
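    At its core, the automated optimization is an evolution strategy run against the experiment as a black box. A minimal sketch, assuming a callable measure_trapped(t) that returns the (noisy) trapped-molecule count for a vector of switching times; the step size, population size, and (1, lambda) selection are illustrative, not the experiment's settings:

```python
import numpy as np

def optimize_timings(measure_trapped, t0, sigma=1e-6, generations=50,
                     lam=10, rng=np.random.default_rng(2)):
    """(1, lambda)-ES sketch: perturb the current timing vector, measure
    each candidate on the apparatus, and keep the best-scoring one."""
    t = np.asarray(t0, dtype=float)
    for _ in range(generations):
        offspring = [t + sigma * rng.standard_normal(t.shape)
                     for _ in range(lam)]
        t = max(offspring, key=measure_trapped)
    return t
```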

    Algorithms (X,sigma,eta) : quasi-random mutations for Evolution Strategies

    Randomization is an efficient tool for global optimization. We define here a method which keeps: (i) the order 0 of evolutionary algorithms (no gradient); (ii) the stochastic aspect of evolutionary algorithms; (iii) the efficiency of so-called "low-dispersion" points; and which ensures, under mild assumptions, global convergence with a linear convergence rate. We use (i) sampling on a ball instead of Gaussian sampling (in a way inspired by trust regions), (ii) an original rule for step-size adaptation, and (iii) quasi-Monte-Carlo sampling (low-dispersion points) instead of Monte-Carlo sampling. We prove in this framework linear convergence rates (i) for global optimization and not only local optimization, and (ii) under very mild assumptions on the regularity of the function (existence of derivatives is not required). Though the main scope of this paper is theoretical, numerical experiments are made to back up the mathematical results. Algorithm XSE: quasi-random mutations for evolution strategies. A. Auger, M. Jebalia, O. Teytaud. Proceedings of EA'2005.
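    The two sampling ingredients, quasi-random points and sampling on a ball, combine naturally. A sketch assuming SciPy's Sobol generator as the low-dispersion source (the paper's exact low-dispersion construction may differ):

```python
import numpy as np
from scipy.stats import qmc, norm

def low_dispersion_ball_points(n_dim, n_points, seed=0):
    """Map a scrambled Sobol sequence from the unit cube to the unit
    ball via the Gaussian-direction / radius transform, yielding
    quasi-random mutation directions instead of Monte-Carlo ones."""
    sob = qmc.Sobol(d=n_dim + 1, scramble=True, seed=seed)
    u = np.clip(sob.random(n_points), 1e-12, 1 - 1e-12)
    directions = norm.ppf(u[:, :n_dim])              # quasi-random directions
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = u[:, n_dim] ** (1.0 / n_dim)             # uniform radius in the ball
    return directions * radii[:, None]

# a mutation is then: x_child = x_parent + step_size * point
```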

    Attraction and diffusion in nature-inspired optimization algorithms

    Nature-inspired algorithms usually use some form of attraction and diffusion as mechanisms for exploitation and exploration. In this paper, we investigate the role of attraction and diffusion in nature-inspired algorithms and the ways they control algorithm behaviour and performance. We highlight different implementations of attraction in algorithms such as the firefly algorithm, charged system search, and the gravitational search algorithm. We also analyze diffusion mechanisms, such as random walks, for exploration. It is clear that attraction can be an effective mechanism for enhancing exploitation, while diffusion is a common mechanism for exploration. Furthermore, we discuss the role of parameter tuning and parameter control in modern metaheuristic algorithms, and point out some key topics for further research.
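    The attraction/diffusion split can be seen in a single firefly-style update: a deterministic pull toward a brighter solution (exploitation) plus a random-walk term (exploration). A hedged sketch; parameter values are illustrative defaults, not tuned settings:

```python
import numpy as np

def firefly_step(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.1,
                 rng=np.random.default_rng(3)):
    """Move x_i toward a brighter x_j with distance-decaying attraction,
    plus a diffusion (random-walk) term for exploration."""
    r2 = np.sum((x_j - x_i) ** 2)
    attraction = beta0 * np.exp(-gamma * r2) * (x_j - x_i)  # decays with distance
    diffusion = alpha * rng.standard_normal(x_i.shape)      # random walk
    return x_i + attraction + diffusion
```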

    On the singular behaviour of scattering amplitudes in quantum field theory

    We analyse the singular behaviour of one-loop integrals and scattering amplitudes in the framework of the loop-tree duality approach. We show that there is a partial cancellation of singularities at the loop integrand level among the different components of the corresponding dual representation that can be interpreted in terms of causality. The remaining threshold and infrared singularities are restricted to a finite region of the loop momentum space, which is of the size of the external momenta and can be mapped to the phase-space of real corrections to cancel the soft and collinear divergences.

    Dynamical Models in Quantitative Genetics

    In this paper, the author investigates models in quantitative genetics and shows that, under quite reasonable assumptions, the dynamics can display rather counter-intuitive behavior. This research was conducted as part of the Dynamics of Macrosystems Feasibility Study in the System and Decision Sciences Program.