
    On Repetitive Scenario Design

    Repetitive Scenario Design (RSD) is a randomized approach to robust design based on iterating two phases: a standard scenario design phase that uses N scenarios (design samples), followed by a randomized feasibility phase that uses N_o test samples on the scenario solution. We give a full and exact probabilistic characterization of the number of iterations required by the RSD approach for returning a solution, as a function of N, N_o, and of the desired levels of probabilistic robustness in the solution. This novel approach broadens the applicability of the scenario technology, since the user is now presented with a clear tradeoff between the number N of design samples and the ensuing expected number of repetitions required by the RSD algorithm. The plain (one-shot) scenario design becomes just one of the possibilities, sitting at one extreme of the tradeoff curve, in which one insists on finding a solution in a single repetition: this comes at the cost of a possibly high N. Other possibilities along the tradeoff curve use lower N values, but possibly require more than one repetition.
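
    As a rough illustration of the two-phase loop described above, the sketch below alternates a scenario design step on N sampled constraints with an empirical feasibility test on N_o fresh samples, for a toy uncertain linear program. The toy problem, the acceptance rule based on the empirical violation rate, and all sample sizes are assumptions made for illustration, not the exact setup analysed in the paper.

```python
# Minimal sketch of a Repetitive Scenario Design (RSD)-style loop for a toy
# robust LP: maximize c'x subject to a(delta)'x <= 1 for uncertain delta.
# Problem data, sample sizes, and the acceptance rule are illustrative only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([1.0, 1.0])

def sample_constraint():
    """Draw one uncertain constraint a(delta)' x <= 1 (toy uncertainty model)."""
    delta = rng.normal(size=2)
    return np.array([1.0 + 0.1 * delta[0], 1.0 + 0.1 * delta[1]])

def scenario_design(N):
    """Phase 1: solve the scenario LP built from N sampled constraints."""
    A = np.array([sample_constraint() for _ in range(N)])
    res = linprog(-c, A_ub=A, b_ub=np.ones(N), bounds=[(0, 10), (0, 10)])
    return res.x

def feasibility_test(x, N_o, eps):
    """Phase 2: accept x if its empirical violation rate on N_o fresh samples is <= eps."""
    violations = sum(sample_constraint() @ x > 1.0 for _ in range(N_o))
    return violations / N_o <= eps

def rsd(N=50, N_o=500, eps=0.05, max_iter=100):
    for k in range(1, max_iter + 1):
        x = scenario_design(N)
        if feasibility_test(x, N_o, eps):
            return x, k                     # solution and repetitions used
    raise RuntimeError("no acceptable solution within max_iter repetitions")

x_star, iters = rsd()
print(f"accepted design {x_star} after {iters} repetition(s)")
```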

    Direct Data-Driven Portfolio Optimization with Guaranteed Shortfall Probability

    This paper proposes a novel methodology for optimal allocation of a portfolio of risky financial assets. Most existing methods that aim at compromising between portfolio performance (e.g., expected return) and its risk (e.g., volatility or shortfall probability) need some statistical model of the asset returns. This means that: (i) one needs to make rather strong assumptions on the market for eliciting a return distribution, and (ii) the parameters of this distribution need to be estimated somehow, which is quite a critical aspect, since the optimal portfolios will then depend on the way the parameters are estimated. Here we propose instead a direct, data-driven route to portfolio optimization that avoids both of the mentioned issues: the optimal portfolios are computed directly from historical data, by solving a sequence of convex optimization problems (typically, linear programs). More importantly, the resulting portfolios are theoretically backed by a guarantee that their expected shortfall is no larger than an a-priori assigned level. This result is obtained here assuming efficiency of the market, under no hypotheses on the shape of the joint distribution of the asset returns, which can remain unknown and need not be estimated.
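
    A minimal sketch of the data-driven flavour of this approach is given below: portfolio weights are computed directly from a matrix of historical returns by a single linear program that caps the empirical expected shortfall (CVaR), using the standard Rockafellar-Uryasev reformulation. The synthetic data, the CVaR level, and the shortfall cap are assumptions for illustration; the paper's actual sequence of LPs and its a-priori shortfall guarantee are not reproduced here.

```python
# Maximize average historical return subject to an empirical expected-shortfall
# (CVaR) constraint, written as a linear program over historical return data.
# This illustrates the "direct, data-driven" idea only; it is not the paper's
# algorithm, and the data below are synthetic.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
T, n = 250, 5                                  # T historical periods, n assets
R = rng.normal(0.0005, 0.01, size=(T, n))      # synthetic daily returns

beta = 0.95                                    # CVaR confidence level (assumed)
s_max = 0.02                                   # admissible expected shortfall (assumed)

w = cp.Variable(n)                             # portfolio weights
zeta = cp.Variable()                           # VaR auxiliary variable
u = cp.Variable(T, nonneg=True)                # per-period shortfall slacks

losses = -R @ w                                # per-period portfolio losses
cvar = zeta + cp.sum(u) / ((1 - beta) * T)     # Rockafellar-Uryasev CVaR

prob = cp.Problem(
    cp.Maximize(cp.sum(R @ w) / T),            # average historical return
    [u >= losses - zeta,                       # CVaR epigraph constraints
     cvar <= s_max,
     cp.sum(w) == 1, w >= 0])                  # long-only, fully invested
prob.solve()
print("weights:", np.round(w.value, 3), " empirical CVaR:", round(cvar.value, 4))
```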

    Robust Model Predictive Control via Scenario Optimization

    This paper discusses a novel probabilistic approach for the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding-horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbance belong to non-convex sets, and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result in this paper is related to the analysis of the closed-loop system under receding-horizon implementation of the scenario FHOCP, and essentially states that the devised control law guarantees constraint satisfaction at each step with some a-priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem. Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Automatic Control, with DOI: 10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of record will be available at http://ieeexplore.ieee.org
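
    The sketch below sets up a scenario FHOCP of the kind described above for a toy two-state linear system with an uncertain dynamics matrix and additive disturbances, and returns the first input of the optimal sequence as in a receding-horizon scheme. The model, horizon, scenario count, and constraint levels are illustrative assumptions; the paper's command selection rule and its closed-loop guarantees are not reproduced.

```python
# Scenario finite-horizon optimal control problem (FHOCP) for a toy uncertain
# linear system, solved with cvxpy; only the first input is applied, as in a
# receding-horizon scheme. All numbers are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
nx, nu, T, M = 2, 1, 8, 30                      # states, inputs, horizon, scenarios
B = np.array([[0.0], [1.0]])

def sample_dynamics():
    """One scenario: an uncertain A matrix and a disturbance sequence over the horizon."""
    a = 0.1 * rng.uniform(-1, 1)
    A = np.array([[1.0, 1.0], [a, 1.0]])
    w = 0.02 * rng.uniform(-1, 1, size=(T, nx))
    return A, w

def scenario_fhocp(x0):
    u = cp.Variable((T, nu))
    cost, constr = 0, [cp.abs(u) <= 1.0]        # input constraint
    for _ in range(M):
        A, w = sample_dynamics()
        x = x0
        for t in range(T):
            x = A @ x + B @ u[t] + w[t]         # propagate this scenario
            cost += cp.sum_squares(x) + 0.1 * cp.sum_squares(u[t])
            constr.append(cp.abs(x[0]) <= 5.0)  # state constraint per scenario
    cp.Problem(cp.Minimize(cost / M), constr).solve()
    return u.value[0]                           # receding horizon: apply first input

x0 = np.array([3.0, 0.0])
print("first control move:", scenario_fhocp(x0))
```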

    Stochastic model predictive control of LPV systems via scenario optimization

    A stochastic receding-horizon control approach for constrained Linear Parameter Varying (LPV) discrete-time systems is proposed in this paper. It is assumed that the time-varying parameters are stochastic in nature and that the system's matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. By using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, where only a finite number M of sampled predicted parameter trajectories ('scenarios') are considered. This problem is convex and its solution is a priori guaranteed to be probabilistically robust, up to a user-defined probability level p. The level p is linked to M by an analytic relationship, which establishes a tradeoff between computational complexity and robustness of the solution. Then, a receding-horizon strategy is presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result is to show that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically, or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
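
    The kind of analytic link between the number of scenarios M and the robustness level p mentioned above can be illustrated with the standard scenario bound for convex programs with d decision variables: the smallest M such that a binomial tail probability falls below a confidence parameter beta. The sketch below computes this number; the exact relationship used in the paper may differ.

```python
# Smallest number M of sampled scenarios such that, with confidence at least
# 1 - beta, the scenario solution of a convex program with d decision variables
# violates the constraints with probability at most eps = 1 - p. This is the
# standard Campi-Garatti bound, shown for illustration only.
from math import comb

def binomial_tail(M, d, eps):
    """P(at most d-1 successes) for a Binomial(M, eps) variable."""
    return sum(comb(M, i) * eps**i * (1 - eps)**(M - i) for i in range(d))

def min_scenarios(d, eps, beta):
    """Smallest M with binomial_tail(M, d, eps) <= beta."""
    M = d
    while binomial_tail(M, d, eps) > beta:
        M += 1
    return M

# Example: d = 20 decision variables, target p = 0.95 (eps = 0.05),
# confidence 1 - beta = 0.999.
print(min_scenarios(d=20, eps=0.05, beta=1e-3))
```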

    Lagrangian Duality in 3D SLAM: Verification Techniques and Optimal Solutions

    State-of-the-art techniques for simultaneous localization and mapping (SLAM) employ iterative nonlinear optimization methods to compute an estimate of the robot poses. While these techniques often work well in practice, they do not provide guarantees on the quality of the estimate. This paper shows that Lagrangian duality is a powerful tool to assess the quality of a given candidate solution. Our contribution is threefold. First, we discuss a revised formulation of the SLAM inference problem. We show that this formulation is probabilistically grounded and has the advantage of leading to an optimization problem with a quadratic objective. The second contribution is the derivation of the corresponding Lagrangian dual problem. The SLAM dual problem is a (convex) semidefinite program, which can be solved reliably and globally by off-the-shelf solvers. The third contribution is to discuss the relation between the original SLAM problem and its dual. We show that, from the dual problem, one can evaluate the quality (i.e., the suboptimality gap) of a candidate SLAM solution, and ultimately provide a certificate of optimality. Moreover, when the duality gap is zero, one can compute a guaranteed optimal SLAM solution from the dual problem, circumventing non-convex optimization. We present extensive (real and simulated) experiments supporting our claims and discuss practical relevance and open problems. Comment: 10 pages, 4 figures.
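
    The duality-gap certificate idea can be illustrated on a toy quadratic problem with unit-norm constraints, a one-dimensional caricature of the rotation structure arising in pose-graph SLAM: the Lagrangian dual is a small semidefinite program whose value lower-bounds the primal cost, and a candidate solution attaining that bound is certified globally optimal. The cost matrix and candidate below are illustrative assumptions, not the paper's SLAM formulation.

```python
# Duality-gap certificate for min x' Q x subject to x_i^2 = 1: the Lagrangian
# dual is an SDP, max sum(lambda) s.t. Q - diag(lambda) >= 0, whose value
# lower-bounds the primal. A zero gap certifies global optimality of x_hat.
import numpy as np
import cvxpy as cp

# Toy quadratic cost on three scalar unknowns with unit-norm constraints,
# a 1-D caricature of rotation synchronization in pose-graph SLAM.
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# Candidate primal solution (e.g. returned by a local iterative solver).
x_hat = np.array([1.0, 1.0, 1.0])
primal_cost = x_hat @ Q @ x_hat

# Lagrangian dual SDP.
lam = cp.Variable(3)
dual = cp.Problem(cp.Maximize(cp.sum(lam)),
                  [Q - cp.diag(lam) >> 0])
dual.solve()

gap = primal_cost - dual.value
print(f"primal {primal_cost:.4f}, dual bound {dual.value:.4f}, gap {gap:.4e}")
# A (numerically) zero gap certifies that x_hat is globally optimal.
```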

    A guaranteed-convergence framework for passivity enforcement of linear macromodels

    Passivity enforcement is a key step in the extraction of linear macromodels of electrical interconnects and packages for Signal and Power Integrity applications. Most state-of-the-art techniques for passivity enforcement are based on suboptimal or approximate formulations that do not guarantee convergence. We introduce in this paper a new rigorous framework that casts passivity enforcement as a convex non-smooth optimization problem. Thanks to convexity, we are able to prove convergence to the optimal solution within a finite number of steps. The effectiveness of this approach is demonstrated through various numerical examples.
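
    As context for what a "passivity violation" means in this setting, the sketch below sweeps the frequency axis of a toy scattering-type state-space macromodel and flags frequencies where the largest singular value of the transfer matrix exceeds one. Frequency sweeping is only a heuristic localization of violations (rigorous checks use Hamiltonian eigenvalues); the model is an assumed toy example, not one from the paper.

```python
# Locate passivity violations of a scattering-type macromodel (A, B, C, D):
# passivity requires sigma_max(H(jw)) <= 1 for all w, where
# H(jw) = C (jw I - A)^{-1} B + D. Toy model, heuristic frequency sweep.
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[0.8, 0.7]])
D = np.array([[0.3]])

def max_singular_value(w):
    """sigma_max of the transfer matrix H(j*w)."""
    H = C @ np.linalg.solve(1j * w * np.eye(2) - A, B) + D
    return np.linalg.svd(H, compute_uv=False)[0]

freqs = np.logspace(-2, 2, 400)
sigma = np.array([max_singular_value(w) for w in freqs])
violations = freqs[sigma > 1.0]
print("worst sigma_max:", sigma.max(),
      "| violation band:", (violations.min(), violations.max()) if violations.size else "none")
```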

    Subgradient Techniques for Passivity Enforcement of Linear Device and Interconnect Macromodels

    This paper presents a class of nonsmooth convex optimization methods for the passivity enforcement of reduced-order macromodels of electrical interconnects, packages, and linear passive devices. Model passivity can be lost during model extraction or identification from numerical field solutions or direct measurements. Nonpassive models may cause instabilities in transient system-level simulation; therefore, suitable postprocessing is necessary in order to eliminate any passivity violations. Differently from leading numerical schemes on the subject, passivity enforcement is formulated here as a direct frequency-domain $\mathcal{H}_\infty$ norm minimization through perturbation of the model state-space parameters. Since the dependence of this norm on the parameters is nonsmooth, but continuous and convex, we resort to the use of subdifferentials and subgradients, which are used to devise two different algorithms. We provide a theoretical proof of global optimality for the solution computed via both schemes. Numerical results confirm that these algorithms achieve the global optimum in a finite number of iterations within a prescribed accuracy level.
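
    The sketch below illustrates a plain subgradient iteration on the nonsmooth, convex-in-C peak-gain objective f(C) = max over omega of sigma_max(C (j*omega*I - A)^{-1} B + D), evaluated on a fixed frequency grid, using the standard subgradient of the largest singular value at the active frequency. Driving the sampled peak gain below one by perturbing C is only a caricature of the two algorithms analysed in the paper; the toy model and step-size rule are assumptions.

```python
# Subgradient descent on the peak gain f(C) over a frequency grid. At the peak
# frequency, with H = C G + D, G = (jw I - A)^{-1} B, and singular vectors
# u, v of sigma_max(H), a subgradient with respect to (real) C is Re(u (G v)^H).
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
D = np.array([[0.3]])
C = np.array([[0.8, 0.7]])                  # initial, non-passive toy model
freqs = np.logspace(-2, 2, 200)

def peak_gain_and_subgradient(C):
    """Return f(C) on the grid and a subgradient of f with respect to C."""
    best = (-np.inf, None)
    for w in freqs:
        G = np.linalg.solve(1j * w * np.eye(2) - A, B)     # (jw I - A)^{-1} B
        U, s, Vh = np.linalg.svd(C @ G + D)
        if s[0] > best[0]:
            u, v = U[:, 0:1], Vh.conj().T[:, 0:1]
            best = (s[0], np.real(u @ (G @ v).conj().T))   # d sigma_max / dC
    return best

step = 0.2
for k in range(50):                         # plain subgradient iteration
    f, g = peak_gain_and_subgradient(C)
    if f <= 1.0:
        break
    C = C - step / (k + 1) * g              # diminishing step size
print("final peak gain:", peak_gain_and_subgradient(C)[0])
```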