    Quantitative Stability of Variational Systems: I. The Epigraphical Distance

    This paper proposes a global measure for the distance between the elements of a variational system (a parametrized family of optimization problems).
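
    One standard way to make this precise (a sketch, assuming the Attouch-Wets framework associated with this series; the paper's exact normalization may differ) is the $\rho$-epi-distance between proper lower semicontinuous functions $f$ and $g$:

        $d_\rho(f,g) = \max\{ e_\rho(\mathrm{epi}\,f, \mathrm{epi}\,g),\; e_\rho(\mathrm{epi}\,g, \mathrm{epi}\,f) \}$, where $e_\rho(C,D) = \sup_{z \in C,\, \|z\| \le \rho} \mathrm{dist}(z,D)$,

    so that two problems are close when their epigraphs are close in a truncated Hausdorff sense.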

    Quantitative Stability of Variational Systems: II. A Framework for Nonlinear Conditioning

    It is shown that for well-conditioned problems, (local) optima are Hölderian with respect to the epi-distance.
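
    Schematically, and with constants left unspecified (an illustrative reading of the abstract, not a quoted theorem), well-conditioning yields an estimate of the form

        $\| \bar{x}_g - \bar{x}_f \| \le C \, d_\rho(f,g)^{\theta}$ for some $\theta \in (0,1]$,

    where $\bar{x}_f$ and $\bar{x}_g$ are (local) optima of the original and perturbed problems.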

    Approximation and Convergence in Nonlinear Optimization

    We show that the theory of epi-convergence, originally developed to study approximation techniques, is also useful in the analysis of the convergence properties of algorithmic procedures for nonlinear optimization problems.
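
    For reference, the standard definition (stated here on a metric space; the paper's setting may be more general): $f_n$ epi-converges to $f$ when, for every $x$,

        $\liminf_{n \to \infty} f_n(x_n) \ge f(x)$ for every sequence $x_n \to x$, and $\limsup_{n \to \infty} f_n(x_n) \le f(x)$ for some sequence $x_n \to x$.

    Under suitable compactness, epi-convergence yields convergence of infimal values and of (sub)sequences of minimizers, which is what makes it relevant to both approximation schemes and algorithms.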

    A Convergence of Bivariate Functions aimed at the Convergence of Saddle Functions

    Epi/hypo-convergence is introduced from a variational viewpoint. The known topological properties are reviewed and extended. Finally, it is shown that the (partial) Legendre-Fenchel transform is bicontinuous with respect to the topology induced by epi/hypo-convergence on the space of convex-concave bivariate functions.
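
    One common convention for the partial transform (a sketch; the paper's normalization may differ): for $K(x,y)$ convex in $x$ and concave in $y$, set

        $F(x, y^*) = \sup_y \big[ \langle y, y^* \rangle + K(x,y) \big]$,

    which is jointly convex in $(x, y^*)$; bicontinuity then says that epi/hypo-convergence of the saddle functions $K_n$ corresponds to epi-convergence of the associated convex functions $F_n$.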

    Damage as Gamma-limit of microfractures in anti-plane linearized elasticity

    A homogenization result is given for a material having brittle inclusions arranged in a periodic structure. According to the relation between the softness parameter and the size of the microstructure, three different limit models are deduced via Gamma-convergence. In particular, damage is obtained as the limit of periodically distributed microfractures.

    Pointwise Sum of two Maximal Monotone Operators

    The primary goal of this paper is to shed some light on the maximality of the pointwise sum of two maximal monotone operators. A further aim is to extend some recent results of Attouch, Moudafi and Riahi on the graph-convergence of maximal monotone operators to the more general setting of reflexive Banach spaces. In addition, we present some conditions which imply the uniform Brézis-Crandall-Pazy condition. As a consequence, we then present conditions which ensure the Mosco epi-convergence of the sum of convex proper lower semicontinuous functions.
    ∗ This research was partially supported, for the first and last authors, by NATO grant CRG 960360, and, for the second author, by the Action Intégrée 95/0849 between the universities of Marrakech, Rabat and Montpellier.
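
    For context, the classical benchmark here is Rockafellar's theorem (a standard fact, recalled for orientation): if $X$ is a reflexive Banach space, $A, B : X \rightrightarrows X^*$ are maximal monotone, and $\mathrm{int}(\mathrm{dom}\,A) \cap \mathrm{dom}\,B \neq \emptyset$, then $A + B$ is maximal monotone; conditions such as those above aim to relax this interiority requirement.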

    Closedness type regularity conditions for surjectivity results involving the sum of two maximal monotone operators

    In this note we provide regularity conditions of closedness type which guarantee some surjectivity results concerning the sum of two maximal monotone operators by using representative functions. The first regularity condition we give guarantees the surjectivity of the monotone operator $S(\cdot + p) + T(\cdot)$, where $p \in X$ and $S$ and $T$ are maximal monotone operators on the reflexive Banach space $X$. Then, this is used to obtain sufficient conditions for the surjectivity of $S + T$ and for the situation when $0$ belongs to the range of $S + T$. Several special cases are discussed, some of them delivering interesting byproducts.
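
    The representative functions alluded to are typically of Fitzpatrick type (standard definition, recalled for orientation): for a maximal monotone operator $T$,

        $\varphi_T(x, x^*) = \sup_{(y, y^*) \in \mathrm{gra}\,T} \big( \langle x, y^* \rangle + \langle y, x^* \rangle - \langle y, y^* \rangle \big)$,

    a convex lower semicontinuous function satisfying $\varphi_T(x, x^*) \ge \langle x, x^* \rangle$ everywhere, with equality exactly on $\mathrm{gra}\,T$; closedness-type conditions are then imposed on sets built from such functions.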

    Elastic-Net Regularization: Error estimates and Active Set Methods

    This paper investigates theoretical properties and efficient numerical algorithms for the so-called elastic-net regularization originating from statistics, which simultaneously enforces $\ell^1$ and $\ell^2$ regularization. The stability of the minimizer and its consistency are studied, and convergence rates for both a priori and a posteriori parameter choice rules are established. Two iterative numerical algorithms of active set type are proposed, and their convergence properties are discussed. Numerical results are presented to illustrate the features of the functional and algorithms.
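
    As a concrete illustration, here is a minimal proximal-gradient sketch for the elastic-net functional (a generic baseline for orientation, not the active set methods proposed in the paper; all names and parameter choices are illustrative):

        import numpy as np

        def prox_elastic_net(v, t, alpha, beta):
            # prox of t*(alpha*||.||_1 + (beta/2)*||.||_2^2):
            # soft-thresholding followed by a uniform shrinkage
            return np.sign(v) * np.maximum(np.abs(v) - t * alpha, 0.0) / (1.0 + t * beta)

        def elastic_net(A, b, alpha, beta, iters=500):
            # minimize 0.5*||Ax - b||^2 + alpha*||x||_1 + (beta/2)*||x||_2^2
            x = np.zeros(A.shape[1])
            t = 1.0 / np.linalg.norm(A, 2) ** 2   # step 1/L, L = Lipschitz constant of the gradient
            for _ in range(iters):
                grad = A.T @ (A @ x - b)          # gradient of the smooth part
                x = prox_elastic_net(x - t * grad, t, alpha, beta)
            return x

    The $\ell^2$ term makes the functional strongly convex, which is the structural feature behind the stability and convergence-rate results described above.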

    From error bounds to the complexity of first-order descent methods for convex functions

    This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage thresholding algorithm (ISTA), for which we show that the complexity bound is of the form $O(q^k)$ where the constituents of the bound only depend on error bound constants obtained for an arbitrary least squares objective with $\ell^1$ regularization.
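
    For orientation, the ISTA iteration to which the $O(q^k)$ bound applies (a generic sketch of the standard algorithm, with illustrative names; the paper's contribution is the complexity analysis, not the scheme itself):

        import numpy as np

        def ista(A, b, lam, iters=1000):
            # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1
            x = np.zeros(A.shape[1])
            t = 1.0 / np.linalg.norm(A, 2) ** 2  # step 1/L with L = ||A||^2
            for _ in range(iters):
                v = x - t * A.T @ (A @ x - b)    # forward (gradient) step
                x = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # soft-thresholding
            return x

    In this setting the linear rate $O(q^k)$ comes from the error bound (equivalently, a KL inequality with square-root desingularizing function) satisfied by the $\ell^1$-regularized least-squares objective.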