    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$'s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth function, our method generalizes it to the case of arbitrary $n$. Our method makes explicit use of the regularity of $F$ in the forward step, and the proximity operators of the $G_i$'s are applied in parallel in the backward step. This allows the generalized forward-backward algorithm to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of $F$. Examples on inverse problems in imaging demonstrate the advantage of the proposed method in comparison to other splitting algorithms.

    Comment: 24 pages, 4 figures.
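
    The iteration structure described above can be sketched in a few lines: one auxiliary variable per non-smooth term, a shared gradient step on $F$, and the proximity operators applied in parallel. The uniform weights, relaxation parameter, and step-size convention below follow common usage for this kind of scheme and are assumptions, not details taken from the abstract.

```python
import numpy as np

def generalized_forward_backward(grad_F, proxs, x0, gamma, lam=1.0, n_iter=100):
    """Sketch of generalized forward-backward splitting for F + sum_i G_i.
    grad_F(x) returns the gradient of the smooth term; proxs[i](v, t) returns
    prox_{t*G_i}(v).  One auxiliary variable z_i is kept per non-smooth term."""
    n = len(proxs)
    w = np.full(n, 1.0 / n)                  # weights, here uniform, summing to 1
    z = [x0.copy() for _ in range(n)]        # auxiliary variables
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_F(x)                        # forward step: gradient of F
        for i in range(n):                   # backward steps, parallelizable
            z[i] += lam * (proxs[i](2 * x - z[i] - gamma * g, gamma / w[i]) - x)
        x = sum(wi * zi for wi, zi in zip(w, z))
    return x
```

    With a single non-smooth term ($n = 1$, $w_1 = 1$, $\lambda = 1$) the update collapses to $x \leftarrow \mathrm{prox}_{\gamma G}(x - \gamma \nabla F(x))$, i.e. the ordinary forward-backward (proximal gradient) iteration.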

    Processing Stationary Noise: Model and Parameter Selection in Variational Methods

    Additive or multiplicative stationary noise has recently become an important issue in applied fields such as microscopy and satellite imaging. Relatively few works address the design of dedicated denoising methods compared to the usual white noise setting. We recently proposed a variational algorithm to address this issue. In this paper, we analyze this problem from a statistical point of view and then provide deterministic properties of variational formulations. In the first part of this work, we demonstrate that in many practical problems the noise can be well approximated by colored Gaussian noise. We provide a quantitative measure of the distance between a stationary process and the corresponding Gaussian process. In the second part, we focus on the Gaussian setting and analyze denoising methods which consist of minimizing the sum of a total variation term and an $\ell^2$ data fidelity term. While the constrained formulation of this problem makes it easy to tune the parameters, the Lagrangian formulation can be solved more efficiently since the problem is strongly convex. Our second contribution consists in providing analytical values of the regularization parameter that approximately satisfy Morozov's discrepancy principle.
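
    The Lagrangian formulation discussed above, $\min_u \lambda\,\mathrm{TV}(u) + \tfrac{1}{2}\|u - f\|^2$, can be illustrated with a short gradient-descent sketch on a smoothed total variation term. This is a generic illustration, not the authors' algorithm; the smoothing parameter, step-size rule, and periodic boundary handling are all simplifying assumptions.

```python
import numpy as np

def tv_l2_denoise(f, lam=0.1, eps=0.05, n_iter=300):
    """Gradient descent on lam*TV_eps(u) + 0.5*||u - f||^2, where TV_eps uses
    sqrt(|grad u|^2 + eps^2) to smooth the total variation.  Periodic
    boundaries (np.roll) keep the finite-difference gradient and divergence
    adjoint to each other."""
    tau = 1.0 / (1.0 + 8.0 * lam / eps)      # conservative step from a crude Lipschitz bound
    u = f.copy()
    for _ in range(n_iter):
        gx = np.roll(u, -1, axis=0) - u      # forward differences
        gy = np.roll(u, -1, axis=1) - u
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / norm, gy / norm
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u -= tau * ((u - f) - lam * div)     # descent step on the smooth surrogate
    return u
```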

    From error bounds to the complexity of first-order descent methods for convex functions

    This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form $O(q^k)$, where the constituents of the bound only depend on error bound constants obtained for an arbitrary least squares objective with $\ell^1$ regularization.
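
    For reference, the ISTA iteration analyzed above is just a gradient step on the least-squares term followed by soft thresholding. A minimal sketch (the step size $1/L$, regularization weight, and iteration count below are generic choices, not values from the paper):

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for 0.5*||Ax - b||^2 + lam*||x||_1,
    the l1-regularized least-squares setting in which the paper derives
    O(q^k) complexity bounds."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                # gradient of the smooth part
        z = x - g / L                        # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```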

    Templates for Convex Cone Problems with Applications to Sparse Signal Recovery

    This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, $\|Wx\|_1$ where $W$ is arbitrary, or a combination thereof. In addition, the paper also introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.

    Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
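
    The four-step recipe can be illustrated on basis pursuit ($\min \|x\|_1$ subject to $Ax = b$): adding a small $(\mu/2)\|x\|^2$ term makes the dual smooth, and the dual can then be maximized with a plain first-order method. The sketch below shows only this smoothing idea under those assumptions; TFOCS itself handles general cones and uses accelerated methods with continuation in $\mu$.

```python
import numpy as np

def smoothed_bp_dual(A, b, mu=1e-2, n_iter=2000):
    """Smoothed basis pursuit: min ||x||_1 + (mu/2)||x||^2 s.t. Ax = b.
    The inner minimization over x is a soft threshold, so the dual is smooth
    with gradient Lipschitz constant ||A||^2 / mu; we run gradient ascent."""
    step = mu / np.linalg.norm(A, 2) ** 2    # 1/L for the smoothed dual
    lam_dual = np.zeros(A.shape[0])
    for _ in range(n_iter):
        z = A.T @ lam_dual
        x = np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0) / mu  # primal recovery
        lam_dual = lam_dual + step * (b - A @ x)                # dual gradient ascent
    return x
```

    As $\mu \to 0$ the recovered $x$ approaches a basis pursuit solution, which is why the continuation scheme mentioned above drives $\mu$ downward across restarts.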

    Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011, is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented.

    Comment: Resubmitted to Physics in Medicine and Biology. Text has been modified according to referee comments, and typos in the equations have been corrected.
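
    The generic iteration being instantiated is short enough to state as code. Below is a sketch of the basic Chambolle-Pock (primal-dual) loop for $\min_x F(Kx) + G(x)$; the callback signatures are a convention chosen here for illustration, not an interface from the paper.

```python
import numpy as np

def chambolle_pock(K, Kt, prox_Fstar, prox_G, x0, y0, L, n_iter=200):
    """Basic Chambolle-Pock iteration for min_x F(Kx) + G(x).  K/Kt are the
    linear operator and its adjoint (as callables), prox_Fstar/prox_G the
    proximal maps of the convex conjugate F* and of G, and L >= ||K||."""
    tau = sigma = 1.0 / L                    # step sizes with sigma*tau*L^2 <= 1
    theta = 1.0                              # over-relaxation parameter
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(n_iter):
        y = prox_Fstar(y + sigma * K(x_bar), sigma)   # dual ascent step
        x_new = prox_G(x - tau * Kt(y), tau)          # primal descent step
        x_bar = x_new + theta * (x_new - x)           # extrapolation
        x = x_new
    return x
```

    For a least-squares data term $F(y) = \tfrac{1}{2}\|y - b\|^2$, for example, $\mathrm{prox}_{\sigma F^*}(v) = (v - \sigma b)/(1 + \sigma)$, and $G$ can encode constraints such as nonnegativity of the image; swapping these pieces is exactly the kind of prototyping the paper demonstrates for CT.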

    HIPAD - A Hybrid Interior-Point Alternating Direction algorithm for knowledge-based SVM and feature selection

    We consider classification tasks in the regime of scarce labeled training data in a high-dimensional feature space, where specific expert knowledge is also available. We propose a new hybrid optimization algorithm that solves the elastic-net support vector machine (SVM) through an alternating direction method of multipliers in the first phase, followed by an interior-point method for the classical SVM in the second phase. Both SVM formulations are adapted to knowledge incorporation. Our proposed algorithm addresses the challenges of automatic feature selection, high optimization accuracy, and algorithmic flexibility for taking advantage of prior knowledge. We demonstrate the effectiveness and efficiency of our algorithm and compare it with existing methods on a collection of synthetic and real-world data.

    Comment: Proceedings of the 8th Learning and Intelligent OptimizatioN (LION8) Conference, 2014.
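
    To make the objective concrete, here is a sketch of the elastic-net SVM (squared-hinge variant) solved with plain proximal gradient. This is a simple stand-in to show what is being minimized, not the paper's two-phase HIPAD scheme, and the regularization weights are arbitrary.

```python
import numpy as np

def elastic_net_svm(X, y, lam1=0.01, lam2=0.1, n_iter=500):
    """Proximal gradient on the elastic-net SVM objective
    mean(max(0, 1 - y*Xw)^2) + (lam2/2)*||w||^2 + lam1*||w||_1,
    with labels y in {-1, +1}.  The l1 proximal step is what produces the
    automatic feature selection mentioned in the abstract."""
    m, d = X.shape
    L = 2 * np.linalg.norm(X, 2) ** 2 / m + lam2   # gradient Lipschitz bound
    w = np.zeros(d)
    for _ in range(n_iter):
        s = np.maximum(0.0, 1.0 - y * (X @ w))      # squared-hinge slacks
        grad = -(2.0 / m) * X.T @ (s * y) + lam2 * w
        z = w - grad / L                            # forward step
        w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # l1 prox
    return w
```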

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to $\mu$-strongly convex objective functions with $L$-Lipschitz continuous gradient. In the framework of Nesterov, both $\mu$ and $L$ are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient $\mu$ and $L$ during the iterations. These mechanisms also allow the method to be applied to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results.

    Comment: 23 pages, 4 figures.
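
    For context, the textbook form of Nesterov's method for this problem class takes $\mu$ and $L$ as given and uses a constant momentum factor, as in the sketch below; removing exactly this knowledge requirement is the paper's contribution.

```python
import numpy as np

def nesterov_strongly_convex(grad, x0, mu, L, n_iter=200):
    """Nesterov's optimal first-order method for a mu-strongly convex function
    with L-Lipschitz gradient, in the constant-momentum form that assumes both
    constants are known in advance."""
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)                 # momentum factor for known mu, L
    x = x0.copy()
    y = x0.copy()
    for _ in range(n_iter):
        x_new = y - grad(y) / L              # gradient step from extrapolated point
        y = x_new + beta * (x_new - x)       # momentum/extrapolation step
        x = x_new
    return x
```

    Overestimating $L$ or underestimating $\mu$ keeps this scheme convergent but slows it down, which is why locally sufficient estimates, as proposed above, pay off on ill-conditioned problems.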

    NMR investigations of the interaction between the azo-dye sunset yellow and fluorophenol

    The interaction of small molecules with larger noncovalent assemblies is important across a wide range of disciplines. Here, we apply two complementary NMR spectroscopic methods to investigate the interaction of various fluorophenol isomers with sunset yellow. The latter molecule is known to form noncovalent aggregates in isotropic solution and to form liquid crystals at high concentrations. We utilize the unique fluorine-19 nucleus of the fluorophenol as a reporter of the interactions via changes in both the observed chemical shift and diffusion coefficients. The data are interpreted in terms of the indefinite self-association model, with simple modifications for the incorporation of a second species into an assembly. A change in association mode is tentatively assigned, whereby the fluorophenol binds end-on to the sunset yellow aggregates at low concentrations and inserts into the stacks at higher concentrations.
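
    In the indefinite (isodesmic) self-association model mentioned above, every stacking step is assigned the same equilibrium constant $K$, so the aggregate populations form a geometric series and the total dye concentration relates to the free monomer concentration $m$ by $c_{\mathrm{tot}} = m/(1 - Km)^2$. A small numerical sketch of this relation; the value of $K$ and the concentrations are chosen purely for illustration, not fitted parameters from the paper.

```python
def isodesmic_monomer(c_total, K, tol=1e-12):
    """Solve c_total = m / (1 - K*m)^2 for the free monomer concentration m
    under the indefinite (isodesmic) self-association model, by bisection on
    the interval (0, 1/K) where the left-hand side is increasing."""
    lo, hi = 0.0, min(c_total, (1.0 - 1e-9) / K)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - K * mid) ** 2 < c_total:
            lo = mid
        else:
            hi = mid
    return lo

# Fraction of dye present as free monomer at increasing total concentration
for c in (0.01, 0.1, 1.0):                   # mol/L, illustrative values
    m = isodesmic_monomer(c, K=50.0)         # hypothetical K in 1/M
    print(f"c_total = {c:5.2f} M   monomer fraction = {m / c:.3f}")
```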

    Compressed sensing imaging techniques for radio interferometry

    Radio interferometry probes astrophysical signals through incomplete and noisy Fourier measurements. The theory of compressed sensing demonstrates that such measurements may actually suffice for accurate reconstruction of sparse or compressible signals. We propose new generic imaging techniques based on convex optimization for global minimization problems defined in this context. The versatility of the framework notably allows the introduction of specific prior information on the signals, which offers the possibility of significant improvements in reconstruction relative to the standard local matching pursuit algorithm CLEAN used in radio astronomy. We illustrate the potential of the approach by studying reconstruction performance on simulations of two different kinds of signals observed with very generic interferometric configurations. The first kind is an intensity field of compact astrophysical objects. The second kind is the imprint of cosmic strings in the temperature field of the cosmic microwave background radiation, of particular interest for cosmology.

    Comment: 10 pages, 1 figure. Version 2 matches the version accepted for publication in MNRAS. Changes include: writing corrections, clarifications of arguments, a figure update, and a new subsection 4.1 commenting on the exact compliance of radio interferometric measurements with compressed sensing.
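
    The measurement model described above -- a sparse signal observed through a subset of its Fourier coefficients -- can be reproduced in a toy 1D experiment and solved with a basic $\ell^1$ reconstruction. All sizes and parameters below are arbitrary, and the paper's actual algorithms and interferometric sampling patterns are more sophisticated; this sketch only illustrates why incomplete Fourier data can suffice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                         # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
mask = np.sort(rng.choice(n, m, replace=False))     # sampled Fourier frequencies
y = np.fft.fft(x_true, norm="ortho")[mask]          # incomplete Fourier measurements

lam, x = 1e-3, np.zeros(n)
for _ in range(500):                         # ISTA; masked unitary FFT has norm <= 1
    r = np.fft.fft(x, norm="ortho")[mask] - y
    g = np.zeros(n, dtype=complex)
    g[mask] = r                              # zero-fill, then inverse FFT = adjoint
    x = x - np.fft.ifft(g, norm="ortho").real          # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # sparsity-promoting prox
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```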