
    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in problems such as tomographic reconstruction and image deblurring. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In Nesterov's framework both μ and L are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient values of μ and L during the iterations; these mechanisms also allow the method to be applied to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare their performance. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as the theoretical results also suggest.
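
    As background, here is a minimal sketch of the constant-momentum variant of Nesterov's optimal method for a μ-strongly convex function with L-Lipschitz gradient, assuming μ and L are known; the paper's contribution is precisely to estimate these constants on the fly. All names and parameters are illustrative.

```python
import numpy as np

def nesterov_strongly_convex(grad_f, x0, mu, L, iters=500):
    """Nesterov's optimal method for a mu-strongly convex objective
    with L-Lipschitz continuous gradient, using the constant momentum
    coefficient beta = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)).
    Sketch only: the paper replaces the known (mu, L) with locally
    estimated values that are updated during the iterations."""
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = x0.copy()
    y = x0.copy()
    for _ in range(iters):
        x_new = y - grad_f(y) / L          # gradient step at the extrapolated point
        y = x_new + beta * (x_new - x)     # momentum extrapolation
        x = x_new
    return x
```

    For a quadratic f(x) = ½xᵀAx − bᵀx one can take grad_f = lambda x: A @ x - b, with mu and L the extreme eigenvalues of A; the iteration then converges linearly at a rate governed by 1 − √(mu/L), which is why good estimates of the two constants matter on ill-conditioned problems.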

    A Low-Cost Alternating Projection Approach for a Continuous Formulation of Convex and Cardinality Constrained Optimization

    Funding: Open access funding provided by FCT|FCCN (b-on). The first author was financially supported by the Serbian Ministry of Education, Science, and Technological Development and the Serbian Academy of Sciences and Arts, grant no. F10. The second author was financially supported by Fundação para a Ciência e a Tecnologia (FCT, the Portuguese Foundation for Science and Technology) under projects UIDB/MAT/00297/2020, UIDP/MAT/00297/2020 (Centro de Matemática e Aplicações), and UI/297/2020-5/2021. The third author was financially supported by FCT under projects UIDB/MAT/00297/2020 and UIDP/MAT/00297/2020 (Centro de Matemática e Aplicações).

    We consider convex constrained optimization problems that also include a cardinality constraint. In general, optimization problems with cardinality constraints are difficult mathematical programs, usually solved by global techniques from discrete optimization. We assume that the region defined by the convex constraints can be written as the intersection of a finite collection of convex sets onto each of which it is easy and inexpensive to project (e.g., boxes, hyperplanes, or half-spaces). Taking advantage of a recently developed continuous reformulation that relaxes the cardinality constraint, we propose a specialized penalty gradient projection scheme, combined with alternating projection ideas, to compute a solution candidate for these problems, i.e., a local (possibly non-global) solution. To illustrate the proposed algorithm, we focus on the standard mean-variance portfolio optimization problem in which one may invest in at most a pre-established number of assets. For these cardinality-constrained portfolio problems, we present a numerical study on a variety of data sets involving real-world capital market indices from major stock markets. In many cases, we observe that the proposed scheme converges to the global solution, and we illustrate its practical performance by producing the efficient frontiers for different values of the allowed number of assets.
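
    To make the "easy projections" ingredient concrete, below is a minimal sketch of cyclic (von Neumann-style) alternating projections onto a box and a hyperplane, the kinds of sets the abstract mentions; for a long-only portfolio, the budget constraint with weight bounds has exactly this form. This is only the inner building block, not the paper's full penalty gradient projection scheme, and all names are illustrative.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a @ x = b}."""
    return x - ((a @ x - b) / (a @ a)) * a

def alternating_projection(x, lo, hi, a, b, sweeps=100):
    """Cyclic projections onto two easy convex sets; for a nonempty
    intersection of convex sets the iterates converge to a point in
    the intersection (von Neumann / POCS).  For a long-only portfolio,
    a = ones, b = 1, lo = 0 recovers the budget simplex."""
    for _ in range(sweeps):
        x = project_hyperplane(project_box(x, lo, hi), a, b)
    return x
```

    Note that plain alternating projections return some feasible point, not necessarily the nearest one (that would require Dykstra's algorithm); the paper embeds such cheap projections inside a penalty gradient projection scheme rather than using them alone.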

    An augmented Lagrangian approach for cardinality constrained minimization applied to variable selection problems

    Funding: The first author was financially supported by the Provincial Secretariat for Higher Education and Scientific Research of Vojvodina, grant number 142-451-2593/2021-01/2. The second author was financially supported by Fundação para a Ciência e a Tecnologia (FCT, the Portuguese Foundation for Science and Technology) under projects UIDB/MAT/00297/2020, UIDP/MAT/00297/2020 (Centro de Matemática e Aplicações), and UI/297/2020-5/2021. The third author was financially supported by FCT under projects UIDB/MAT/00297/2020 and UIDP/MAT/00297/2020 (Centro de Matemática e Aplicações).

    To solve convex constrained minimization problems that also include a cardinality constraint, we propose an augmented Lagrangian scheme combined with alternating projection ideas. Optimization problems involving a cardinality constraint are NP-hard mathematical programs and typically very hard even to solve approximately. Our approach takes advantage of a recently developed and analyzed continuous formulation that relaxes the cardinality constraint. Based on that formulation, we solve a sequence of smooth convex constrained minimization problems using projected gradient-type methods. In our setting, the convex constraint region can be written as the intersection of a finite collection of convex sets onto which it is easy and inexpensive to project. We apply our approach to a variety of overdetermined and underdetermined constrained linear least-squares problems, with both synthetic and real data arising in variable selection, and demonstrate its effectiveness.
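
    A minimal sketch of the generic augmented Lagrangian pattern the abstract describes: an equality constraint h(x) = 0 (standing in here for the relaxed cardinality condition) is moved into the objective with multipliers and a penalty, and each smooth subproblem is solved by projected gradient steps onto the easy convex region. The names, the fixed step size, and the fixed penalty are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_jac, project, x,
                         rho=10.0, outer=20, inner=200, lr=1e-2):
    """Augmented Lagrangian sketch: minimize
        L_rho(x, lam) = f(x) + lam @ h(x) + (rho/2) * ||h(x)||^2
    over the easy convex region by projected gradient, then update
    the multipliers with a first-order rule."""
    lam = np.zeros_like(h(x))
    for _ in range(outer):
        for _ in range(inner):
            # gradient of L_rho with respect to x
            g = f_grad(x) + h_jac(x).T @ (lam + rho * h(x))
            x = project(x - lr * g)
        lam = lam + rho * h(x)              # first-order multiplier update
    return x
```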

    Derivative-free separable quadratic modeling and cubic regularization for unconstrained optimization

    Funding: Open access funding provided by FCT|FCCN (b-on). The first and second authors are funded by national funds through FCT - Fundação para a Ciência e a Tecnologia, I.P., under projects PTDC/MAT-APL/28400/2017, UIDP/MAT/00297/2020, and UIDB/MAT/00297/2020 (Center for Mathematics and Applications). The third author is funded by national funds through FCT under projects CEECIND/02211/2017, UIDP/MAT/00297/2020, and UIDB/MAT/00297/2020 (Center for Mathematics and Applications).

    We present a derivative-free separable quadratic modeling and cubic regularization technique for solving smooth unconstrained minimization problems. The derivative-free approach builds a quadratic model by numerical interpolation, or by a minimum Frobenius norm approach when the number of available points does not allow a complete quadratic model to be built. This model plays a key role in generating approximations of the gradient vector and Hessian matrix of the objective function at every iteration. We add a specialized cubic regularization strategy, which exploits separability, to minimize the quadratic model at each iteration. We discuss convergence results, including worst-case complexity, of the proposed schemes to first-order stationary points. Preliminary numerical results illustrate the robustness of the specialized separable cubic algorithm.
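
    The separability the abstract exploits means the cubic-regularized model decouples into one-dimensional cubics with closed-form minimizers. Below is a sketch of that scalar subproblem for a diagonal quadratic model; the coefficients g and b would come from the paper's interpolation model, and the simplifications noted in the comments are assumptions of this sketch.

```python
import numpy as np

def separable_cubic_step(g, b, sigma):
    """Coordinate-wise minimizer of the separable cubic model
        m(s) = sum_i ( g[i]*s[i] + 0.5*b[i]*s[i]**2 + (sigma/3)*|s[i]|**3 ).
    For sigma > 0, each one-dimensional piece has a stationary point on
    the side opposite to g[i], solving sigma*t**2 + b[i]*t - |g[i]| = 0
    for t = |s[i]| >= 0.  Simplifications in this sketch: when
    g[i] == 0 we return s[i] = 0, and for strongly negative b[i] the
    competing stationary point on the same side as g[i] is not checked."""
    t = (-b + np.sqrt(b * b + 4.0 * sigma * np.abs(g))) / (2.0 * sigma)
    return -np.sign(g) * t
```

    Because each coordinate is solved in closed form, the cost of the regularized subproblem grows only linearly with the dimension, which is the payoff of the separable strategy.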

    Nonmonotone spectral projected gradient methods on convex sets

    Nonmonotone projected gradient techniques are considered for the minimization of differentiable functions on closed convex sets. The classical projected gradient schemes are extended to include a nonmonotone steplength strategy based on the Grippo-Lampariello-Lucidi nonmonotone line search. In particular, the nonmonotone strategy is combined with the spectral gradient choice of steplength to accelerate convergence. In addition to the classical projected gradient nonlinear path, the feasible spectral projected gradient is used as a search direction to avoid additional trial projections during the one-dimensional search. Convergence properties and extensive numerical results are presented.
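
    A minimal sketch of the resulting method (commonly known as SPG), assuming a user-supplied projection onto the feasible set: the Barzilai-Borwein spectral steplength proposes a trial point, a single projection defines a feasible descent direction, and the GLL nonmonotone line search backtracks along that direction, so no further projections are needed inside the line search. Parameter values are illustrative.

```python
import numpy as np

def spg(f, grad_f, project, x, iters=1000, M=10, gamma=1e-4):
    """Spectral projected gradient sketch: BB steplength, one
    projection per iteration, and the Grippo-Lampariello-Lucidi
    nonmonotone line search, which compares trial values against the
    maximum of the last M function values."""
    x = project(np.asarray(x, dtype=float))
    g = grad_f(x)
    alpha = 1.0
    hist = [f(x)]
    for _ in range(iters):
        d = project(x - alpha * g) - x      # feasible spectral direction
        fmax = max(hist[-M:])               # nonmonotone reference value
        lam = 1.0
        while f(x + lam * d) > fmax + gamma * lam * (g @ d) and lam > 1e-16:
            lam *= 0.5                      # backtrack along d: no extra projections
        s = lam * d
        x_new, g_new = x + s, grad_f(x + s)
        y = g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0   # spectral (BB) steplength
        alpha = min(max(alpha, 1e-10), 1e10)      # safeguard
        x, g = x_new, g_new
        hist.append(f(x))
    return x
```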

    Nonmonotone Barzilai-Borwein Gradient Algorithm for ℓ1-Regularized Nonsmooth Minimization in Compressive Sensing

    This paper is devoted to minimizing the sum of a smooth function and a nonsmooth ℓ1-regularized term. This problem includes as a special case the ℓ1-regularized convex minimization problem arising in signal processing, compressive sensing, machine learning, data mining, and related fields. The non-differentiability of the ℓ1-norm, however, makes the problem more challenging, especially for the large-scale instances encountered in many practical applications. This paper proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction enjoys the descent property and can be derived easily by minimizing a local approximate quadratic model while exploiting the favorable structure of the ℓ1-norm. Moreover, a nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement: only the objective function value and the gradient of the smooth term are required at each iteration. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments with nonconvex unconstrained problems from the CUTEr library with additive ℓ1-regularization illustrate that the proposed algorithm performs quite well, and extensive experiments on ℓ1-regularized least-squares problems in compressive sensing verify that it compares favorably with several recent state-of-the-art algorithms designed specifically for this problem class.
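
    A sketch of the core iteration the abstract describes: each step minimizes a simple local quadratic model of the smooth part, with curvature 1/alpha set by the Barzilai-Borwein formula, plus the ℓ1 term; that subproblem is solved exactly by soft-thresholding. The paper's nonmonotone line search along the resulting direction is omitted here, and all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_prox_gradient(grad_f, lam, x, iters=500):
    """Barzilai-Borwein proximal-gradient sketch for
    min_x f(x) + lam*||x||_1: a BB-scaled gradient step on the smooth
    term followed by exact soft-thresholding for the l1 term."""
    g = grad_f(x)
    alpha = 1.0
    for _ in range(iters):
        x_new = soft_threshold(x - alpha * g, alpha * lam)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0   # BB steplength
        x, g = x_new, g_new
    return x
```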

    Quasi-Newton-Based Preconditioning and Damped Quasi-Newton Schemes for Nonlinear Conjugate Gradient Methods

    In this paper, we deal with matrix-free preconditioners for Nonlinear Conjugate Gradient (NCG) methods. In particular, we review proposals based on quasi-Newton updates that satisfy either the secant equation or a secant-like equation at some of the previous iterates. Conditions are given proving that, in some sense, the proposed preconditioners also approximate the inverse of the Hessian matrix. The structure of the preconditioners depends on both low-rank updates and some specific parameters, with the low-rank updates obtained as a by-product of the NCG iterations. Moreover, we consider the possibility of embedding damped techniques within a class of preconditioners based on quasi-Newton updates. Damped methods have proved effective in enhancing the performance of quasi-Newton updates in those cases where the Wolfe line-search conditions are difficult to fulfill. The purpose is to extend the idea behind damped methods to improving NCG schemes as well, following a novel line of research in the literature. Results summarizing extensive numerical experience with large-scale CUTEst problems are reported, showing that these approaches can considerably improve the performance of NCG methods.
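
    A sketch of the general idea, not the paper's exact proposals (which include damped variants not shown here): a quasi-Newton operator assembled from the (s, y) pairs that the NCG iterations produce anyway is applied matrix-free, via the L-BFGS two-loop recursion, as a preconditioner inside the NCG direction update. The `line_search` callable, the memory size, and the particular preconditioned PR+ beta formula are assumptions of this sketch.

```python
import numpy as np

def lbfgs_apply(g, pairs):
    """Two-loop recursion: apply the inverse-Hessian approximation
    encoded by the stored (s, y) pairs to the vector g, matrix-free."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):            # newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)              # standard initial scaling
    for (s, y), a in zip(pairs, reversed(alphas)):   # oldest pair first
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def preconditioned_ncg(f, grad_f, x, line_search, m=5, iters=200):
    """NCG preconditioned by a quasi-Newton (L-BFGS-style) operator
    built from NCG by-products, with a preconditioned PR+ beta.
    `line_search` is assumed to return a Wolfe-conditions steplength."""
    g = grad_f(x)
    pairs = []
    d = -lbfgs_apply(g, pairs)              # first direction: steepest descent
    for _ in range(iters):
        t = line_search(f, grad_f, x, d)
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            pairs.append((s, y))            # low-rank update from the NCG iteration
            pairs = pairs[-m:]              # keep a limited memory
        pg_new = lbfgs_apply(g_new, pairs)
        beta = max((y @ pg_new) / (g @ lbfgs_apply(g, pairs)), 0.0)  # PR+ variant
        d = -pg_new + beta * d
        x, g = x_new, g_new
    return x
```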