Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
Given a twice-continuously differentiable vector-valued function r(x), a local minimizer of ∥r(x)∥² is sought. We propose and analyse tensor-Newton methods, in which r(x) is replaced locally by its second-order Taylor approximation. Convergence is controlled by regularization of various orders. We establish global convergence to a first-order critical point of ∥r(x)∥², and provide function-evaluation bounds that agree with the best-known bounds for methods using second derivatives. Numerical experiments comparing tensor-Newton methods with regularized Gauss-Newton and Newton methods demonstrate the practical performance of the newly proposed method.
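For readers wanting to see the shape of the local model, the display below is a hedged sketch of a regularized tensor-Newton model; the regularization order p and weight σ_k are generic placeholders introduced here for illustration, not values taken from the paper.

    % Illustrative local model at the iterate x_k; p and \sigma_k are
    % generic placeholders rather than choices made in the paper.
    \[
      t_k(s)_i \;=\; r_i(x_k) + \nabla r_i(x_k)^{T} s + \tfrac12\, s^{T}\,\nabla^2 r_i(x_k)\, s,
      \qquad
      m_k(s) \;=\; \tfrac12\,\|t_k(s)\|^2 + \tfrac{\sigma_k}{p}\,\|s\|^{p}.
    \]

Each component of r is modelled by its full quadratic Taylor expansion, and the regularization term controls the step length in place of a trust region.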
A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function
This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre [On Nesterov's smooth Chebyshev-Rosenbrock function, Optim. Methods Softw. (2011)] implying a very large lower bound on the number of iterations required to reach the solution's neighbourhood for a specific problem with variable dimension.
Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods.
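As a rough, hedged illustration of the cubic-regularization mechanism this abstract refers to, the sketch below builds the cubic model and adjusts the regularization weight adaptively; the dense subproblem solve, the constants eta and gamma, and the test function are assumptions made for the example, not the algorithm analysed in the paper.

    # Sketch of an adaptive cubic-regularization (ARC-style) iteration.
    # The constants, the dense subproblem solve and the test function are
    # illustrative assumptions, not the method analysed in the paper.
    import numpy as np
    from scipy.optimize import minimize

    def arc_step(f, grad, hess, x, sigma, eta=0.1, gamma=2.0):
        g, B = grad(x), hess(x)
        # cubic model m(s) = f(x) + g^T s + 0.5 s^T B s + (sigma/3) ||s||^3
        m = lambda s: f(x) + g @ s + 0.5 * s @ B @ s + sigma / 3.0 * np.linalg.norm(s) ** 3
        s = minimize(m, np.zeros_like(x)).x        # approximate subproblem solve
        pred = f(x) - m(s)                         # decrease predicted by the model
        ared = f(x) - f(x + s)                     # decrease actually achieved
        if pred > 0 and ared / pred >= eta:        # successful step: accept, relax sigma
            return x + s, max(sigma / gamma, 1e-8)
        return x, sigma * gamma                    # unsuccessful: reject, tighten sigma

    # Illustrative usage on a simple smooth convex test function
    f = lambda x: np.sum(x ** 2) + 0.1 * np.sum(x ** 4)
    grad = lambda x: 2 * x + 0.4 * x ** 3
    hess = lambda x: np.diag(2 + 1.2 * x ** 2)
    x, sigma = np.ones(5), 1.0
    for _ in range(50):
        x, sigma = arc_step(f, grad, hess, x, sigma)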
On solving trust-region and other regularised subproblems in optimization
The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both globally and asymptotically at least superlinearly convergent in all cases, including in the notorious hard case. Numerical experiments validate the effectiveness of our approach. The resulting software is available as packages TRS and RQS as part of the GALAHAD optimization library, and is especially designed for large-scale problems.
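For context, the factorization-based idea of solving a sequence of parametrized linear systems can be sketched as follows; the dense Cholesky solves, the Newton iteration on the secular equation and the omission of hard-case safeguards are simplifying assumptions, and this is not the TRS/RQS implementation.

    # Minimal Moré-Sorensen-style sketch:
    #   minimize g^T s + 0.5 s^T H s   subject to   ||s|| <= Delta
    # by solving (H + lam*I) s = -g for a sequence of lam via Cholesky factorizations.
    # Hard-case safeguards are omitted; this is not the GALAHAD TRS/RQS code.
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def trust_region_subproblem(H, g, Delta, tol=1e-8, max_iter=50):
        lam = max(0.0, -np.linalg.eigvalsh(H)[0] + 1e-12)   # make H + lam*I positive definite
        for _ in range(max_iter):
            c = cho_factor(H + lam * np.eye(len(g)))
            s = cho_solve(c, -g)
            ns = np.linalg.norm(s)
            if lam == 0.0 and ns <= Delta:
                return s                                     # interior solution
            if abs(ns - Delta) <= tol * Delta:
                return s                                     # boundary solution found
            w = cho_solve(c, s)                              # w solves (H + lam*I) w = s
            # Newton step on the secular equation 1/||s(lam)|| = 1/Delta
            lam = max(0.0, lam + (ns / Delta - 1.0) * ns ** 2 / (s @ w))
        return s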
Preconditioning saddle-point systems with applications in optimization
Saddle-point systems arise in many application areas, in fact in any situation where an extremum principle arises with constraints. The Stokes problem describing slow viscous flow of an incompressible fluid is a classic example coming from PDEs, and in the area of optimization such problems are ubiquitous. In this paper we present a framework into which many well-known methods for solving saddle-point systems fit. Based on this description we show how new approaches for the solution of saddle-point systems arising in optimization can be derived from the Bramble–Pasciak conjugate gradient approach widely used in PDEs and more recent generalizations thereof. In particular we derive a class of new solution methods based on the use of preconditioned conjugate gradients in nonstandard inner products and demonstrate how these can be understood through more standard machinery. We show connections to constraint preconditioning and give the results of numerical computations on a number of standard optimization test examples.
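To make the structure concrete, the sketch below assembles a small saddle-point system and solves it with MINRES under a standard block-diagonal preconditioner; the preconditioner and the random test data are assumptions chosen for illustration, and this is not the nonstandard inner-product conjugate gradient approach developed in the paper.

    # Sketch: saddle-point system K [x; y] = b with K = [[A, B^T], [B, 0]],
    # A symmetric positive definite, solved by MINRES preconditioned with
    # the (inverse of the) block-diagonal matrix diag(A, B A^{-1} B^T).
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, minres

    rng = np.random.default_rng(0)
    n, m = 40, 10
    A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD (1,1) block
    B = rng.standard_normal((m, n))                                # constraint block
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    b = rng.standard_normal(n + m)

    Ainv = np.linalg.inv(A)
    Sinv = np.linalg.inv(B @ Ainv @ B.T)           # inverse Schur complement (dense, for illustration)

    def apply_prec(v):
        # apply the inverse of the block-diagonal preconditioner diag(A, B A^{-1} B^T)
        return np.concatenate([Ainv @ v[:n], Sinv @ v[n:]])

    Pinv = LinearOperator((n + m, n + m), matvec=apply_prec)
    sol, info = minres(K, b, M=Pinv)
    print(info, np.linalg.norm(K @ sol - b))       # info == 0 indicates convergence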
Componentwise fast convergence in the solution of full-rank systems of nonlinear equations
Adaptive augmented Lagrangian methods: algorithms and practical numerical experience
In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis, Jiang, and Robinson [Math. Prog., DOI: 10.1007/s10107-014-0784-y, 2013]. The first focal point of this paper is a new variant of the approach that employs a line search rather than a trust region strategy, where a critical algorithmic feature for the line search strategy is the use of convexified piecewise quadratic models of the AL function for computing the search directions. We prove global convergence guarantees for our line search algorithm that are on par with those for the previously proposed trust region method. A second focal point of this paper is the practical performance of the line search and trust region algorithm variants in Matlab software, as well as that of an adaptive penalty parameter updating strategy incorporated into the Lancelot software. We test these methods on problems from the CUTEst and COPS collections, as well as on challenging test problems related to optimal power flow. Our numerical experience suggests that the adaptive algorithms outperform traditional AL methods in terms of efficiency and reliability. As with traditional AL algorithms, the adaptive methods are matrix-free and thus represent a viable option for solving extreme-scale problems.
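As background for the adaptive variants discussed above, the sketch below shows a bare-bones augmented Lagrangian loop for equality-constrained minimization; the BFGS inner solve, the first-order multiplier update and the fixed tenfold penalty increase are illustrative assumptions, not the adaptive strategies proposed in the paper.

    # Bare-bones augmented Lagrangian loop for  min f(x)  s.t.  c(x) = 0.
    # The inner BFGS solve and the fixed penalty increase are illustrative;
    # the paper's adaptive penalty updates are more sophisticated.
    import numpy as np
    from scipy.optimize import minimize

    def augmented_lagrangian(f, c, x0, rho=10.0, outer_iters=20, tol=1e-8):
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(c(x)))
        for _ in range(outer_iters):
            # AL function: L_A(x) = f(x) - lam^T c(x) + (rho/2) ||c(x)||^2
            La = lambda z: f(z) - lam @ c(z) + 0.5 * rho * np.sum(c(z) ** 2)
            x = minimize(La, x, method="BFGS").x   # approximate inner minimization
            if np.linalg.norm(c(x)) <= tol:
                break
            lam = lam - rho * c(x)                 # first-order multiplier update
            rho *= 10.0                            # naive penalty increase
        return x, lam

    # Example: minimize x1^2 + x2^2 subject to x1 + x2 = 1 (solution (0.5, 0.5))
    f = lambda x: x[0] ** 2 + x[1] ** 2
    c = lambda x: np.array([x[0] + x[1] - 1.0])
    x_star, lam_star = augmented_lagrangian(f, c, [0.0, 0.0])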
