The Nesterov-Todd Direction and its Relation to Weighted Analytic Centers
This report concerns differential-geometric properties of the Nesterov-Todd search direction for linear optimization over symmetric cones. In particular, we investigate the rescaled asymptotics of the associated flow near the central path. Our results imply that the Nesterov-Todd direction arises as the solution of a Newton system defined in terms of a certain transformation of the primal-dual feasible domain. This transformation has especially appealing properties which generalize the notion of weighted analytic centers for linear programming.
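A hand-checkable special case of the construction discussed above: on the nonnegative orthant (the simplest symmetric cone), with the barrier F(x) = -sum(log x_i), the Nesterov-Todd scaling point w satisfies F''(w) x = s, which reduces to w_i = sqrt(x_i / s_i) componentwise. The sketch below is an illustration of that standard fact, not code from the report.

```python
import math

# Nesterov-Todd scaling point on the nonnegative orthant:
# for F(x) = -sum(log x_i), F''(w) = diag(1 / w_i^2), so the
# condition F''(w) x = s gives w_i = sqrt(x_i / s_i) componentwise.
def nt_scaling_point(x, s):
    return [math.sqrt(xi / si) for xi, si in zip(x, s)]

x = [4.0, 1.0, 9.0]
s = [1.0, 4.0, 1.0]
w = nt_scaling_point(x, s)
# Verify F''(w) x = s, i.e. x_i / w_i^2 == s_i:
check = [xi / wi**2 for xi, wi in zip(x, w)]
print(w)      # → [2.0, 0.5, 3.0]
print(check)  # → [1.0, 4.0, 1.0], which equals s
```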
Self-scaled barrier functions on symmetric cones and their classification
Self-scaled barrier functions on self-scaled cones were introduced through a
set of axioms in 1994 by Y.E. Nesterov and M.J. Todd as a tool for the
construction of long-step interior point algorithms. This paper provides a firm
foundation for these objects by exhibiting their symmetry properties, their
intimate ties with the symmetry groups of their domains of definition, and
subsequently their decomposition into irreducible parts and algebraic
classification theory. In a first part we recall the characterisation of the
family of self-scaled cones as the set of symmetric cones and develop a
primal-dual symmetric viewpoint on self-scaled barriers, results that were
first discovered by the second author. We then show in a short, simple proof
that any pointed, convex cone decomposes into a direct sum of irreducible
components in a unique way, a result which can also be of independent interest.
We then show that any self-scaled barrier function decomposes in an essentially
unique way into a direct sum of self-scaled barriers defined on the irreducible
components of the underlying symmetric cone. Finally, we present a complete
algebraic classification of self-scaled barrier functions using the
correspondence between symmetric cones and Euclidean Jordan algebras.
A new perspective on the complexity of interior point methods for linear programming
In a dynamical systems paradigm, many optimization algorithms are equivalent to applying the forward Euler method to the system of ordinary differential equations defined by the vector field of the search directions. Thus the stiffness of such vector fields plays an essential role in the complexity of these methods. We first exemplify this point with a theoretical result for general linesearch methods for unconstrained optimization, which we further employ to investigate the complexity of a primal short-step path-following interior point method for linear programming. Our analysis involves showing that the Newton vector field associated with the primal logarithmic barrier is nonstiff in a sufficiently small and shrinking neighbourhood of its minimizer. Thus, by confining the iterates to these neighbourhoods of the primal central path, our algorithm has a nonstiff vector field of search directions, and we can give a worst-case bound on its iteration complexity. Furthermore, due to the generality of our vector field setting, we can perform a similar (global) iteration complexity analysis when the Newton direction of the interior point method is computed only approximately, using some direct method for solving linear systems of equations.
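The dynamical-systems viewpoint can be illustrated in one dimension (this is a toy sketch, not the paper's algorithm): forward Euler applied to the ODE x'(t) = d(x(t)), where d is the Newton vector field of the log-barrier f(x) = c x - mu log(x), recovers the plain Newton iteration when the step size is h = 1.

```python
# Toy sketch: an optimization method viewed as forward Euler on the ODE
# x'(t) = d(x(t)), where d is the Newton vector field of the 1-D
# log-barrier f(x) = c*x - mu*log(x). Step size h = 1 is Newton's method;
# h < 1 would give a damped Newton flow.

def newton_field(x, c=2.0, mu=1.0):
    grad = c - mu / x          # f'(x)
    hess = mu / x**2           # f''(x) > 0 on x > 0
    return -grad / hess        # Newton direction

def forward_euler(x0, h=1.0, steps=30):
    x = x0
    for _ in range(steps):
        x += h * newton_field(x)
    return x

# The minimizer of f is x* = mu/c = 0.5; starting close enough,
# Euler with h = 1 converges to it quadratically.
print(round(forward_euler(x0=0.9), 6))  # → 0.5
```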
Optimal execution strategy with an uncertain volume target
In the seminal paper on optimal execution of portfolio transactions, Almgren
and Chriss (2001) define the optimal trading strategy to liquidate a fixed
volume of a single security under price uncertainty. Yet there exist
situations, such as in the power market, in which the volume to be traded can
only be estimated and becomes more accurate when approaching a specified
delivery time. During the course of execution, a trader should then constantly
adapt their trading strategy to meet their fluctuating volume target. In this
paper, we develop a model that accounts for volume uncertainty and we show that
a risk-averse trader benefits from delaying their trades. More precisely, we
argue that the optimal strategy is a trade-off between early and late trades in
order to balance risk associated with both price and volume. By incorporating a
risk term related to the volume to trade, the static optimal strategies
suggested by our model avoid the explosion in the algorithmic complexity
usually associated with dynamic programming solutions, all the while yielding
competitive performance.
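For context, the fixed-volume baseline that the paper generalizes can be sketched as follows (this is the classic Almgren-Chriss (2001) deterministic schedule, not the volume-uncertain strategy developed above): the remaining inventory follows x(t) = X sinh(kappa (T - t)) / sinh(kappa T), where kappa trades off price risk against impact cost.

```python
import math

# Classic Almgren-Chriss deterministic liquidation schedule for a FIXED
# volume X over horizon T. The parameter kappa encodes risk aversion
# relative to temporary market impact: larger kappa front-loads trades.
def ac_schedule(X, T, kappa, n_steps):
    ts = [T * k / n_steps for k in range(n_steps + 1)]
    return [X * math.sinh(kappa * (T - t)) / math.sinh(kappa * T) for t in ts]

x = ac_schedule(X=1_000_000, T=1.0, kappa=2.0, n_steps=5)
# Inventory decreases monotonically from X at t=0 to 0 at t=T.
```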
Algebraic Tail Decay of Condition Numbers for Random Conic Systems under a General Family of Input Distributions
We consider the conic feasibility problem associated with linear homogeneous systems of inequalities. The complexity of iterative algorithms for solving this problem depends on a condition number. When studying the typical behaviour of algorithms under stochastic input one is therefore naturally led to investigate the fatness of the distribution tails of the random condition number that ensues. We study an unprecedentedly general class of probability models for the random input matrix and show that the tails decay at algebraic rates with an exponent that naturally emerges when applying a theory of uniform absolute continuity which is also developed in this paper.

Raphael Hauser was supported through grant NAL/00720/G from the Nuffield Foundation and through grant GR/M30975 from the Engineering and Physical Sciences Research Council of the UK. Tobias Müller was partially supported by EPSRC, the Department of Statistics, Bekker-la-Bastide fonds, Dr Hendrik Muller's Vaderlandsch fonds, and Prins Bernhard Cultuurfonds.
Relative Robust Portfolio Optimization
Considering mean-variance portfolio problems with uncertain model parameters, we contrast the classical absolute robust optimization approach with the relative robust approach based on a maximum regret function. Although the latter problems are NP-hard in general, we show that tractable inner and outer approximations exist in several cases that are of central interest in asset management.
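The distinction between the two criteria can be seen in a toy discrete setting (a sketch for intuition only, not the paper's mean-variance model): absolute robustness minimizes worst-case loss, while relative robustness minimizes worst-case regret against the best portfolio in each scenario, and the two can select different portfolios.

```python
# Toy contrast: absolute robust vs. relative robust (minimax-regret)
# choice over a finite set of portfolios and parameter scenarios.
# loss[i][s] = loss of portfolio i under scenario s (values are made up).

loss = [[5, 5],   # portfolio 0: safe in both scenarios
        [0, 9]]   # portfolio 1: great in scenario 0, bad in scenario 1

def absolute_robust(loss):
    # minimize the worst-case loss
    return min(range(len(loss)), key=lambda i: max(loss[i]))

def relative_robust(loss):
    # minimize the worst-case regret vs. the best portfolio per scenario
    n_s = len(loss[0])
    best = [min(row[s] for row in loss) for s in range(n_s)]
    return min(range(len(loss)),
               key=lambda i: max(loss[i][s] - best[s] for s in range(n_s)))

# Absolute robustness picks portfolio 0 (worst case 5 < 9); the regret
# criterion picks portfolio 1 (worst regret 4 < 5).
print(absolute_robust(loss), relative_robust(loss))  # → 0 1
```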
Low-Rank Boolean Matrix Approximation by Integer Programming
Low-rank approximations of data matrices are an important dimensionality
reduction tool in machine learning and regression analysis. We consider the
case of categorical variables, where it can be formulated as the problem of
finding low-rank approximations to Boolean matrices. In this paper we give what
is, to the best of our knowledge, the first integer programming formulation that
relies on only polynomially many variables and constraints; we discuss how to
solve it computationally and report numerical tests on synthetic and real-world
data.
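To make the underlying problem concrete (this exhaustive search is an illustrative baseline, not the paper's integer program): a rank-1 Boolean approximation of a 0/1 matrix A is an outer product u v^T under Boolean arithmetic, chosen to minimize the number of mismatched entries. The paper's contribution is to replace such exponential enumeration with an IP using only polynomially many variables and constraints.

```python
from itertools import product

# Brute-force best rank-1 Boolean approximation u v^T of a small 0/1
# matrix A, minimizing the Hamming distance |A - u v^T|. Exponential in
# the matrix dimensions, so only viable for tiny instances.
def best_rank1_boolean(A):
    m, n = len(A), len(A[0])
    best = None
    for u in product([0, 1], repeat=m):
        for v in product([0, 1], repeat=n):
            err = sum(A[i][j] != (u[i] and v[j])
                      for i in range(m) for j in range(n))
            if best is None or err < best[0]:
                best = (err, u, v)
    return best

A = [[1, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
err, u, v = best_rank1_boolean(A)
# The top-left 2x2 block of ones is covered exactly; only A[2][2]
# remains uncovered, so the best rank-1 error is 1.
print(err)  # → 1
```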
