A non-convex adaptive regularization approach to binary optimization
Binary optimization is a long-standing problem ubiquitous in many engineering applications, e.g., automatic control, cyber-physical systems, and machine learning. From a mathematical viewpoint, binary optimization is NP-hard, and the literature offers several suboptimal strategies to tackle it. Among the most popular approaches, semidefinite relaxation has attracted much attention in recent years. In contrast, this work proposes and analyzes a non-convex regularization approach, through which we obtain a relaxed problem whose global minimum corresponds to the true binary solution of the original problem. Moreover, because the relaxed problem is non-convex, we propose an adaptive regularization that promotes descent towards the global minimum. We provide both theoretical results that characterize the proposed model and numerical experiments that demonstrate its effectiveness with respect to state-of-the-art methods.
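As a rough illustration of the idea described above, here is a minimal sketch, not the paper's algorithm: the quartic penalty (x_i^2 - 1)^2 and the geometric schedule for its weight are illustrative assumptions standing in for the (unspecified) non-convex regularizer and adaptive rule.

```python
import numpy as np

def binary_relax(Q, steps=2000, lr=0.01, lam0=0.1, growth=1.002):
    """Approximately minimize x^T Q x over x in {-1, +1}^n.

    The binary constraint is replaced by the smooth non-convex penalty
    lam * sum_i (x_i^2 - 1)^2, and lam is increased geometrically so the
    iterates are gradually pushed toward the binary set.  Both the
    penalty and the schedule are illustrative choices, not the model
    analyzed in the paper.
    """
    n = Q.shape[0]
    rng = np.random.default_rng(0)
    x = 0.1 * rng.standard_normal(n)      # small random initialization
    lam = lam0
    for _ in range(steps):
        # gradient of x^T Q x plus gradient of the binary penalty
        grad = 2 * Q @ x + lam * 4 * x * (x**2 - 1)
        x -= lr * grad
        lam *= growth                      # adaptive regularization weight
    return np.sign(x)                      # round to the binary set

# Toy instance: objective -2*x0*x1 favors equal signs on both components.
x_hat = binary_relax(np.array([[0.0, -1.0], [-1.0, 0.0]]))
```

On this two-variable instance the iterates settle near a binary point with both components of equal sign, which is the global binary minimizer up to the overall sign flip.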
Fast sparse optimization via adaptive shrinkage
The need for fast sparse optimization is emerging, e.g., to deal with large-dimensional data-driven problems and to track time-varying systems. In the framework of linear sparse optimization, the iterative shrinkage-thresholding algorithm is a valuable method to solve Lasso, particularly appreciated for its ease of implementation. Nevertheless, it converges slowly. In this paper, we develop a proximal method, based on logarithmic regularization, which turns out to be an iterative shrinkage-thresholding algorithm with an adaptive shrinkage hyperparameter. This adaptivity substantially improves the trajectory of the algorithm, yielding faster convergence while keeping the simplicity of the original method. Our contribution is twofold: on the one hand, we derive and analyze the proposed algorithm; on the other hand, we validate its fast convergence via numerical experiments and discuss its performance with respect to state-of-the-art algorithms.
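A minimal sketch of the kind of update described above: plain ISTA with a per-component threshold reweighted by a logarithmic penalty. The specific rule lam*eps/(eps + |x_i|) is an illustrative reweighting inspired by the log penalty, not the exact update derived in the paper.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator (proximal map of the weighted l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_adaptive(A, y, lam=0.1, eps=1e-2, iters=1000):
    """ISTA with an adaptive, per-component shrinkage level.

    Plain ISTA iterates x <- soft(x - eta*A^T(Ax - y), eta*lam).  Here the
    threshold on component i is scaled by eps/(eps + |x_i|), proportional
    to the gradient of the logarithmic penalty eps*log(1 + |x_i|/eps):
    large components are shrunk less, which reduces bias and speeds up
    the trajectory.  Illustrative rule, not the paper's exact one.
    """
    eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - eta * A.T @ (A @ x - y)     # gradient step on the data fit
        thr = eta * lam * eps / (eps + np.abs(x))  # adaptive shrinkage
        x = soft(g, thr)
    return x
```

With noiseless data the adaptive threshold vanishes on the active components, so the iterates approach the true sparse vector with negligible shrinkage bias.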
Sparse linear regression from perturbed data
The problem of sparse linear regression is relevant in the context of linear system identification from large datasets. When data are collected from real-world experiments, measurements are always affected by perturbations or low-precision representations. However, the problem of sparse linear regression from fully-perturbed data is scarcely studied in the literature, due to its mathematical complexity. In this paper, we show that, by assuming bounded perturbations, this problem can be tackled by solving low-complexity l2 and l1 minimization problems. Both theoretical guarantees and numerical results are illustrated in the paper.
Enhancing low-rank solutions in semidefinite relaxations of Boolean quadratic problems
Boolean quadratic optimization problems occur in a number of applications. Their mixed integer-continuous nature is challenging, since the problem is inherently NP-hard. For this reason, semidefinite programming relaxations (SDRs) have been proposed in the literature to approximate the solution, recasting the problem as convex optimization. Nevertheless, SDRs do not guarantee the extraction of the correct binary minimizer. In this paper, we present a novel approach to enhance binary solution recovery. The key of the proposed method is the exploitation of known information on the eigenvalues of the desired solution. As the proposed approach yields a non-convex program, we develop and analyze an iterative descent strategy, whose practical effectiveness is shown via numerical results.
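For context, the standard extraction step that the abstract says SDRs cannot guarantee is sign-rounding of the leading eigenvector of the relaxation output. The sketch below shows that baseline rounding (not the eigenvalue-constrained descent proposed in the paper) on a synthetic, nearly rank-one matrix.

```python
import numpy as np

def round_sdr(X):
    """Extract a binary vector from an SDR solution matrix X.

    If X were exactly rank one with unit diagonal, X = x x^T for a binary
    x, and the leading eigenvector recovers x up to a global sign.  This
    is the standard rounding baseline, not the paper's method.
    """
    w, V = np.linalg.eigh(X)     # eigenvalues in ascending order
    v = V[:, -1]                 # leading eigenvector
    x = np.sign(v)
    x[x == 0] = 1.0              # break exact ties deterministically
    return x

# Synthetic nearly-rank-one relaxation output, for illustration only.
x_true = np.array([1.0, -1.0, 1.0, 1.0])
X = np.outer(x_true, x_true) + 0.05 * np.eye(4)  # small PSD perturbation
x_hat = round_sdr(X)
```

When the relaxation output drifts further from rank one, this rounding can fail, which is precisely the failure mode the low-rank-enhancing approach above targets.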
Sparse learning with concave regularization: relaxation of the irrepresentable condition
Learning sparse models from data is an important task in all those frameworks where relevant information must be identified within a large dataset. This can be achieved by formulating and solving suitable sparsity-promoting optimization problems. For linear regression models, Lasso is the most popular convex approach, based on L1-norm regularization. In contrast, in this paper we analyse a concave regularized approach, and we prove that it relaxes the irrepresentable condition, which is sufficient and essentially necessary for Lasso to select the correct significant parameters. In practice, this has the benefit of reducing the number of measurements required with respect to Lasso. Since the proposed problem is non-convex, we also discuss different algorithms to solve it, and we illustrate the obtained enhancement via numerical experiments.
Lasso-based state estimation for cyber-physical systems under sensor attacks
The development of algorithms for secure state estimation in vulnerable cyber-physical systems has been gaining attention in recent years. A consolidated assumption is that an adversary can tamper with a relatively small number of sensors. In the literature, block-sparsity methods exploit this prior information to recover the attack locations and the state of the system. In this paper, we propose an alternative, Lasso-based approach and analyse its effectiveness. In particular, we theoretically derive conditions that guarantee successful attack/state recovery, independently of established time-sparsity patterns. Furthermore, starting from the iterative soft thresholding algorithm for Lasso, we develop a sparse state observer to perform online estimation. Through several numerical experiments, we compare the proposed methods with state-of-the-art algorithms.
Fixed-order FIR approximation of linear systems from quantized input and output data
The problem of identifying a fixed-order FIR approximation of linear systems with unknown structure, assuming that both input and output measurements are subject to quantization, is dealt with in this paper. A fixed-order FIR model providing the best approximation of the input-output relationship is sought by minimizing the worst-case distance between the output of the true system and the modelled output, over all possible values of the input and output data consistent with their quantized measurements. The considered problem is first formulated in terms of robust optimization. Then, two different algorithms to compute the optimum of the formulated problem by means of linear programming techniques are presented. The effectiveness of the proposed approach is illustrated by means of a simulation example.
Minimal LPV state-space realization driven set-membership identification
Set-membership identification algorithms have been recently proposed to derive linear parameter-varying (LPV) models in input-output form, under the assumption that both the output measurements and the scheduling signals are affected by bounded noise. In order to use the identified models for controller synthesis, linear time-invariant (LTI) realization theory is usually applied to derive a state-space model whose matrices depend statically on the scheduling signals, as required by most LPV control synthesis techniques. Unfortunately, application of LTI realization theory leads to an approximate state-space description of the original LPV input-output model. In order to limit the effect of the realization error, a new set-membership algorithm for the identification of input/output LPV models is proposed in this paper. A nonconvex optimization problem is formulated to select the model in the feasible set that minimizes a suitable measure of the state-space realization error. The solution of the identification problem is then derived by means of convex relaxation techniques.
A feedback control approach to convex optimization with inequality constraints
We propose a novel continuous-time algorithm for inequality-constrained convex optimization inspired by proportional-integral control. Unlike the popular primal-dual gradient dynamics, our method includes a proportional term to control the primal variable through the Lagrange multipliers. This approach has both theoretical and practical advantages. On the one hand, it simplifies the proof of exponential convergence in the case of smooth, strongly convex problems, with a more straightforward assessment of the convergence rate than in prior literature. On the other hand, through several examples, we show that the proposed algorithm converges faster than primal-dual gradient dynamics. This paper illustrates these points by thoroughly analyzing the algorithm's convergence and discussing some numerical simulations. Accepted for publication in the Proceedings of the 63rd IEEE Conference on Decision and Control, December 16-19, 2024, Milan (Italy).
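For reference, the baseline this work improves upon can be sketched as follows: a forward-Euler simulation of the classical primal-dual gradient dynamics for a simple inequality-constrained problem. This is the comparison method only; the proposed proportional term is not reproduced here, and the test problem is a made-up toy instance.

```python
import numpy as np

def primal_dual_dynamics(c, A, b, dt=0.01, steps=20000):
    """Euler-discretized primal-dual gradient dynamics for
        min 0.5 * ||x - c||^2   s.t.   A x <= b.

    Classical baseline dynamics on the Lagrangian L(x, lam):
        x'   = -grad_x L = -(x - c) - A^T lam
        lam' =  grad_lam L = A x - b,  projected so lam >= 0.
    """
    x = np.zeros(c.size)
    lam = np.zeros(b.size)
    for _ in range(steps):
        dx = -(x - c) - A.T @ lam               # primal descent direction
        dlam = A @ x - b                        # dual ascent direction
        x += dt * dx
        lam = np.maximum(lam + dt * dlam, 0.0)  # keep multipliers nonnegative
    return x, lam
```

On the toy problem min 0.5*||x - (2, 0)||^2 subject to x_1 <= 1, the trajectory converges to the optimum x = (1, 0) with multiplier lam = 1; the point of the paper is that adding a proportional term makes this convergence faster.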
