A randomized primal distributed algorithm for partitioned and big-data non-convex optimization
In this paper we consider a distributed optimization scenario in which the
aggregate objective function to minimize is partitioned, big-data and possibly
non-convex. Specifically, we focus on a set-up in which the dimension of the
decision variable depends on the network size as well as the number of local
functions, but each local function handled by a node depends only on a (small)
portion of the entire optimization variable. This problem set-up has been shown
to appear in many interesting network application scenarios. As the main
contribution of this paper, we develop a simple, primal distributed algorithm to solve the
optimization problem, based on a randomized descent approach, which works under
asynchronous gossip communication. We prove that the proposed asynchronous
algorithm is a proper, ad-hoc version of a coordinate descent method and thus
converges to a stationary point. To show the effectiveness of the proposed
algorithm, we also present numerical simulations on a non-convex quadratic
program, which confirm the theoretical results.
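The flavor of such a randomized block update can be sketched as follows. This is a toy, single-machine simulation with hypothetical local costs (a convex quartic-plus-coupling objective, not the paper's non-convex setting), and the asynchronous gossip communication is mimicked by waking one random node per iteration:

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): randomized
# block-coordinate descent on a partitioned cost. Node i owns block
# x[i]; when randomly awakened it descends along its own partial
# gradient, which depends only on neighboring blocks.
# Cost: sum_i (x_i^4 + x_i^2) + sum_{i>=1} (x_i - x_{i-1})^2.

rng = np.random.default_rng(0)
n = 5                        # number of nodes / blocks
x = rng.standard_normal(n)   # one scalar block per node for simplicity

def partial_grad(x, i):
    # partial derivative of the aggregate cost w.r.t. block i
    g = 4.0 * x[i] ** 3 + 2.0 * x[i]
    if i > 0:
        g += 2.0 * (x[i] - x[i - 1])
    if i < n - 1:
        g -= 2.0 * (x[i + 1] - x[i])
    return g

step = 0.05
for _ in range(20000):
    i = rng.integers(n)              # a random node wakes up
    x[i] -= step * partial_grad(x, i)

# stationarity: every partial gradient is (numerically) zero
assert max(abs(partial_grad(x, i)) for i in range(n)) < 1e-5
```

Each wake-up touches a single block, mirroring the fact that a node only ever needs the (small) portions of the variable held by its neighbors.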
Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition
In this paper we consider a novel partitioned framework for distributed
optimization in peer-to-peer networks. In several important applications the
agents of a network have to solve an optimization problem with two key
features: (i) the dimension of the decision variable depends on the network
size, and (ii) cost function and constraints have a sparsity structure related
to the communication graph. For this class of problems a straightforward
application of existing consensus methods would show two inefficiencies: poor
scalability and redundancy of shared information. We propose an asynchronous
distributed algorithm, based on dual decomposition and coordinate methods, to
solve partitioned optimization problems. We show that, by exploiting the
problem structure, the solution can be partitioned among the nodes, so that
each node just stores a local copy of a portion of the decision variable
(rather than a copy of the entire decision vector) and solves a small-scale
local problem.
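A minimal sketch of the dual-decomposition idea, reduced to two neighboring nodes sharing a single interface component (the costs f1, f2 are hypothetical; the paper's partitioned, asynchronous scheme is far more general):

```python
# Each node keeps only a *local copy* of the one component it shares
# with its neighbor -- neither ever stores the full decision vector --
# and a dual variable on the edge enforces that the copies agree.

f1 = lambda u: (u - 1.0) ** 2      # node 1's private cost
f2 = lambda v: (v + 1.0) ** 2      # node 2's private cost

lam = 0.0                          # dual variable on the edge u == v
alpha = 0.5                        # dual (sub)gradient step size
for _ in range(200):
    u = 1.0 - lam / 2.0            # node 1: argmin_u f1(u) + lam * u
    v = -1.0 + lam / 2.0           # node 2: argmin_v f2(v) - lam * v
    lam += alpha * (u - v)         # edge update from the disagreement

assert abs(u - v) < 1e-6           # copies agree at convergence
```

In a larger network the same pattern repeats per edge, which is exactly why each node's storage scales with its neighborhood rather than with the network size.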
A Primal Decomposition Method with Suboptimality Bounds for Distributed Mixed-Integer Linear Programming
In this paper we deal with a network of agents seeking to solve in a
distributed way Mixed-Integer Linear Programs (MILPs) with a coupling
constraint (modeling a limited shared resource) and local constraints. MILPs
are NP-hard problems and several challenges arise in a distributed framework,
so that looking for suboptimal solutions is of interest. To achieve this goal,
the presence of a linear coupling calls for tailored decomposition approaches.
We propose a fully distributed algorithm based on a primal decomposition
approach and a suitable tightening of the coupling constraints. Agents
repeatedly update local allocation vectors, which converge to an optimal
resource allocation of an approximate version of the original problem. Based on
such allocation vectors, agents are able to (locally) compute a mixed-integer
solution, which is guaranteed to be feasible after a sufficiently large time.
Asymptotic and finite-time suboptimality bounds are established for the
computed solution. Numerical simulations highlight the efficacy of the proposed
methodology.
Comment: 57th IEEE Conference on Decision and Control
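The primal-decomposition mechanism can be illustrated on a convex relaxation (integrality and the paper's constraint tightening are omitted; the demands d and budget b are hypothetical). Agents hold allocations summing to the shared resource, solve small local problems, and use the local multipliers to redistribute the allocation:

```python
import numpy as np

# Illustrative primal decomposition on a convex toy problem:
#   min sum_i (x_i - d_i)^2   s.t.  x_i <= y_i,  sum_i y_i == b.
# Each agent reports the multiplier of its allocation constraint,
# which drives a projected subgradient step on the allocations y.

d = np.array([3.0, 1.0, 2.0])    # hypothetical local demands
b = 4.0                          # shared resource budget
n = len(d)
y = np.full(n, b / n)            # initial allocation, sum(y) == b
alpha = 0.1

for _ in range(1000):
    x = np.minimum(d, y)                 # local problems, closed form
    mu = 2.0 * np.maximum(d - y, 0.0)    # multipliers of x_i <= y_i
    y = y + alpha * mu                   # master subgradient step
    y -= (y.sum() - b) / n               # project onto sum(y) == b

# optimum of the relaxation: a uniform shortfall of (6 - 4) / 3
assert np.allclose(x, d - 2.0 / 3.0, atol=1e-6)
```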
A Duality-Based Approach for Distributed Optimization with Coupling Constraints
In this paper we consider a distributed optimization scenario in which a set
of agents has to solve a convex optimization problem with separable cost
function, local constraint sets and a coupling inequality constraint. We
propose a novel distributed algorithm based on a relaxation of the primal
problem and an elegant exploration of duality theory. Despite its complex
derivation based on several duality steps, the distributed algorithm has a very
simple and intuitive structure. That is, each node solves a local version of
the original problem relaxation, and updates suitable dual variables. We prove
the algorithm correctness and show its effectiveness via numerical
computations.
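The basic duality mechanism at play can be sketched with a classical dual-subgradient scheme on a toy coupling inequality (single machine, hypothetical data; the paper's relaxation-based derivation and its distributed implementation are the actual contribution and are not reproduced here):

```python
import numpy as np

# Toy problem: min sum_i (x_i - a_i)^2  s.t.  sum_i x_i <= b.
# A single multiplier prices the coupling constraint; each agent
# minimizes its own relaxed cost, and the multiplier ascends along
# the constraint violation.

a = np.array([2.0, 1.0, 1.5])   # hypothetical local targets
b = 3.0                          # shared resource bound
lam = 0.0                        # multiplier of the coupling constraint
alpha = 0.2

for _ in range(300):
    x = a - lam / 2.0                             # local argmins
    lam = max(0.0, lam + alpha * (x.sum() - b))   # dual ascent

assert abs(x.sum() - b) < 1e-6   # coupling constraint tight at optimum
```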
A duality-based approach for distributed min-max optimization with application to demand side management
In this paper we consider a distributed optimization scenario in which a set
of processors aims at minimizing the maximum of a collection of "separable
convex functions" subject to local constraints. This set-up is motivated by
peak-demand minimization problems in smart grids. Here, the goal is to minimize
the peak value over a finite horizon with: (i) the demand at each time instant
being the sum of contributions from different devices, and (ii) the local
states at different time instants being coupled through local dynamics. The
min-max structure and the double coupling (through the devices and over the
time horizon) makes this problem challenging in a distributed set-up (e.g.,
well-known distributed dual decomposition approaches cannot be applied). We
propose a distributed algorithm based on the combination of duality methods and
properties from min-max optimization. Specifically, we derive a series of
equivalent problems by introducing ad-hoc slack variables and by moving back and
forth between primal and dual formulations. On the resulting problem we apply a
dual subgradient method, which turns out to be a distributed algorithm. We
prove the correctness of the proposed algorithm and show its effectiveness via
numerical computations.
Comment: arXiv admin note: substantial text overlap with arXiv:1611.0916
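The epigraph-plus-duality idea can be sketched on a scalar toy problem (much simplified from the paper: no network, no dynamics; the functions g1, g2 stand in for the per-time-slot aggregate demands). A dual weight sits on each function, the primal step minimizes the weighted sum, and an exponentiated-gradient update keeps the weights on the simplex:

```python
import math

# Toy min-max: minimize max(g1(x), g2(x)) via dual weights.

g1 = lambda x: (x - 1.0) ** 2
g2 = lambda x: (x + 1.0) ** 2

lam1, lam2 = 0.9, 0.1      # dual weights on the two "time slots"
alpha = 0.1
for _ in range(500):
    # primal step: argmin of lam1*g1 + lam2*g2 (closed form here)
    x = lam1 * 1.0 + lam2 * (-1.0)
    # dual step: exponentiated-gradient update, stays on the simplex
    w1 = lam1 * math.exp(alpha * g1(x))
    w2 = lam2 * math.exp(alpha * g2(x))
    lam1, lam2 = w1 / (w1 + w2), w2 / (w1 + w2)

# saddle point: x == 0, where g1(0) == g2(0) == 1 (the min-max value)
assert abs(x) < 1e-6
```

At the solution the active functions carry equal weight, which is the min-max counterpart of the peak being balanced across time slots.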
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with non-convexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental to locally estimate gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
impacts the communication overhead and the practical convergence speed.
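The gradient-tracking ingredient alone can be sketched as follows (a strongly convex scalar toy with full mixing; the paper's SCA surrogates, block selection, and nonsmooth regularizer are omitted):

```python
import numpy as np

# Minimal gradient tracking: each agent keeps an estimate x[i] and a
# tracker y[i] of the network-average gradient. Mixing with a doubly
# stochastic matrix W plus the tracking correction steers every agent
# to the minimizer of the sum of the local costs f_i(x) = (x - a_i)^2.

a = np.array([1.0, 2.0, 6.0])
grad = lambda x: 2.0 * (x - a)          # stacked local gradients
W = np.full((3, 3), 1.0 / 3.0)          # doubly stochastic mixing

x = np.zeros(3)
y = grad(x)                             # trackers start at local grads
alpha = 0.1
for _ in range(500):
    x_new = W @ x - alpha * y
    y = W @ y + grad(x_new) - grad(x)   # dynamic average tracking
    x = x_new

assert np.allclose(x, a.mean(), atol=1e-6)   # consensus on the optimum
```

The invariant that makes this work is that the average of the trackers always equals the average of the local gradients, so each agent effectively descends along an estimate of the global gradient.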
Discrete Abelian Gauge Theories for Quantum Simulations of QED
We study a lattice gauge theory in Wilson's Hamiltonian formalism. In view of
the realization of a quantum simulator for QED in one dimension, we introduce
an Abelian model with a discrete Z_n gauge symmetry, approximating the U(1)
theory for large n. We analyze the role of the finiteness of the gauge fields
and the properties of physical states that satisfy a generalized Gauss's law.
We finally discuss a possible implementation strategy, which involves an
effective dynamics in physical space.
Comment: 13 pages, 3 figures
Distributed Big-Data Optimization via Block Communications
We study distributed multi-agent large-scale optimization problems, wherein
the cost function is composed of a smooth possibly nonconvex sum-utility plus a
DC (Difference-of-Convex) regularizer. We consider the scenario where the
dimension of the optimization variables is so large that optimizing and/or
transmitting the entire set of variables could cause unaffordable computation
and communication overhead. To address this issue, we propose the first
distributed algorithm whereby agents optimize and communicate only a portion of
their local variables. The scheme hinges on successive convex approximation
(SCA) to handle the nonconvexity of the objective function, coupled with a
novel block-signal tracking scheme, aiming at locally estimating the average of
the agents' gradients. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Numerical results on a sparse regression
problem show the effectiveness of the proposed algorithm and the impact of the
block size on its practical convergence speed and communication cost.
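A heavily simplified sketch of the block-communication idea (the SCA surrogates and the block-signal tracking scheme are omitted, and the synthetic costs are hypothetical): at each iteration the agents exchange and update only one randomly selected block of their local copies, so only that block's worth of data crosses the network.

```python
import numpy as np

# Agents i = 0, 1, 2 minimize sum_i ||x - a_i||^2 over x in R^2,
# where a_i is row i of A. Each round: pick a block (column), average
# it across agents (the only communication), then take a local
# gradient step on that block alone.

rng = np.random.default_rng(1)
A = np.array([[1.0, 4.0], [3.0, 0.0], [5.0, 2.0]])
X = np.zeros((3, 2))        # agent i's local copy in row i
alpha = 0.2

for _ in range(400):
    b = rng.integers(2)                            # block chosen this round
    X[:, b] = X[:, b].mean()                       # transmit only block b
    X[:, b] -= alpha * 2.0 * (X[:, b] - A[:, b])   # local gradient step

# the network average of each block reaches the minimizer (each copy
# stays within one local gradient step of it)
assert np.allclose(X.mean(axis=0), A.mean(axis=0), atol=1e-6)
```

The trade-off the abstract alludes to is visible here: a smaller block cuts the per-round communication but means each coordinate is refreshed less often.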
Final-State Constrained Optimal Control via a Projection Operator Approach
In this paper we develop a numerical method to solve nonlinear optimal
control problems with final-state constraints. Specifically, we extend the
PRojection Operator based Newton's method for Trajectory Optimization (PRONTO),
which was proposed by Hauser for unconstrained optimal control problems. While
in the standard method final-state constraints can be only approximately
handled by means of a terminal penalty, in this work we propose a methodology
to meet the constraints exactly. Moreover, our method guarantees recursive
feasibility of the final-state constraint. This is an appealing property
especially in real-time applications in which one would like to be able to stop
the computation even if the desired tolerance has not been reached, but still
satisfy the constraints. Following the same conceptual idea of PRONTO, the
proposed strategy is based on two main steps which (differently from the
standard scheme) preserve the feasibility of the final-state constraints: (i)
solve a quadratic approximation of the nonlinear problem to find a descent
direction, and (ii) get a (feasible) trajectory by means of a feedback law
(which turns out to be a nonlinear projection operator). To find the (feasible)
descent direction we take advantage of final-state constrained Linear Quadratic
optimal control methods, while the second step is performed by suitably
designing a constrained version of the trajectory tracking projection operator.
The effectiveness of the proposed strategy is tested on the optimal state
transfer of an inverted pendulum.
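The projection-operator idea at the heart of step (ii) can be sketched on a scalar linear system (the paper works with nonlinear dynamics and adds exact final-state constraints; the gain K and the desired curve below are hypothetical). An arbitrary, dynamically infeasible state/input curve is "projected" onto the trajectory manifold by simulating the system under a tracking feedback law:

```python
import numpy as np

# Dynamics x+ = A x + B u; the feedback u = ud + K (xd - x) turns any
# desired curve (xd, ud) into a feasible trajectory (x, u).

A_, B_, K = 1.0, 0.5, 1.2
T = 30
xd = np.sin(np.linspace(0.0, 2.0 * np.pi, T + 1))  # desired state curve
ud = np.zeros(T)                   # desired input (infeasible pair)

x = np.zeros(T + 1)
u = np.zeros(T)
x[0] = xd[0]
for t in range(T):
    u[t] = ud[t] + K * (xd[t] - x[t])      # feedback projection law
    x[t + 1] = A_ * x[t] + B_ * u[t]       # exact system propagation

# the projected pair (x, u) satisfies the dynamics by construction
assert np.allclose(x[1:], A_ * x[:-1] + B_ * u)
```

This is what makes the method attractive for early stopping: every iterate produced by the projection is a genuine trajectory, so interrupting the optimization still yields a feasible (if suboptimal) result.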
