A Linearly Convergent Majorized ADMM with Indefinite Proximal Terms for Convex Composite Programming and Its Applications
This paper aims to study a majorized alternating direction method of
multipliers with indefinite proximal terms (iPADMM) for convex composite
optimization problems. We show that the majorized iPADMM for 2-block convex
optimization problems converges globally under weaker conditions than those
used in the literature and exhibits a linear convergence rate under a local
error bound condition. Building on these results, we establish linear
convergence rates for a symmetric Gauss-Seidel based majorized iPADMM, which is
designed for multi-block composite convex optimization problems. Moreover, we
apply the majorized iPADMM to solve different types of regularized logistic
regression problems. The numerical results on both synthetic and real datasets
demonstrate the efficiency of the majorized iPADMM and also illustrate the
effectiveness of the introduced indefinite proximal terms.
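As a point of reference for the 2-block setting, here is a minimal semi-proximal
ADMM sketch on a toy problem. It is not the paper's majorized iPADMM: the
majorization step is omitted, the scalar proximal weight T merely stands in for
the (possibly indefinite) proximal terms, and the values of lam, sigma, tau and
T are illustrative assumptions.

```python
# Minimal 2-block semi-proximal ADMM sketch for the toy problem
#   minimize 0.5*||x - a||^2 + lam*||z||_1  subject to  x - z = 0.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def semi_proximal_admm(a, lam=0.1, sigma=1.0, tau=1.0, T=0.0, iters=200):
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    y = np.zeros_like(a)
    for _ in range(iters):
        x_old = x
        # x-step: quadratic subproblem with the extra term 0.5*T*||x - x_old||^2
        x = (a + sigma * z - y + T * x_old) / (1.0 + sigma + T)
        # z-step: proximal map of lam*||.||_1 for the augmented Lagrangian
        z = soft_threshold(x + y / sigma, lam / sigma)
        # dual update with step length tau*sigma
        y = y + tau * sigma * (x - z)
    return x, z

x, z = semi_proximal_admm(np.array([1.0, -0.05, 0.3]))
print(x, z)   # both approach the soft-thresholded vector [0.9, 0.0, 0.2]
```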
Regrets of an Online Alternating Direction Method of Multipliers for Online Composite Optimization
In this paper, we investigate the regrets of an online semi-proximal alternating
direction method of multipliers (Online-spADMM) for solving online linearly
constrained convex composite optimization problems. Under mild conditions, we
establish bounds on the objective regret and on the constraint violation regret
when the dual step-length and the penalty parameter are chosen suitably, and we
characterize the order of the optimal penalty parameter. Like the semi-proximal
alternating direction method of multipliers (spADMM), Online-spADMM has the
advantage of resolving the potential non-solvability issue of the subproblems
efficiently. We show the usefulness of the obtained results when applied to
online quadratic optimization problems. The inequalities established for
Online-spADMM are also used to develop the iteration complexity of the average
update of spADMM for solving linearly constrained convex composite optimization
problems.
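For orientation, one common formalization of the two regret notions for an
online linearly constrained problem with losses f_t and regularizer g, subject
to Ax + Bz = b, is sketched below; the notation, and the exact form of the
constraint violation regret, are assumptions and may differ from the paper's.

```latex
\mathrm{Regret}_{\mathrm{obj}}(N) = \sum_{t=1}^{N}\bigl(f_t(x_t)+g(z_t)\bigr)
  - \min_{Ax+Bz=b}\ \sum_{t=1}^{N}\bigl(f_t(x)+g(z)\bigr),
\qquad
\mathrm{Regret}_{\mathrm{cons}}(N) = \sum_{t=1}^{N}\bigl\|A x_t + B z_t - b\bigr\|^{2}.
```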
An effect of large permanent charge: Decreasing flux to zero with increasing transmembrane potential to infinity
In this work, we examine effects of large permanent charges on ionic flow
through ion channels based on a quasi-one dimensional Poisson-Nernst-Planck
model. It turns out that large positive permanent charges inhibit the flux of
cations, as expected; strikingly, as the transmembrane electrochemical potential
for anions increases in a particular way, the flux of anions decreases. The latter
phenomenon was observed experimentally but the cause seemed to be unclear. The
mechanisms for these phenomena are examined with the help of the profiles of
the ionic concentrations, electric fields and electrochemical potentials. The
underlying reasons for the near-zero cation flux and for the decreasing anion
flux are shown to be different over different regions of the permanent
charge. Our model is oversimplified. More structural detail and more
correlations between ions can and should be included. But the basic finding
seems striking and important and deserving of further investigation.
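For reference, a commonly used steady-state quasi-one-dimensional PNP system has
the following form; the notation is the standard one and is assumed rather than
taken from the paper.

```latex
\frac{1}{A(x)}\frac{d}{dx}\!\left(\varepsilon_r(x)\,\varepsilon_0\,A(x)\,\frac{d\Phi}{dx}\right)
  = -e\Bigl(\sum_{k} z_k c_k(x) + Q(x)\Bigr),
\qquad
\frac{dJ_k}{dx}=0,\quad
J_k = -\frac{1}{k_B T}\,D_k(x)\,A(x)\,c_k(x)\,\frac{d\mu_k}{dx},
```

where Φ is the electric potential, A(x) the cross-sectional area of the channel,
Q(x) the permanent charge, and c_k, z_k, D_k, J_k and μ_k denote the
concentration, valence, diffusion coefficient, flux and electrochemical
potential of the k-th ion species.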
Smoothing SQP methods for solving degenerate nonsmooth constrained optimization problems with applications to bilevel programs
We consider a degenerate nonsmooth and nonconvex optimization problem for
which standard constraint qualifications such as the generalized
Mangasarian-Fromovitz constraint qualification (GMFCQ) may not hold. We use smoothing
functions with the gradient consistency property to approximate the nonsmooth
functions and introduce a smoothing sequential quadratic programming (SQP)
algorithm under the exact penalty framework. We show that any accumulation
point of a selected subsequence of the iteration sequence generated by the
smoothing SQP algorithm is a Clarke stationary point, provided that the
sequence of multipliers and the sequence of exact penalty parameters are
bounded. Furthermore, we propose a new condition called the weakly generalized
Mangasarian-Fromovitz constraint qualification (WGMFCQ) that is weaker than the
GMFCQ. We show that the extended version of the WGMFCQ guarantees the
boundedness of the sequence of multipliers and the sequence of exact penalty
parameters and thus guarantees the global convergence of the smoothing SQP
algorithm. We demonstrate that the WGMFCQ can be satisfied by bilevel programs
for which the GMFCQ never holds. Preliminary numerical experiments show that
the algorithm is efficient for solving degenerate nonsmooth optimization
problems such as the simple bilevel program.
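To make the smoothing idea concrete, the sketch below replaces the nonsmooth
pieces of a toy penalized problem by smoothing functions with the gradient
consistency property and drives the smoothing parameter to zero. It is only an
illustration of the smoothing-plus-exact-penalty idea, not the paper's SQP
algorithm; the toy problem, the penalty parameter rho and the update rule for
mu are assumptions.

```python
# Smoothing idea on a toy problem:
#   minimize |x0| + (x1 - 1)^2  subject to  x0 + x1 >= 0,
# with the constraint handled by an exact penalty and both nonsmooth pieces
# replaced by smooth approximations parameterized by mu.
import numpy as np
from scipy.optimize import minimize

def smoothed_abs(t, mu):
    # smooth approximation of |t|, gradient-consistent as mu -> 0
    return np.sqrt(t * t + mu * mu)

def smoothed_plus(t, mu):
    # smooth approximation of max(0, t)
    return 0.5 * (t + np.sqrt(t * t + mu * mu))

def penalized(x, mu, rho):
    return (smoothed_abs(x[0], mu) + (x[1] - 1.0) ** 2
            + rho * smoothed_plus(-(x[0] + x[1]), mu))

x, mu, rho = np.array([2.0, -2.0]), 1.0, 10.0
for _ in range(8):
    x = minimize(penalized, x, args=(mu, rho), method="BFGS").x
    mu *= 0.1            # drive the smoothing parameter to zero
print(x)                 # approaches (0, 1), the solution of the toy problem
```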
Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Quadratic and Semi-Definite Programming
In this paper, we aim to provide a comprehensive analysis on the linear rate
convergence of the alternating direction method of multipliers (ADMM) for
solving linearly constrained convex composite optimization problems. Under a
certain error bound condition, we establish the global linear rate of
convergence for a more general semi-proximal ADMM with the dual steplength
being restricted to the open interval (0, (1+√5)/2). In our
analysis, we assume neither strong convexity nor strict complementarity; only an
error bound condition is required, and it holds automatically for convex composite
quadratic programming. This semi-proximal ADMM, which includes the classic
ADMM, not only has the advantage of resolving the potential non-solvability
issue of the subproblems in the classic ADMM but also possesses the ability
to handle multi-block convex optimization problems efficiently. We shall use
convex composite quadratic programming and quadratic semi-definite programming
as important applications to demonstrate the significance of the obtained
results. Of independent interest in second-order variational analysis, a complete
characterization is provided of the isolated calmness of the nonlinear convex
semi-definite optimization problem in terms of its second-order sufficient
optimality condition and the strict Robinson constraint qualification, for the
purpose of proving the linear rate of convergence of the semi-proximal ADMM when
applied to two- and multi-block convex quadratic semi-definite programming.
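For context, the generic semi-proximal ADMM analyzed in this line of work,
written here for min f(x) + g(z) subject to Ax + Bz = c with penalty parameter
σ > 0, self-adjoint positive semidefinite proximal matrices S and T, and dual
step-length τ (notation assumed, not copied from the paper), iterates

```latex
x^{k+1} = \arg\min_{x}\; \mathcal{L}_{\sigma}(x, z^{k}; y^{k}) + \tfrac{1}{2}\|x - x^{k}\|_{S}^{2}, \\
z^{k+1} = \arg\min_{z}\; \mathcal{L}_{\sigma}(x^{k+1}, z; y^{k}) + \tfrac{1}{2}\|z - z^{k}\|_{T}^{2}, \\
y^{k+1} = y^{k} + \tau\sigma\,\bigl(A x^{k+1} + B z^{k+1} - c\bigr),
```

where L_σ(x, z; y) = f(x) + g(z) + ⟨y, Ax + Bz − c⟩ + (σ/2)||Ax + Bz − c||² is
the augmented Lagrangian and τ lies in (0, (1+√5)/2).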
Let the Cloud Watch Over Your IoT File Systems
Smart devices produce security-sensitive data and keep them in on-device
storage for persistence. The current storage stack on smart devices, however,
offers weak security guarantees: not only because the stack depends on a
vulnerable commodity OS, but also because smart device deployments are known to
be weak on security measures.
To safeguard such data on smart devices, we present a novel storage stack
architecture that i) protects file data in a trusted execution environment
(TEE); ii) outsources file system logic and metadata out of the TEE; and iii)
runs a metadata-only file system replica in the cloud that continuously verifies
the on-device file system's behavior. To realize the architecture, we build
Overwatch, a TrustZone-based storage stack. Overwatch addresses unique
challenges including discerning metadata at fine grains, hiding network delays,
and coping with cloud disconnection. On a suite of three real-world
applications, Overwatch shows moderate security overheads.
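The verification loop can be pictured with the following sketch, which is an
illustrative assumption rather than Overwatch's actual protocol or data format:
the device streams metadata operations, the cloud replays them on a
metadata-only replica, and a divergent state digest flags suspicious file
system behavior.

```python
# Cloud-side metadata-replica verification sketch (hypothetical format).
import hashlib
import json

class MetadataReplica:
    def __init__(self):
        self.tree = {}                        # path -> metadata record

    def apply(self, op):
        if op["kind"] == "create":
            self.tree[op["path"]] = {"size": 0}
        elif op["kind"] == "write":
            self.tree[op["path"]]["size"] = op["size"]
        elif op["kind"] == "unlink":
            self.tree.pop(op["path"], None)

    def digest(self):
        blob = json.dumps(self.tree, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

def verify(ops, device_digest):
    replica = MetadataReplica()
    for op in ops:                            # replay the device's metadata log
        replica.apply(op)
    return replica.digest() == device_digest  # mismatch => flag the device

ops = [{"kind": "create", "path": "/log.txt"},
       {"kind": "write", "path": "/log.txt", "size": 42}]
honest = MetadataReplica()
for op in ops:
    honest.apply(op)
print(verify(ops, honest.digest()))             # True: states agree
print(verify(ops, MetadataReplica().digest()))  # False: divergence detected
```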
A Superconvergent Ensemble HDG Method for Parameterized Convection Diffusion Equations
In this paper, we first devise an ensemble hybridizable discontinuous
Galerkin (HDG) method to efficiently simulate a group of parameterized
convection diffusion PDEs. These PDEs have different coefficients, initial
conditions, source terms and boundary conditions. The ensemble HDG discrete
system shares a common coefficient matrix with multiple right hand side (RHS)
vectors; it reduces both computational cost and storage. We have two
contributions in this paper. First, we derive an optimal convergence rate
for the ensemble solutions on a general polygonal domain, which is the first
such result in the literature. Second, we obtain a superconvergent rate for the
ensemble solutions after an element-by-element postprocessing under some
assumptions on the domain and the coefficients of the PDEs. We present
numerical experiments to confirm our theoretical results.
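The computational saving from the shared coefficient matrix can be illustrated
with a generic sparse solve, independent of HDG specifics; the tridiagonal
matrix below is only a stand-in, not the paper's discrete system.

```python
# One sparse factorization reused for every right-hand side in the ensemble.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, n_ensemble = 1000, 8
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
rhs = np.random.default_rng(0).random((n, n_ensemble))   # one RHS per member

lu = spla.splu(A)                 # factorize the shared matrix once
solutions = np.column_stack([lu.solve(rhs[:, j]) for j in range(n_ensemble)])
print(solutions.shape)            # (1000, 8)
```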
A Regularized Semi-Smooth Newton Method With Projection Steps for Composite Convex Programs
The goal of this paper is to study approaches to bridge the gap between
first-order and second-order type methods for composite convex programs. Our
key observations are: i) Many well-known operator splitting methods, such as
forward-backward splitting (FBS) and Douglas-Rachford splitting (DRS), actually
define a fixed-point mapping; ii) The optimal solutions of the composite convex
program and the solutions of a system of nonlinear equations derived from the
fixed-point mapping are equivalent. Solving this kind of system of nonlinear
equations enables us to develop second-order type methods. Although these
nonlinear equations may be non-differentiable, they are often semi-smooth and
their generalized Jacobian matrix is positive semidefinite due to monotonicity.
By combining with a regularization approach and a known hyperplane projection
technique, we propose an adaptive semi-smooth Newton method and establish its
convergence to global optimality. Preliminary numerical results on
ℓ1-minimization problems demonstrate that our second-order type
algorithms are able to achieve superlinear or quadratic convergence.
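A minimal sketch of this pipeline for the lasso problem is given below: the
forward-backward fixed-point residual is formed, an element of its generalized
Jacobian is assembled from the soft-thresholding pattern, and a regularized
Newton step is taken. It is only an illustration under assumed parameter
choices; the paper's hyperplane projection (globalization) step is omitted.

```python
# Regularized semi-smooth Newton on the FBS residual of
#   min 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def semismooth_newton(A, b, lam=0.1, mu=1e-6, iters=50):
    n = A.shape[1]
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L with L = ||A||_2^2
    x = np.zeros(n)
    for _ in range(iters):
        u = x - t * A.T @ (A @ x - b)          # forward (gradient) step
        F = x - soft_threshold(u, t * lam)     # fixed-point residual of FBS
        if np.linalg.norm(F) < 1e-10:
            break
        d = (np.abs(u) > t * lam).astype(float)   # soft-thresholding pattern
        # an element of the generalized Jacobian of the residual
        J = np.eye(n) - d[:, None] * (np.eye(n) - t * A.T @ A)
        x = x + np.linalg.solve(J + mu * np.eye(n), -F)   # regularized Newton step
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
print(semismooth_newton(A, b))
```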
Energy-efficient population coding constrains network size of a neuronal array system
Here, we consider the open issue of how the energy efficiency of the neural
information transmission process in a general neuronal array constrains the
network size, and how well this network size ensures that neural information
is transmitted reliably in a noisy environment. By direct mathematical
analysis, we have obtained general solutions proving that there exists an
optimal neuronal number in the network with which the average coding energy
cost (defined as energy consumption divided by mutual information) per neuron
passes through a global minimum for both subthreshold and suprathreshold
signals. As background noise intensity increases, the optimal
neuronal number decreases for subthreshold and increases for suprathreshold
signals. The existence of an optimal neuronal number in an array network
reveals a general rule for population coding stating that the neuronal number
should be large enough to ensure reliable information transmission robust to
the noisy environment but small enough to minimize energy cost.
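One plausible formalization of the quantity being minimized, assuming E(N)
denotes the energy consumption and I(N) the mutual information of an N-neuron
array (symbols not taken from the paper), is

```latex
N^{*} = \arg\min_{N}\ \frac{E(N)}{N\, I(N)},
```

i.e. the optimal population size minimizes the average energy spent per neuron
and per unit of transmitted information.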
A conjugate gradient method for electronic structure calculations
In this paper, we study a conjugate gradient method for electronic structure
calculations. We propose a Hessian based step size strategy, which together
with three orthogonality approaches yields three algorithms for computing the
ground state energy of atomic and molecular systems. Under some mild
assumptions, we prove that our algorithms converge locally. Our numerical
experiments show that the conjugate gradient method is efficient.
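As a rough illustration of this class of methods, the sketch below runs a
nonlinear conjugate gradient iteration for the lowest eigenstates of a
symmetric matrix H, minimizing trace(XᵀHX) over orthonormal X with a QR
retraction restoring orthogonality. It is not the paper's algorithm: the
Polak-Ribière+ direction, the crude candidate-step line search (in place of the
Hessian-based step size), and the test matrix are all assumptions.

```python
# Conjugate-gradient-style block eigensolver sketch with QR re-orthonormalization.
import numpy as np

def cg_lowest_eigenvalues(H, k=2, iters=300, seed=0):
    n = H.shape[0]
    X, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, k)))
    P = np.zeros((n, k))
    G_old = None
    energy = lambda Y: np.trace(Y.T @ H @ Y)
    for _ in range(iters):
        G = H @ X - X @ (X.T @ (H @ X))          # projected (Riemannian) gradient
        if np.linalg.norm(G) < 1e-12:
            break
        beta = 0.0
        if G_old is not None:
            beta = max(0.0, np.sum(G * (G - G_old)) / np.sum(G_old * G_old))
        P = -G + beta * P                        # Polak-Ribiere+ direction
        # crude line search: try a few steps, keep the best orthonormalized iterate
        trials = [np.linalg.qr(X + s * P)[0] for s in (1.0, 0.3, 0.1, 0.03, 0.01)]
        best = min(trials, key=energy)
        if energy(best) < energy(X):
            X = best
        G_old = G
    return np.sort(np.linalg.eigvalsh(X.T @ H @ X))   # Ritz values

# test matrix with a well separated low end of the spectrum
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
H = Q @ np.diag(np.concatenate(([-5.0, -4.0], np.linspace(1.0, 2.0, 58)))) @ Q.T
print(cg_lowest_eigenvalues(H))       # ~ [-5, -4]
print(np.linalg.eigvalsh(H)[:2])      # reference lowest eigenvalues
```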
