Gradient-Free Methods for Saddle-Point Problem
In this paper, we generalize the approach of Gasnikov et al., 2017, which makes it
possible to solve (stochastic) convex optimization problems with an inexact
gradient-free oracle, to the convex-concave saddle-point problem. The proposed
approach performs at least as well as the best existing approaches. But for a special
set-up (simplex-type constraints and closeness of the Lipschitz constants in the 1 and
2 norms), our approach reduces the required number of
oracle calls (function calculations) by a factor of n/log n. Our method uses a stochastic
approximation of the gradient via finite differences. In this case, the
function must be defined not only on the optimization set itself, but also in a
certain neighbourhood of it. In the second part of the paper, we analyze the
case when such an assumption cannot be made: we propose a general approach for
modifying the method to handle this problem, and we also apply this
approach to particular cases of some classical sets.
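The gradient approximation mentioned above can be illustrated with a standard
two-point randomized finite-difference estimator. The sketch below is a minimal
illustration under assumed choices (a smoothing radius tau and a direction drawn
uniformly from the unit sphere), not the exact scheme of the paper.

```python
import numpy as np

def two_point_gradient_estimate(f, x, tau=1e-4, rng=None):
    """Randomized two-point finite-difference estimate of grad f at x.

    A minimal sketch of a gradient-free oracle: f is only evaluated at
    points, never differentiated. The smoothing radius tau and the uniform
    direction on the unit sphere are illustrative assumptions, not
    necessarily the choices made in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)              # random direction on the unit sphere
    # x + tau*e and x - tau*e may leave the feasible set, which is why the
    # function must be defined in a neighbourhood of the optimization set.
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e
```

In the saddle-point setting, such an estimator would be applied separately to the
minimization variables (for a descent step) and the maximization variables (for an
ascent step), for example inside a mirror-descent-type update.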
