2,078 research outputs found
A code for square permutations and convex permutominoes
In this article we consider square permutations, a natural subclass of
permutations defined in terms of geometric conditions that can also be
described in terms of pattern-avoiding permutations, and convex permutominoes,
a related subclass of polyominoes. While these two classes of objects arose
independently in various contexts, they play a natural role in the description
of certain random horizontally and vertically convex grid configurations.
We propose a common approach to the enumeration of these two classes of
objects that allows us to explain the known common form of their generating
functions, and to derive new refined formulas and linear-time random generation
algorithms for these objects and the associated grid configurations.
Comment: 18 pages, 10 figures. Revision according to referees' remarks
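As a concrete illustration of the geometric condition on square permutations, the following Python sketch tests whether every point of a permutation's diagram is a record (a left-to-right or right-to-left minimum or maximum). This record-based characterization is an assumption drawn from the standard definition of square permutations, not from the abstract itself, and the function name is purely illustrative.

```python
# Sketch: record-based test for square permutations.
# Assumption (not stated in the abstract): a permutation is "square" iff
# every entry is a left-to-right or right-to-left minimum or maximum,
# i.e. every point of its diagram lies on the boundary of the diagram.

def is_square_permutation(sigma):
    """sigma: a sequence over {1, ..., n}, e.g. (2, 4, 1, 3)."""
    n = len(sigma)
    is_record = [False] * n

    def mark(indices):
        lo, hi = float("inf"), float("-inf")
        for i in indices:
            v = sigma[i]
            if v < lo:
                lo = v
                is_record[i] = True   # running minimum
            if v > hi:
                hi = v
                is_record[i] = True   # running maximum
    mark(range(n))                    # left-to-right records
    mark(range(n - 1, -1, -1))        # right-to-left records
    return all(is_record)

if __name__ == "__main__":
    print(is_square_permutation((2, 4, 1, 3)))     # True: every entry is a record
    print(is_square_permutation((2, 4, 3, 1, 5)))  # False: the entry 3 is interior
```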
On the Anonymization of Differentially Private Location Obfuscation
Obfuscation techniques in location-based services (LBSs) have been shown
to be useful for hiding the concrete locations of service users, but they do
not necessarily provide anonymity. We quantify the anonymity of the location
data obfuscated by the planar Laplacian mechanism and that by the optimal
geo-indistinguishable mechanism of Bordenabe et al. We empirically show that
the latter provides stronger anonymity than the former in the sense that more
users in the database satisfy k-anonymity. To formalize and analyze such
approximate anonymity, we introduce the notion of asymptotic anonymity. Then we
show that the location data obfuscated by the optimal geo-indistinguishable
mechanism can be anonymized by removing a smaller number of users from the
database. Furthermore, we demonstrate that the optimal geo-indistinguishable
mechanism has better utility both for users and for data analysts.
Comment: ISITA'18 conference paper
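For readers unfamiliar with the mechanisms being compared, here is a minimal sketch of planar Laplacian obfuscation in Python, following the standard polar sampling of the planar Laplace mechanism used for geo-indistinguishability. The function name, coordinate convention, and use of scipy's Lambert W routine are my choices for illustration and are not taken from the paper itself.

```python
# Sketch of the planar Laplacian mechanism referenced in the abstract
# (polar sampling of 2D Laplace noise); all names and parameters here
# are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.special import lambertw

def planar_laplace(loc, epsilon, rng=None):
    """Return loc = (x, y) perturbed with planar Laplace noise of parameter epsilon."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)      # uniform direction
    p = rng.uniform(0.0, 1.0)                  # uniform quantile for the radius
    # Inverse CDF of the radius uses the -1 branch of the Lambert W function.
    r = -(1.0 / epsilon) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
    return (loc[0] + r * np.cos(theta), loc[1] + r * np.sin(theta))

# Example: obfuscate one location; anonymity could then be gauged by counting
# how many users in a database are plausible sources of the reported point.
reported = planar_laplace((35.68, 139.76), epsilon=0.5)
print(reported)
```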
The importance of better models in stochastic optimization
Standard stochastic optimization methods are brittle, sensitive to stepsize
choices and other algorithmic parameters, and they exhibit instability outside
of well-behaved families of objectives. To address these challenges, we
investigate models for stochastic minimization and learning problems that
exhibit better robustness to problem families and algorithmic parameters. With
appropriately accurate models---which we call the aProx family---stochastic
methods can be made stable, provably convergent and asymptotically optimal;
even modeling that the objective is nonnegative is sufficient for this
stability. We extend these results beyond convexity to weakly convex
objectives, which include compositions of convex losses with smooth functions
common in modern machine learning applications. We highlight the importance of
robustness and accurate modeling with a careful experimental evaluation of
convergence time and algorithm sensitivity.
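To make the role of the model concrete, here is a minimal sketch of a truncated-model stochastic step in the spirit of the aProx family, assuming only that each sampled loss is nonnegative. The closed-form step is one reading of minimizing max(f(x_k) + <g_k, y - x_k>, 0) plus a proximal term; the names truncated_step and stepsize, and the least-squares usage, are illustrative rather than taken from the paper.

```python
# Minimal sketch of a truncated-model ("aProx"-style) stochastic step,
# assuming each per-sample loss is known to be nonnegative.
import numpy as np

def truncated_step(x, loss_val, grad, stepsize):
    """One update x+ = argmin_y max(loss_val + <grad, y - x>, 0)
                               + ||y - x||^2 / (2 * stepsize),
    which has the closed form below when loss_val >= 0."""
    gnorm2 = float(np.dot(grad, grad))
    if gnorm2 == 0.0:
        return x
    # Truncation: never step past the point where the linear model hits 0,
    # which is what keeps the method insensitive to overly large stepsizes.
    effective = min(stepsize, loss_val / gnorm2)
    return x - effective * grad

# Usage on noisy least squares: loss = 0.5 * (a @ x - b)^2 >= 0.
rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
x = np.zeros(5)
for k in range(1, 1001):
    a = rng.normal(size=5)
    b = a @ x_true + 0.1 * rng.normal()
    resid = a @ x - b
    x = truncated_step(x, 0.5 * resid**2, resid * a, stepsize=1.0 / np.sqrt(k))
print(np.linalg.norm(x - x_true))   # the iterates approach x_true
```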
Mean Estimation from Adaptive One-bit Measurements
We consider the problem of estimating the mean of a normal distribution under
the following constraint: the estimator can access only a single bit from each
sample from this distribution. We study the squared error risk in this
estimation as a function of the number of samples and one-bit measurements n.
We consider an adaptive estimation setting where the single bit sent at step n
is a function of both the new sample and the previously acquired bits.
For this setting, we show that no estimator can attain asymptotic mean squared
error smaller than π/(2n) times the variance. In other words, the
one-bit restriction increases the number of samples required for a prescribed
accuracy of estimation by a factor of at least π/2 compared to the
unrestricted case. In addition, we provide an explicit estimator that attains
this asymptotic error, showing that, rather surprisingly, only π/2 times
more samples are required in order to attain estimation performance equivalent
to the unrestricted case.
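One natural adaptive scheme matching this setting thresholds each new sample at the current estimate and updates the estimate by stochastic approximation. The sketch below, including the gain sigma*sqrt(pi/2)/n and the assumption that sigma is known, is an illustration of such a scheme and is not claimed to be the exact estimator constructed in the paper.

```python
# Sketch of an adaptive one-bit mean estimator: each transmitted bit is the
# sign of the new sample relative to the current estimate, and the estimate
# is updated by a Robbins-Monro recursion. The gain sigma*sqrt(pi/2)/n and
# the known-sigma assumption are illustrative, not taken from the paper.
import numpy as np

def one_bit_mean(samples, sigma):
    """Estimate the mean of N(mu, sigma^2) while seeing only one bit per sample."""
    est = 0.0
    c = sigma * np.sqrt(np.pi / 2.0)          # stochastic-approximation gain
    for n, x in enumerate(samples, start=1):
        bit = 1.0 if x > est else -1.0        # the single bit sent at step n
        est += (c / n) * bit
    return est

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0
print(one_bit_mean(rng.normal(mu, sigma, size=100_000), sigma))  # close to mu
```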
Distributed Delayed Stochastic Optimization
We analyze the convergence of gradient-based optimization algorithms that
base their updates on delayed stochastic gradient information. The main
application of our results is to the development of gradient-based distributed
optimization algorithms where a master node performs parameter updates while
worker nodes compute stochastic gradients based on local information in
parallel, which may give rise to delays due to asynchrony. We take motivation
from statistical problems where the size of the data is so large that it cannot
fit on one computer; with the advent of huge datasets in biology, astronomy,
and the internet, such problems are now common. Our main contribution is to
show that for smooth stochastic problems, the delays are asymptotically
negligible and we can achieve order-optimal convergence results. In application
to distributed optimization, we develop procedures that overcome communication
bottlenecks and synchronization requirements. We show n-node architectures
whose optimization error in stochastic problems---in spite of asynchronous
delays---scales asymptotically as O(1/√(nT)) after T iterations.
This rate is known to be optimal for a distributed system with n nodes even
in the absence of delays. We additionally complement our theoretical results
with numerical experiments on a statistical machine learning task.
Comment: 27 pages, 4 figures
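As a toy illustration of the delayed-update model, the following sketch simulates a master that applies, at each step, a stochastic gradient computed by a worker at an iterate that is tau steps stale. The quadratic objective, the fixed delay tau, and the 1/sqrt(t) stepsizes are illustrative assumptions rather than the paper's experimental setup.

```python
# Sketch: master-worker SGD with delayed gradients. The master applies, at
# step t, a stochastic gradient that was computed at the stale iterate
# x_{t - tau}. All problem and delay parameters here are illustrative.
import collections
import numpy as np

rng = np.random.default_rng(0)
dim, tau, T = 10, 5, 20_000           # tau plays the role of the worker delay
x_true = rng.normal(size=dim)
x = np.zeros(dim)
in_flight = collections.deque()       # gradients in transit, oldest first

def stoch_grad(x_stale):
    a = rng.normal(size=dim)
    b = a @ x_true + 0.1 * rng.normal()
    return (a @ x_stale - b) * a      # gradient of 0.5 * (a @ x - b)^2

for t in range(1, T + 1):
    in_flight.append(stoch_grad(x))            # worker reads the current iterate
    if len(in_flight) > tau:                   # its gradient arrives tau steps late
        g = in_flight.popleft()
        x -= (1.0 / np.sqrt(t)) * g            # delayed update at the master

print(np.linalg.norm(x - x_true))              # small error despite the delays
```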
