898 research outputs found
Kissing numbers and transference theorems from generalized tail bounds
We generalize Banaszczyk's seminal tail bound for the Gaussian mass of a
lattice to a wide class of test functions. From this we obtain quite general
transference bounds, as well as bounds on the number of lattice points
contained in certain bodies. As applications, we bound the lattice kissing
number in $\ell_p$ norms, and also give
a proof of a new transference bound in the $\ell_1$ norm.
Comment: Previous title: "Generalizations of Banaszczyk's transference theorems and tail bound"
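For context, the classical bound being generalized here is usually stated as follows (this is the standard form from the lattice literature, quoted for orientation rather than from the paper itself): writing $\rho(A) = \sum_{\mathbf{x} \in A} e^{-\pi \|\mathbf{x}\|^2}$ for the Gaussian mass of a set $A$, Banaszczyk's lemma says that for any lattice $L \subset \mathbb{R}^n$ and any $c \geq 1/\sqrt{2\pi}$,
$$\rho\big(\{\mathbf{x} \in L : \|\mathbf{x}\| > c\sqrt{n}\}\big) \leq \big(c\sqrt{2\pi e}\, e^{-\pi c^2}\big)^n \cdot \rho(L),$$
where the factor on the right is strictly less than one for $c > 1/\sqrt{2\pi}$. The paper's generalization replaces the Gaussian $e^{-\pi\|\mathbf{x}\|^2}$ by a wide class of test functions.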
Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One
We show the first dimension-preserving search-to-decision reductions for
approximate SVP and CVP. In particular, for any $\gamma \leq 1 + O(\log n / n)$,
we obtain an efficient dimension-preserving reduction from $\gamma^{O(n/\log n)}$-SVP
to $\gamma$-GapSVP and an efficient dimension-preserving reduction
from $\gamma^{O(n/\log n)}$-CVP to $\gamma$-GapCVP. These results generalize the known
equivalences of the search and decision versions of these problems in the exact
case when $\gamma = 1$. For SVP, we actually obtain something slightly stronger
than a search-to-decision reduction: we reduce $\gamma^{O(n/\log n)}$-SVP to
$\gamma$-unique SVP, a potentially easier problem than $\gamma$-GapSVP.
Comment: Updated to acknowledge additional prior work
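For readers less familiar with the terminology, the search and decision problems referred to here have the following standard definitions (included for orientation; they are not quoted from the paper): $\gamma$-SVP asks, given a basis of a lattice $L$, to find a nonzero $\mathbf{v} \in L$ with $\|\mathbf{v}\| \leq \gamma \cdot \lambda_1(L)$, where $\lambda_1(L)$ denotes the length of a shortest nonzero lattice vector, while $\gamma$-GapSVP asks to decide, given a basis and a distance $d$, whether $\lambda_1(L) \leq d$ or $\lambda_1(L) > \gamma d$; the CVP variants are defined analogously in terms of the distance from a given target point. A search-to-decision reduction solves the search problem given an oracle for the decision problem, and a dimension-preserving reduction does so without increasing the rank of the lattices on which the oracle is called.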
On the Quantitative Hardness of CVP
For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem
in the $\ell_p$ norm ($\mathrm{CVP}_p$) over rank-$n$ lattices cannot be solved in
$2^{(1-\varepsilon)n}$ time for any constant $\varepsilon > 0$ unless the Strong Exponential
Time Hypothesis (SETH) fails. We then extend this result to "almost all" values
of $p$, not including the even integers. This comes tantalizingly close
to settling the quantitative time complexity of the important special case of
$\mathrm{CVP}_2$ (i.e., $\mathrm{CVP}$ in the Euclidean norm), for which a $2^{n+o(n)}$-time
algorithm is known. In particular, our result applies for any $p$
that approaches $2$ as $n \to \infty$.
We also show a similar SETH-hardness result for $\mathrm{SVP}_\infty$; hardness of
approximating $\mathrm{CVP}_p$ to within some constant factor under the so-called
Gap-ETH assumption; and other quantitative hardness results for $\mathrm{CVP}_p$ and
$\mathrm{CVPP}_p$ for any $p$ under different assumptions.
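For orientation, the assumptions invoked here are standardly stated as follows (standard formulations, not quoted from the paper): the Strong Exponential Time Hypothesis asserts that for every constant $\varepsilon > 0$ there is a $k$ such that $k$-SAT on $n$ variables cannot be solved in $2^{(1-\varepsilon)n}$ time, and Gap-ETH asserts that, for some constant $\varepsilon > 0$, even distinguishing satisfiable 3-SAT instances from those in which only a $(1-\varepsilon)$ fraction of clauses can be satisfied simultaneously requires $2^{\Omega(n)}$ time.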
On the Closest Vector Problem with a Distance Guarantee
We present a substantially more efficient variant, both in terms of running
time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky,
and Micciancio for solving CVPP (the preprocessing version of the Closest
Vector Problem, CVP) with a distance guarantee. For instance, for any $\alpha < 1/2$,
our algorithm finds the (unique) closest lattice point for any target
point whose distance from the lattice is at most $\alpha$ times the length of
the shortest nonzero lattice vector, while requiring substantially fewer vectors of
preprocessing advice and substantially less running time than the LLM algorithm.
As our second main contribution, we present reductions showing that it
suffices to solve CVP, both in its plain and preprocessing versions, when the
input target point is within some bounded distance of the lattice. The
reductions are based on ideas due to Kannan and a recent sparsification
technique due to Dadush and Kun. Combining our reductions with the LLM
algorithm gives an approximation factor for search CVPP that improves on the
previous best, due to Lagarias, Lenstra, and Schnorr. When combined with our
improved algorithm we obtain, somewhat surprisingly, that only $O(n)$ vectors of
preprocessing advice are sufficient to solve CVPP with (the only slightly worse)
approximation factor of $O(n)$.
Comment: An early version of the paper was titled "On Bounded Distance
Decoding and the Closest Vector Problem with Preprocessing". Conference on
Computational Complexity (2014)
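For context, the "distance guarantee" setting is usually formalized as Bounded Distance Decoding (a standard definition, not quoted from the paper): for a parameter $\alpha > 0$, $\alpha$-BDD asks, given a lattice $L$ and a target $\mathbf{t}$ promised to satisfy $\mathrm{dist}(\mathbf{t}, L) \leq \alpha \cdot \lambda_1(L)$, to find a lattice point closest to $\mathbf{t}$; for $\alpha < 1/2$ the promise makes this closest point unique. In the preprocessing versions (CVPP and BDD with preprocessing), an unbounded preprocessing phase may compute advice that depends only on the lattice, and the query algorithm must then handle arbitrary targets using only that advice.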
On the Lattice Distortion Problem
We introduce and study the \emph{Lattice Distortion Problem} (LDP). LDP asks
how "similar" two lattices are. I.e., what is the minimal distortion of a
linear bijection between the two lattices? LDP generalizes the Lattice
Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply
asks whether the minimal distortion is one.
As our first contribution, we show that the distortion between any two
lattices is approximated, up to a factor depending only on the dimension, by a
simple function of their successive minima. Our methods are constructive, allowing us to compute
low-distortion mappings that are within a (dimension-dependent) factor
of optimal in polynomial time and within a smaller factor of optimal in
singly exponential time. Our algorithms rely on a notion of basis reduction
introduced by Seysen (Combinatorica 1993), which we show is intimately related
to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to
within any constant factor (under randomized reductions), by a reduction from
the Shortest Vector Problem.
Comment: This is the full version of a paper that appeared in ESA 2016
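One plausible way to make "minimal distortion of a linear bijection" precise (offered here for orientation; the paper's exact definition may differ in normalization) is via the condition number: for a linear bijection $T$ with $T(L_1) = L_2$, take its distortion to be $\|T\| \cdot \|T^{-1}\|$, the ratio between the largest and smallest factors by which $T$ stretches vectors, and define the distortion between $L_1$ and $L_2$ as the minimum of this quantity over all such bijections. This quantity is always at least $1$, and it equals $1$ precisely when some bijection maps one lattice onto the other while stretching all directions equally, which is the sense in which the Lattice Isomorphism Problem asks whether the minimal distortion is one.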
Solving the Closest Vector Problem in 2^n Time --- The Discrete Gaussian Strikes Again!
We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the
exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This
improves on the previous fastest algorithm, the deterministic
$\widetilde{O}(4^n)$-time and $\widetilde{O}(2^n)$-space algorithm of
Micciancio and Voulgaris.
We achieve our main result in three steps. First, we show how to modify the
sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian
sampling over lattice shifts, $L - \mathbf{t}$, with very low parameters. While the
actual algorithm is a natural generalization of [ADRS15], the analysis uses
substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for
approximate CVP for any approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$.
Second, we show that the approximate closest vectors to a target vector can
be grouped into "lower-dimensional clusters," and we use this to obtain a
recursive reduction from exact CVP to a variant of approximate CVP that
"behaves well with these clusters." Third, we show that our discrete Gaussian
sampling algorithm can be used to solve this variant of approximate CVP.
The analysis depends crucially on some new properties of the discrete
Gaussian distribution and approximate closest vectors, which might be of
independent interest.
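For reference, the discrete Gaussian distribution at the heart of this approach is standardly defined as follows (a standard definition, included for orientation): for a parameter $s > 0$, let $\rho_s(\mathbf{x}) = e^{-\pi\|\mathbf{x}\|^2/s^2}$ and $\rho_s(A) = \sum_{\mathbf{x} \in A} \rho_s(\mathbf{x})$; the discrete Gaussian over the shift $L - \mathbf{t}$ with parameter $s$ assigns each $\mathbf{y} \in L - \mathbf{t}$ probability $\rho_s(\mathbf{y})/\rho_s(L - \mathbf{t})$. Sampling with a very small parameter $s$ concentrates the output on the shortest vectors of $L - \mathbf{t}$, i.e., on $\mathbf{v} - \mathbf{t}$ for lattice vectors $\mathbf{v}$ closest to $\mathbf{t}$, which is why low-parameter samplers yield CVP algorithms.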
The ontogeny of sexual size dimorphism of a moth: when do males and females grow apart?
Sexual dimorphism in body size (sexual size dimorphism) is common in many species. The sources of selection that generate the independent evolution of adult male and female size have been investigated extensively by evolutionary biologists, but how and when females and males grow apart during ontogeny is poorly understood. Here we use the hawkmoth, Manduca sexta, to examine when sexual size dimorphism arises by measuring body mass every day during development. We further investigated whether environmental variables influence the ontogeny of sexual size dimorphism by raising moths on three different diet qualities (poor, medium and high). We found that size dimorphism arose during early larval development on the highest quality food treatment, but it arose late in larval development when raised on the medium quality food. This female-biased dimorphism (females larger) increased substantially from the pupal-to-adult stage in both treatments, a pattern that appears to be common in Lepidopterans. Although dimorphism appeared in a few stages when individuals were raised on the poorest quality diet, it did not persist such that male and female adults were the same size. This demonstrates that the environmental conditions that insects are raised in can affect the growth trajectories of males and females differently and thus when dimorphism arises or disappears during development. We conclude that the development of sexual size dimorphism in M. sexta occurs during larval development and continues to accumulate during the pupal/adult stages, and that environmental variables such as diet quality can influence patterns of dimorphism in adults.
Just Take the Average! An Embarrassingly Simple 2^n-Time Algorithm for SVP (and CVP)
We show a 2^{n+o(n)}-time (and space) algorithm for the Shortest Vector Problem on lattices (SVP) that works by repeatedly running an embarrassingly simple "pair and average" sieving-like procedure on a list of lattice vectors. This matches the running time (and space) of the current fastest known algorithm, due to Aggarwal, Dadush, Regev, and Stephens-Davidowitz (ADRS, in STOC, 2015), with a far simpler algorithm. Our algorithm is in fact a modification of the ADRS algorithm, with a certain careful rejection sampling step removed.
The correctness of our algorithm follows from a more general "meta-theorem," showing that such rejection sampling steps are unnecessary for a certain class of algorithms and use cases. In particular, this also applies to the related 2^{n + o(n)}-time algorithm for the Closest Vector Problem (CVP), due to Aggarwal, Dadush, and Stephens-Davidowitz (ADS, in FOCS, 2015), yielding a similar embarrassingly simple algorithm for gamma-approximate CVP for any gamma = 1+2^{-o(n/log n)}. (We can also remove the rejection sampling procedure from the 2^{n+o(n)}-time ADS algorithm for exact CVP, but the resulting algorithm is still quite complicated.)
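The basic "pair and average" step is simple enough to sketch. The following Python fragment is a toy illustration written for this summary (the function name pair_and_average and all parameter choices are made up here, and the careful choice of distributions and of how many vectors to keep, which the paper's analysis is actually about, is omitted); it relies only on the fact that if two lattice vectors have coordinate vectors that agree modulo 2 in a fixed basis, then their average is again a lattice vector.

    import random
    import numpy as np

    def pair_and_average(coords):
        # coords: list of integer coordinate vectors; each c represents the lattice point B @ c.
        # Greedily pairs vectors whose coordinates agree mod 2 and replaces each pair by its
        # average, which is again a lattice vector since (c1 + c2)/2 is integral when c1 = c2 (mod 2).
        random.shuffle(coords)
        buckets, output = {}, []
        for c in coords:
            key = tuple(int(x) % 2 for x in c)
            if key in buckets:
                output.append((buckets.pop(key) + c) // 2)  # exact: the sum is even coordinate-wise
            else:
                buckets[key] = c
        return output  # unpaired vectors are simply dropped in this toy version

    # Toy usage: start from random combinations of basis vectors, average repeatedly,
    # then read off the shortest nonzero vector found. The real algorithm instead starts
    # from (approximate) discrete Gaussian samples and keeps careful track of the
    # resulting distribution.
    B = np.array([[2, 0, 0, 0], [1, 3, 0, 0], [0, 1, 4, 0], [1, 1, 1, 5]])
    coords = [np.random.randint(-8, 9, size=4) for _ in range(200)]
    for _ in range(5):
        coords = pair_and_average(coords)
    nonzero = [B @ c for c in coords if np.any(c)]
    if nonzero:
        print(min(nonzero, key=np.linalg.norm))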
