A Lower Bound for the Discrepancy of a Random Point Set
We show that there is a constant $c$ such that for all $n$ and $d$, the point set consisting of $n$ points chosen uniformly at random in the $d$-dimensional unit cube with constant probability admits an axis-parallel rectangle containing $c\sqrt{dn}$ points more than expected. Consequently, the expected star discrepancy of a random point set is of order $\sqrt{d/n}$.
Comment: 7 pages
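The claimed excess can be probed numerically. The sketch below is an illustrative experiment of ours, not the paper's proof technique, and all names are hypothetical: it draws $n$ uniform random points in $[0,1]^d$ and records, over randomly sampled anchored boxes $[0,x]$, the largest surplus of points over the expected count $n \cdot \mathrm{vol}([0,x])$:

```python
import random

def excess_over_random_boxes(n, d, trials=2000, seed=0):
    """Draw n uniform points in [0,1]^d and measure, over randomly
    sampled anchored boxes [0, x], the largest surplus of points
    over the expected count n * vol([0, x])."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(d)] for _ in range(n)]
    best = 0.0
    for _ in range(trials):
        x = [rng.random() for _ in range(d)]       # random anchored box [0, x]
        vol = 1.0
        for xi in x:
            vol *= xi
        count = sum(all(p[j] <= x[j] for j in range(d)) for p in pts)
        best = max(best, count - n * vol)          # surplus over expectation
    return best
```

Since only finitely many boxes are sampled, the probe underestimates the true maximal excess; for growing $n$ one can compare its output against the $\sqrt{dn}$ scaling.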
Improved Approximation Algorithms for the Min-Max Selecting Items Problem
We give a simple deterministic approximation algorithm for the Min-Max Selecting Items problem with $K$ scenarios. While our main goal is simplicity, this result also improves over the previous best approximation ratio due to Kasperski, Kurpisz, and Zieliński (Information Processing Letters, 2013). Despite using the method of pessimistic estimators, the algorithm has a polynomial runtime also in the RAM model of computation. We also show that the LP formulation for this problem by Kasperski and Zieliński (Annals of Operations Research, 2009), which is the basis for the previous work and ours, has an integrality gap bounded away from one.
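To fix the problem definition, the brute-force sketch below (ours, not the paper's approximation algorithm; the function name is hypothetical) selects $p$ of the $n$ items so that the maximum total cost over all scenarios is minimized; it is exponential in $n$ and only serves to make the min-max objective concrete:

```python
from itertools import combinations

def min_max_select(costs, p):
    """Brute-force Min-Max Selecting Items: pick p of the n items so that
    the maximum total cost over all scenarios is minimized.
    costs[s][i] is the cost of item i in scenario s."""
    n = len(costs[0])
    best_val, best_set = float("inf"), None
    for items in combinations(range(n), p):
        # worst-case (maximum) scenario cost of this selection
        worst = max(sum(row[i] for i in items) for row in costs)
        if worst < best_val:
            best_val, best_set = worst, items
    return best_val, best_set
```

With two scenarios `[[1, 2, 3], [3, 2, 1]]` and $p = 2$, the selection $\{0, 2\}$ balances both scenarios at cost 4, which is optimal.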
Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial efforts. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example of such an update mechanism is the one-fifth success
rule for step-size adaptation in evolution strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in discrete settings. We consider the $(1+(\lambda,\lambda))$ GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on OneMax is linear. This is better than what any static population size can achieve and is asymptotically optimal also among all adaptive parameter choices.
Comment: This is the full version of a paper that is to appear at GECCO 2015
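The mechanism itself is easy to state in code. The sketch below is our toy illustration with hypothetical names: the paper adapts the population size of a genetic algorithm, whereas this simplified version applies the one-fifth success rule to a mutation rate in a (1+1)-style loop on OneMax. After a success the parameter is multiplied by $F$, after a failure it is divided by $F^{1/4}$, so it is stationary exactly when about one in five iterations succeeds:

```python
import random

def onemax(x):
    return sum(x)

def one_fifth_rule_ea(n, F=1.5, max_iters=100000):
    """(1+1)-style EA on OneMax whose per-bit mutation probability is
    controlled by the one-fifth success rule: multiply the rate by F
    after a success, divide it by F**0.25 after a failure, so the rate
    is stable exactly when about one in five iterations succeeds."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    rate = 1.0 / n                      # initial per-bit mutation probability
    for _ in range(max_iters):
        if fx == n:                     # optimum found
            return x
        y = [1 - b if random.random() < rate else b for b in x]
        fy = onemax(y)
        if fy > fx:                     # success: larger steps
            x, fx = y, fy
            rate = min(0.5, rate * F)
        else:                           # failure: smaller steps
            rate = max(1.0 / n, rate / F ** 0.25)
    return x
```

The caps at $1/n$ and $1/2$ keep the rate in a sensible range; in the paper's setting the adapted parameter moves in the opposite direction (a success shrinks the population size, since a larger population means more effort per iteration).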
Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem
The cryptogenography problem, introduced by Brody, Jakobsen, Scheder, and
Winkler (ITCS 2014), is to collaboratively leak a piece of information known to
only one member of a group (i)~without revealing who was the origin of this
information and (ii)~without any private communication, neither during the
process nor before. Despite several deep structural results, even the smallest
case of leaking one bit of information present at one of two players is not
well understood. Brody et al. gave a 2-round protocol enabling the two players to succeed with probability 1/3 and showed the hardness result that no protocol can give a success probability of more than 3/8.
In this work, we show that neither bound is tight. Our new hardness result,
obtained by a different application of the concavity method used also in the
previous work, states that a success probability better than 0.3672 is not
possible. Using both theoretical and numerical approaches, we improve the lower bound to 0.3384, that is, we give a protocol leading to this success probability. To ease the design of new protocols, we prove an equivalent formulation of the cryptogenography problem as a solitaire vector splitting game.
Via an automated game tree search, we find good strategies for this game. We
then translate the splits that occurred in these strategies into inequalities
relating position values and use an LP solver to find an optimal solution for
these inequalities. This gives slightly better game values, but more
importantly, it gives a more compact representation of the protocol and a way
to easily verify the claimed quality of the protocol.
These improved bounds, as well as the large sizes and depths of the improved protocols we find, suggest that finding good protocols for the cryptogenography problem, and understanding their structure, is harder than the simple problem formulation suggests.
Quasi-Random Rumor Spreading: Reducing Randomness Can Be Costly
We give a time-randomness tradeoff for the quasi-random rumor spreading
protocol proposed by Doerr, Friedrich and Sauerwald [SODA 2008] on complete
graphs. In this protocol, the goal is to spread a piece of information
originating from one vertex throughout the network. Each vertex is assumed to
have a (cyclic) list of its neighbors. Once a vertex is informed by one of its
neighbors, it chooses a position in its list uniformly at random and then
informs its neighbors starting from that position and proceeding in order of
the list. Angelopoulos, Doerr, Huber and Panagiotou [Electron. J. Combin. 2009] showed that after $\log_2 n + \ln n + o(\log n)$ rounds, the rumor will have been broadcast to all nodes with probability $1 - o(1)$.
We study the broadcast time when the amount of randomness available at each node is reduced in a natural way. In particular, we prove that if each node can only make its initial random selection from every $k$-th node on its list, then there exist lists such that asymptotically more rounds are needed to inform every vertex with high probability. This shows that a further reduction of the amount of randomness used in this simple quasi-random protocol comes at a loss of efficiency.
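The protocol just described is concrete enough to simulate. Below is a minimal sketch of ours (function names are hypothetical) of quasi-random rumor spreading on the complete graph: each vertex stores a cyclic list of the other vertices, picks a uniformly random starting position when it becomes informed, and then informs one neighbor per round in list order:

```python
import random

def quasirandom_rumor_spreading(n, seed=None):
    """Simulate quasi-random rumor spreading on the complete graph K_n.
    Each node has a cyclic list of the other nodes; once informed, it
    picks a uniformly random start position in its list and then informs
    neighbors in list order, one per round.  Returns the number of rounds
    until all nodes are informed."""
    rng = random.Random(seed)
    lists = {v: [u for u in range(n) if u != v] for v in range(n)}
    pointer = {0: rng.randrange(n - 1)}   # next list position per informed node
    informed = {0}                        # node 0 originates the rumor
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = []
        for v in list(informed):
            u = lists[v][pointer[v] % (n - 1)]   # cyclic list traversal
            pointer[v] += 1
            if u not in informed:
                newly.append(u)
        for u in newly:                   # new nodes start in the next round
            informed.add(u)
            pointer[u] = rng.randrange(n - 1)
    return rounds
```

Since the number of informed nodes can at most double per round, at least $\log_2 n$ rounds are always needed; averaging this simulation over many seeds reproduces the $\log_2 n + \ln n$ scaling.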
Simple and Optimal Randomized Fault-Tolerant Rumor Spreading
We revisit the classic problem of spreading a piece of information in a group
of fully connected processors. By suitably adding a small dose of
randomness to the protocol of Gasieniec and Pelc (1996), we derive for the first
time protocols that (i) use a linear number of messages, (ii) are correct even
when an arbitrary number of adversarially chosen processors does not
participate in the process, and (iii) with high probability have the
asymptotically optimal runtime of $O(\log n)$ when at least an arbitrarily
small constant fraction of the processors are working. In addition, our
protocols do not require that the system be synchronized nor that all
processors be simultaneously woken up at time zero; they are fully based on
push operations, and they do not need an a priori estimate of the number of
failed nodes.
Our protocols thus overcome the typical disadvantages of the two known
approaches, algorithms based on random gossip (typically needing a large number
of messages due to their unorganized nature) and algorithms based on fair
workload splitting (which are either not time-efficient or require intricate preprocessing steps plus synchronization).
Comment: This is the author-generated version of a paper which is to appear in Distributed Computing, Springer, DOI: 10.1007/s00446-014-0238-z. It is available online from http://link.springer.com/article/10.1007/s00446-014-0238-z. This version contains some new results (Section 6).
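For contrast with the random-gossip approach mentioned above, here is a minimal sketch of ours (the classic fully random push protocol, not the paper's protocol; names are hypothetical) in which every informed working processor pushes the rumor to one uniformly random processor per round, while adversarially chosen failed processors simply drop all messages:

```python
import random

def push_broadcast(n, failed, seed=None):
    """Classic random push gossip on n fully connected processors, some of
    which (the set `failed`) never participate.  Each round, every informed
    working processor pushes the rumor to one uniformly random processor.
    Returns the number of rounds until all working processors are informed."""
    rng = random.Random(seed)
    working = [v for v in range(n) if v not in failed]
    assert 0 in working, "the rumor's origin must be a working processor"
    informed = {0}
    rounds = 0
    while len(informed) < len(working):
        rounds += 1
        targets = [rng.randrange(n) for _ in informed]  # one push per informed node
        for u in targets:
            if u not in failed:          # failed processors drop the message
                informed.add(u)
    return rounds
```

This baseline tolerates arbitrary failed sets but sends one message per informed node per round, illustrating the "large number of messages" drawback of unorganized gossip that the paper's protocols avoid.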
An Exponential Lower Bound for the Runtime of the cGA on Jump Functions
In the first runtime analysis of an estimation-of-distribution algorithm
(EDA) on the multi-modal jump function class, Hasenöhrl and Sutton (GECCO
2018) proved that the runtime of the compact genetic algorithm with suitable
parameter choice on jump functions with high probability is at most polynomial
(in the dimension) if the jump size is at most logarithmic (in the dimension),
and is at most exponential in the jump size if the jump size is
super-logarithmic. The exponential runtime guarantee was achieved with a
hypothetical population size that is also exponential in the jump size.
Consequently, this setting cannot lead to a better runtime.
In this work, we show that any choice of the hypothetical population size
leads to a runtime that, with high probability, is at least exponential in the
jump size. This result might be the first non-trivial exponential lower bound
for EDAs that holds for arbitrary parameter settings.
Comment: To appear in the Proceedings of FOGA 2019. arXiv admin note: text overlap with arXiv:1903.1098
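As a concrete reference point, the sketch below (our illustration, with hypothetical names) implements the standard compact genetic algorithm with hypothetical population size $K$ on the usual jump function: two offspring are sampled from the frequency vector, and each frequency is shifted by $1/K$ toward the fitter one, with the customary caps at $1/n$ and $1-1/n$:

```python
import random

def jump(x, k):
    """Jump_k fitness: OneMax shifted by k, with a fitness valley of
    width k just below the all-ones optimum."""
    n, o = len(x), sum(x)
    if o == n or o <= n - k:
        return k + o
    return n - o

def cga(n, K, k, max_iters=200000, rng=None):
    """Compact genetic algorithm with hypothetical population size K.
    Samples two offspring per iteration from the frequency vector p and
    shifts each differing frequency by 1/K toward the fitter offspring;
    frequencies are capped to [1/n, 1 - 1/n] as usual in runtime analyses.
    Returns the iteration in which the optimum was first sampled."""
    rng = rng or random.Random()
    p = [0.5] * n
    for it in range(max_iters):
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if jump(x, k) < jump(y, k):
            x, y = y, x                       # x is now the fitter offspring
        for i in range(n):
            if x[i] != y[i]:
                p[i] += (1 / K) if x[i] == 1 else -(1 / K)
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))
        if sum(x) == n:
            return it + 1                     # optimum sampled
    return max_iters
```

With $k = 1$ the valley is empty and the function is effectively OneMax, so the cGA succeeds quickly; the paper's lower bound concerns the genuinely multi-modal case of larger jump sizes, where runs like this stall for a time exponential in $k$ regardless of $K$.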
