Allocation in Practice
How do we allocate scarce resources? How do we fairly allocate costs? These
are two pressing challenges facing society today. I discuss two recent projects
at NICTA concerning resource and cost allocation. In the first, we have been
working with FoodBank Local, a social startup working in collaboration with
food bank charities around the world to optimise the logistics of collecting
and distributing donated food. Before we can distribute this food, we must
decide how to allocate it to different charities and food kitchens. This gives
rise to a fair division problem with several new dimensions, rarely considered
in the literature. In the second, we have been looking at cost allocation
within the distribution network of a large multinational company. This also has
several new dimensions rarely considered in the literature.
Comment: To appear in Proc. of the 37th German Conference on Artificial Intelligence (KI 2014), Springer LNCS.
An Empirical Analysis of Search in GSAT
We describe an extensive study of search in GSAT, an approximation procedure
for propositional satisfiability. GSAT performs greedy hill-climbing on the
number of satisfied clauses in a truth assignment. Our experiments provide a
more complete picture of GSAT's search than previous accounts. We describe in
detail the two phases of search: rapid hill-climbing followed by a long plateau
search. We demonstrate that when applied to randomly generated 3SAT problems,
there is a very simple scaling with problem size for both the mean number of
satisfied clauses and the mean branching rate. Our results allow us to make
detailed numerical conjectures about the length of the hill-climbing phase and
the average gradient of this phase, and to conjecture that both the average
score and average branching rate decay exponentially during plateau search. We
end by
showing how these results can be used to direct future theoretical analysis.
This work provides a case study of how computer experiments can be used to
improve understanding of the theoretical properties of algorithms.
Comment: See http://www.jair.org/ for any accompanying files.
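The procedure the abstract studies is easy to state. Below is a minimal Python sketch of GSAT (an illustration, not the authors' experimental code): repeated random restarts, and at each step a greedy flip of whichever variable yields the highest resulting number of satisfied clauses, which allows sideways moves on plateaus.

    import random

    def gsat(clauses, n_vars, max_flips=1000, max_tries=10):
        """GSAT sketch: greedy hill-climbing on the number of satisfied clauses.

        clauses: list of clauses; each clause is a list of non-zero ints,
        where literal v means variable |v| is true and -v means it is false.
        """
        def score(assign):
            return sum(any((lit > 0) == assign[abs(lit)] for lit in clause)
                       for clause in clauses)

        for _ in range(max_tries):
            # Each try starts from a fresh random truth assignment (slot 0 unused).
            assign = [random.random() < 0.5 for _ in range(n_vars + 1)]
            for _ in range(max_flips):
                if score(assign) == len(clauses):
                    return assign          # all clauses satisfied
                # Greedy move: flip the variable giving the best resulting score;
                # on a plateau this becomes a sideways move.
                best_var, best = None, -1
                for v in range(1, n_vars + 1):
                    assign[v] = not assign[v]
                    s = score(assign)
                    assign[v] = not assign[v]
                    if s > best:
                        best_var, best = v, s
                assign[best_var] = not assign[best_var]
        return None                        # give up; the formula may still be satisfiable

Instrumenting the inner loop to record the score after each flip reproduces the two phases the abstract describes: a short, steep climb followed by a long plateau.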
Trying again to fail-first
For constraint satisfaction problems (CSPs), Haralick and Elliott [1] introduced the Fail-First Principle and defined it in terms of minimizing branch depth. By devising a range of variable ordering heuristics, each in turn trying harder to fail first, Smith and Grant [2] showed that adherence to this strategy does not guarantee a reduction in search effort. The present work builds on Smith and Grant. It benefits from the development of a new framework for characterizing heuristic performance that defines two policies, one concerned with enhancing the likelihood of correctly extending a partial solution, the other with minimizing the effort to prove insolubility. The Fail-First Principle can be restated as calling for adherence to the second, fail-first policy, while discounting the other, promise policy. Our work corrects some deficiencies in the work of Smith and Grant, and goes on to confirm their finding that the Fail-First Principle, as originally defined, is insufficient. We then show that adherence to the fail-first policy must be measured in terms of the size of insoluble subtrees, not branch depth. We also show that for soluble problems, both policies must be considered in evaluating heuristic performance. Hence, even in its proper form the Fail-First Principle is insufficient. We also show that the “FF” series of heuristics devised by Smith and Grant is a powerful tool for evaluating heuristic performance, including the subtle relations between heuristic features and adherence to a policy.
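For readers unfamiliar with the heuristics under discussion, here is a minimal Python sketch (an illustration with hypothetical names, not the paper's experimental framework) of backtracking search with smallest-domain-first ordering, the simplest heuristic in the fail-first family. Real implementations pair it with propagation such as forward checking so that domain sizes actually shrink during search.

    def consistent(var, value, assignment, constraints):
        """Check `value` for `var` against every constraint whose other
        variable is already assigned."""
        for x, y, pred in constraints:
            if x == var and y in assignment and not pred(value, assignment[y]):
                return False
            if y == var and x in assignment and not pred(assignment[x], value):
                return False
        return True

    def solve(domains, constraints, assignment=None):
        """Backtracking search branching on the unassigned variable with the
        fewest remaining values (smallest-domain-first, a fail-first heuristic)."""
        if assignment is None:
            assignment = {}
        if len(assignment) == len(domains):
            return dict(assignment)
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))
        for value in domains[var]:
            if consistent(var, value, assignment, constraints):
                assignment[var] = value
                result = solve(domains, constraints, assignment)
                if result is not None:
                    return result
                del assignment[var]
        return None

    # Toy usage: 4-queens, one variable per column, values are rows.
    n = 4
    domains = {c: set(range(n)) for c in range(n)}
    constraints = [(i, j, lambda a, b, d=j - i: a != b and abs(a - b) != d)
                   for i in range(n) for j in range(i + 1, n)]
    print(solve(domains, constraints))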
On The Complexity and Completeness of Static Constraints for Breaking Row and Column Symmetry
We consider a common type of symmetry where we have a matrix of decision
variables with interchangeable rows and columns. A simple and efficient method
to deal with such row and column symmetry is to post symmetry breaking
constraints like DOUBLELEX and SNAKELEX. We provide a number of positive and
negative results on posting such symmetry breaking constraints. On the positive
side, we prove that we can compute in polynomial time a unique representative
of an equivalence class in a matrix model with row and column symmetry if the
number of rows (or of columns) is bounded and in a number of other special
cases. On the negative side, we show that whilst DOUBLELEX and SNAKELEX are
often effective in practice, they can leave a large number of symmetric
solutions in the worst case. In addition, we prove that propagating DOUBLELEX
completely is NP-hard. Finally we consider how to break row, column and value
symmetry, correcting a result in the literature about the safeness of combining
different symmetry breaking constraints. We end with the first experimental
study on how much symmetry is left by DOUBLELEX and SNAKELEX on some benchmark
problems.
Comment: To appear in the Proceedings of the 16th International Conference on Principles and Practice of Constraint Programming (CP 2010).
Scalable Parallel Numerical Constraint Solver Using Global Load Balancing
We present a scalable parallel solver for numerical constraint satisfaction
problems (NCSPs). Our parallelization scheme consists of homogeneous worker
solvers, each of which runs on an available core and communicates with others
via the global load balancing (GLB) method. The parallel solver is implemented
in X10, which provides an implementation of GLB as a library. In experiments,
several NCSPs from the literature were solved, and the solver attained up to
516-fold speedup on 600 cores of the TSUBAME2.5 supercomputer.
Comment: To be presented at the X10'15 Workshop.
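The worker scheme is easy to picture with a toy branch-and-prune loop. The Python sketch below is an illustration only, with hypothetical names; the actual solver is written in X10, uses interval contractors, and balances load with the GLB library rather than a shared queue. Each worker pulls a box, discards it if an interval test rules out a solution, and otherwise splits it or reports it.

    import multiprocessing as mp

    def may_contain_zero(lo, hi):
        """Interval test for the toy constraint x^2 - 2 = 0 on [lo, hi],
        assuming 0 <= lo <= hi so that x^2 is monotone on the box."""
        return lo * lo - 2.0 <= 0.0 <= hi * hi - 2.0

    def worker(tasks, results, eps):
        # Branch and prune: discard, report, or bisect each box.
        while True:
            box = tasks.get()
            if box is None:               # poison pill: shut down
                tasks.task_done()
                return
            lo, hi = box
            if may_contain_zero(lo, hi):
                if hi - lo < eps:
                    results.put(box)      # small enough: report as a solution box
                else:
                    mid = 0.5 * (lo + hi)
                    tasks.put((lo, mid))  # split and requeue both halves
                    tasks.put((mid, hi))
            tasks.task_done()

    if __name__ == "__main__":
        tasks, results = mp.JoinableQueue(), mp.Queue()
        tasks.put((0.0, 2.0))
        procs = [mp.Process(target=worker, args=(tasks, results, 1e-6))
                 for _ in range(4)]
        for p in procs:
            p.start()
        tasks.join()                      # every box processed
        for _ in procs:
            tasks.put(None)
        for p in procs:
            p.join()
        while not results.empty():
            print(results.get())          # tiny boxes enclosing sqrt(2)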
Pathway choice in DNA double strand break repair: Observations of a balancing act
Proper repair of DNA double strand breaks (DSBs) is vital for the preservation of genomic integrity. There are two main pathways that repair DSBs: homologous recombination (HR) and non-homologous end-joining (NHEJ). HR is restricted to the S and G2 phases of the cell cycle because it requires the sister chromatid as a template, while NHEJ is active throughout the cell cycle and does not rely on a template. The balance between the two pathways is essential for genome stability, and numerous assays have been developed to measure their efficiency. Several proteins are known to affect the balance between HR and NHEJ, and the complexity of the break also plays a role. In this review we describe several repair assays for determining the efficiencies of both pathways. We discuss how disturbing the balance between HR and NHEJ can lead to disease, but also how it can be exploited for cancer treatment.
Random Costs in Combinatorial Optimization
The random cost problem is the problem of finding the minimum in an
exponentially long list of random numbers. By definition, this problem cannot
be solved faster than by exhaustive search. It is shown that a classical
NP-hard optimization problem, number partitioning, is essentially equivalent to
the random cost problem. This explains the poor performance of heuristic
approaches to the number partitioning problem and allows us to calculate the
probability distributions of the optimum and sub-optimum costs.
Comment: 4 pages, RevTeX, 2 figures (eps), submitted to PRL.
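The equivalence is easy to probe numerically. In the sketch below (illustrative only; names are mine), the 2^n signed sums of an instance are enumerated by brute force; the paper's claim is that this list behaves statistically like a list of independent random numbers, which is why heuristics cannot do better than blind sampling here.

    import random
    from itertools import product

    def partition_costs(numbers):
        """All costs |sum over S - sum over complement| of an instance,
        one per sign vector (each subset S yields one signed sum)."""
        return sorted(abs(sum(s * a for s, a in zip(signs, numbers)))
                      for signs in product((-1, 1), repeat=len(numbers)))

    random.seed(0)
    numbers = [random.randrange(2 ** 20) for _ in range(16)]
    costs = partition_costs(numbers)
    print(costs[:5])    # optimum and nearest sub-optimum costs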
Phase Transition in the Number Partitioning Problem
Number partitioning is an NP-complete problem of combinatorial optimization.
A statistical mechanics analysis reveals the existence of a phase transition
that separates the easy instances from the hard ones and reflects the
pseudo-polynomiality of number partitioning. The phase diagram and the value of
the typical ground state energy are calculated.
Comment: minor changes (references, typos and discussion of results).
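The transition can be seen in a small experiment. Under the standard random model (n integers drawn uniformly from [1, 2^m]), the control parameter is m/n, and instances go from almost surely having a perfect partition to almost surely not near m/n = 1 (up to finite-size corrections). The sketch below (illustrative; parameters chosen only to run quickly) computes optimal costs with the pseudo-polynomial subset-sum dynamic program, the same pseudo-polynomiality the transition reflects.

    import random

    def best_cost(numbers):
        """Optimum partition cost via the pseudo-polynomial subset-sum DP:
        bit s of `reachable` marks an achievable subset sum, and by taking
        complements the best cost is total - 2s for the largest reachable
        s <= total // 2."""
        total, reachable = sum(numbers), 1
        for a in numbers:
            reachable |= reachable << a
        s = total // 2
        while not (reachable >> s) & 1:
            s -= 1
        return total - 2 * s

    random.seed(1)
    n, trials = 12, 100
    for m in (4, 8, 12, 16, 20):
        perfect = sum(
            best_cost([random.randrange(1, 2 ** m) for _ in range(n)]) <= 1
            for _ in range(trials))
        print(f"m/n = {m / n:.2f}: perfect partition in {perfect}% of instances")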
New scaling for the alpha effect in slowly rotating turbulence
Using simulations of slowly rotating stratified turbulence, we show that the
alpha effect responsible for the generation of astrophysical magnetic fields is
proportional to the logarithmic gradient of kinetic energy density rather than
that of momentum, as was previously thought. This result is in agreement with a
new analytic theory developed in this paper for large Reynolds numbers. Thus,
the contribution of density stratification is less important than that of
turbulent velocity. The alpha effect and other turbulent transport coefficients
are determined by means of the test-field method. In addition to forced
turbulence, we also investigate supernova-driven turbulence and stellar
convection. In some cases (intermediate rotation rate for forced turbulence,
convection with intermediate temperature stratification, and supernova-driven
turbulence) we find that the contribution of density stratification might be
even less important than suggested by the analytic theory.
Comment: 10 pages, 9 figures, revised version, Astrophys. J., in press.
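In symbols, with notation assumed here rather than taken from the paper (rho the density, u_rms the turbulent velocity), the abstract's central claim contrasts two scalings:

    \alpha \propto \nabla \ln(\rho\, u_{\mathrm{rms}})      % previously assumed: alpha traces the momentum density
    \alpha \propto \nabla \ln(\rho\, u_{\mathrm{rms}}^{2})  % new result: alpha traces the kinetic energy density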
Theory Learning with Symmetry Breaking
This paper investigates the use of a Prolog-coded SMT solver in tackling a well-known constraint problem, namely packing a given set of consecutive squares into a given rectangle, and details the developments in the solver that this motivates. The packing problem has a natural model in the theory of quantifier-free integer difference logic, a theory supported by many SMT solvers. The solver used in this work exploits a data structure consisting of an incremental Floyd-Warshall matrix paired with a watch matrix that monitors the entailment status of integer difference constraints. It is shown how this structure can be used to build unsatisfiable theory cores on the fly, which in turn allows theory learning to be incorporated into the solver. Further, it is shown that a problem-specific and non-standard approach to learning can be taken, in which symmetry breaking is incorporated into the learning stage, magnifying the effect of learning. It is argued that the declarative framework allows the solver to be used in this white-box manner and is a strength of the solver. The approach is experimentally evaluated.
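The central data structure is an all-pairs bound matrix over integer difference constraints. The Python sketch below is a stripped-down analogue (not the Prolog solver itself, and without the paired watch matrix): asserting x - y <= c either detects entailment from the current bounds, closes a negative cycle and reports unsatisfiability, or tightens the matrix with one Floyd-Warshall-style relaxation pass.

    INF = float("inf")

    class DifferenceMatrix:
        """Incremental consistency for conjunctions of constraints x - y <= c,
        kept as the matrix of tightest derived bounds d[x][y] on x - y."""

        def __init__(self, n):
            self.d = [[0 if i == j else INF for j in range(n)] for i in range(n)]

        def add(self, x, y, c):
            """Assert x - y <= c; return False (matrix untouched) if the
            constraint closes a negative cycle, i.e. the theory is unsat."""
            n, d = len(self.d), self.d
            if d[y][x] + c < 0:
                return False        # y - x <= d[y][x] plus x - y <= c: contradiction
            if d[x][y] <= c:
                return True         # already entailed by current bounds
            # One incremental Floyd-Warshall pass through the new edge x -> y.
            for i in range(n):
                for j in range(n):
                    via = d[i][x] + c + d[y][j]
                    if via < d[i][j]:
                        d[i][j] = via
            return True

    m = DifferenceMatrix(3)
    print(m.add(0, 1, 2))    # x0 - x1 <= 2                  -> True
    print(m.add(1, 2, -1))   # x1 - x2 <= -1                 -> True
    print(m.add(2, 0, -2))   # closes a negative cycle       -> False

Entailment tests like d[x][y] <= c are, roughly, what the paper's watch matrix monitors, and the constraints on a detected negative cycle supply an unsatisfiable theory core of the kind the learning stage consumes.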
