Warmstarting of Model-based Algorithm Configuration
The performance of many hard combinatorial problem solvers depends strongly
on their parameter settings, and since manual parameter tuning is both tedious
and suboptimal, the AI community has recently developed several algorithm
configuration (AC) methods to address this problem automatically. While all
existing AC methods start the configuration process of an algorithm A from
scratch for each new type of benchmark instance, here we propose to exploit
information about A's performance on previous benchmarks in order to warmstart
its configuration on new types of benchmarks. We introduce two complementary
ways in which we can exploit this information to warmstart AC methods based on
a predictive model. Experiments on optimizing a highly flexible modern SAT
solver on twelve different instance sets show that our methods often yield
substantial speedups over existing AC methods (up to 165-fold) and can also
find substantially better configurations given the same compute budget.
Comment: Preprint of AAAI'18 paper
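The abstract does not spell out the warmstarting mechanics, but the two complementary ideas it names, reusing previously found configurations and reusing previous performance data to seed the predictive model, can be illustrated with a minimal sketch. Everything below (the random-forest surrogate, the `evaluate` callback, the unit-cube configuration space) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def warmstarted_ac(old_configs, old_costs, evaluate, dim, n_iters=50, seed=0):
    """Model-based AC loop warmstarted in two complementary ways:
    (a) the incumbent starts from the best previous configuration, and
    (b) the surrogate model is also fitted on previous performance data."""
    rng = np.random.default_rng(seed)
    X, y = list(old_configs), list(old_costs)           # (b) reuse old observations
    incumbent = old_configs[int(np.argmin(old_costs))]  # (a) reuse best old config
    best_cost = evaluate(incumbent)
    X.append(incumbent); y.append(best_cost)
    for _ in range(n_iters):
        surrogate = RandomForestRegressor(n_estimators=20).fit(np.array(X), np.array(y))
        candidates = rng.uniform(0.0, 1.0, size=(100, dim))  # assumed config space
        challenger = candidates[int(np.argmin(surrogate.predict(candidates)))]
        cost = evaluate(challenger)
        X.append(challenger); y.append(cost)
        if cost < best_cost:
            incumbent, best_cost = challenger, cost
    return incumbent, best_cost
```

Without the warmstart, `X` and `y` would start empty and the first incumbent would be a random configuration; both reuse steps simply give the model-based loop a non-empty starting state.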
Neural Networks for Predicting Algorithm Runtime Distributions
Many state-of-the-art algorithms for solving hard combinatorial problems in
artificial intelligence (AI) include elements of stochasticity that lead to
high variations in runtime, even for a fixed problem instance. Knowledge about
the resulting runtime distributions (RTDs) of algorithms on given problem
instances can be exploited in various meta-algorithmic procedures, such as
algorithm selection, portfolios, and randomized restarts. Previous work has
shown that machine learning can be used to individually predict mean, median
and variance of RTDs. To establish a new state-of-the-art in predicting RTDs,
we demonstrate that the parameters of an RTD should be learned jointly and that
neural networks can do this well by directly optimizing the likelihood of an
RTD given runtime observations. In an empirical study involving five algorithms
for SAT solving and AI planning, we show that neural networks predict the true
RTDs of unseen instances better than previous methods, and can even do so when
only a few runtime observations are available per training instance.
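As a rough illustration of the core idea, learning the parameters of an RTD jointly by directly optimizing the likelihood of observed runtimes, here is a minimal sketch using a lognormal RTD; the architecture, feature dimension, and distribution family are assumptions for the example, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class RTDNet(nn.Module):
    """Maps instance features to the parameters of a lognormal RTD."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, 1)         # location of log-runtime
        self.log_sigma = nn.Linear(hidden, 1)  # log-scale, kept positive via exp

    def forward(self, x):
        h = self.body(x)
        return self.mu(h).squeeze(-1), self.log_sigma(h).squeeze(-1).exp()

def rtd_nll(mu, sigma, runtimes):
    """Negative log-likelihood of observed runtimes under the predicted RTD.
    Both parameters enter one objective, so they are learned jointly rather
    than predicted by separate models for mean and variance."""
    return -torch.distributions.LogNormal(mu, sigma).log_prob(runtimes).mean()

# Usage sketch: features X of shape (n, d), observed runtimes t of shape (n,)
# model = RTDNet(d)
# opt = torch.optim.Adam(model.parameters())
# loss = rtd_nll(*model(X), t); loss.backward(); opt.step()
```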
On the Effective Configuration of Planning Domain Models
The development of domain-independent planners within the AI Planning community is leading to "off the shelf" technology that can be used in a wide range of applications. Moreover, it allows a modular approach, in which planners and domain knowledge are modules of larger software applications, that facilitates substitutions or improvements of individual modules without changing the rest of the system. This approach also supports the use of reformulation and configuration techniques, which transform how a model is represented in order to improve the efficiency of plan generation.
In this paper, we investigate how the performance of planners is affected by domain model configuration. We introduce a fully automated method for this configuration task, and show in an extensive experimental analysis with six planners and seven domains that this process (which can, in principle, be combined with other forms of reformulation and configuration) can have a remarkable impact on performance across planners. Furthermore, studying the obtained domain model configurations can provide useful information to effectively engineer planning domain models.
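As a toy stand-in for such a fully automated configuration method, the sketch below randomly samples reorderings of a domain model's operator definitions and keeps the ordering on which a planner solves a problem fastest; the planner command, the `write_domain` serializer, and random search itself are placeholder assumptions, not the paper's actual method:

```python
import random
import subprocess
import time

def plan_time(domain_file, problem_file, planner_cmd, timeout=60.0):
    """Wall-clock time for one planner run (a timeout counts as the full budget)."""
    start = time.monotonic()
    try:
        subprocess.run(planner_cmd + [domain_file, problem_file],
                       capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return timeout
    return time.monotonic() - start

def configure_domain(operators, write_domain, problem_file, planner_cmd, n_samples=20):
    """Random search over operator orderings; keep the fastest-solving one."""
    best_order, best_time = None, float("inf")
    for _ in range(n_samples):
        order = random.sample(operators, len(operators))
        domain_file = write_domain(order)  # caller serializes the reordered model
        t = plan_time(domain_file, problem_file, planner_cmd)
        if t < best_time:
            best_order, best_time = order, t
    return best_order, best_time
```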
