2,070 research outputs found

    Particle filter-based Gaussian process optimisation for parameter inference

    Full text link
    We propose a novel method for maximum likelihood-based parameter inference in nonlinear and/or non-Gaussian state space models. The method is an iterative procedure with three steps. At each iteration a particle filter is used to estimate the value of the log-likelihood function at the current parameter iterate. Using these log-likelihood estimates, a surrogate objective function is created by utilizing a Gaussian process model. Finally, we use a heuristic procedure to obtain a revised parameter iterate, providing an automatic trade-off between exploration and exploitation of the surrogate model. The method is profiled on two state space models and shows good performance in terms of both accuracy and computational cost.
    Comment: Accepted for publication in the proceedings of the 19th World Congress of the International Federation of Automatic Control (IFAC), Cape Town, South Africa, August 2014. 6 pages, 4 figures.
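
    A minimal sketch of the three-step loop in Python, under stated assumptions: a user-supplied particle-filter log-likelihood estimator pf_loglik, a scalar parameter searched over a finite grid theta_grid, and expected improvement as a stand-in for the paper's exploration/exploitation heuristic. This is illustrative, not the authors' implementation.

        # Sketch: particle-filter log-likelihood estimates feed a Gaussian process
        # surrogate; an acquisition rule picks the next parameter iterate.
        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def gp_optimise_loglik(pf_loglik, theta_grid, n_init=3, n_iter=20, seed=0):
            """pf_loglik(theta) -> noisy particle-filter estimate of log p(y | theta)."""
            rng = np.random.default_rng(seed)
            thetas = list(rng.choice(theta_grid, size=n_init, replace=False))
            lls = [pf_loglik(t) for t in thetas]
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
            for _ in range(n_iter):
                gp.fit(np.asarray(thetas).reshape(-1, 1), np.asarray(lls))
                mu, sigma = gp.predict(theta_grid.reshape(-1, 1), return_std=True)
                best = max(lls)
                # Expected improvement trades off exploring uncertain regions
                # against exploiting the current surrogate maximum.
                z = (mu - best) / np.maximum(sigma, 1e-12)
                ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
                theta_next = float(theta_grid[np.argmax(ei)])
                thetas.append(theta_next)
                lls.append(pf_loglik(theta_next))
            return thetas[int(np.argmax(lls))]  # iterate with the best estimated log-likelihood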

    Quasi-Newton particle Metropolis-Hastings

    Full text link
    Particle Metropolis-Hastings enables Bayesian parameter inference in general nonlinear state space models (SSMs). However, in many implementations a random walk proposal is used, and this can result in poor mixing if not tuned correctly using tedious pilot runs. We therefore consider a new proposal inspired by quasi-Newton algorithms that may achieve similar (or better) mixing with less tuning. An advantage compared with other Hessian-based proposals is that it only requires estimates of the gradient of the log-posterior. A possible application is parameter inference in the challenging class of SSMs with intractable likelihoods. We exemplify this application and the benefits of the new proposal by modelling log-returns of futures contracts on coffee using a stochastic volatility model with α-stable observations.
    Comment: 23 pages, 5 figures. Accepted for the 17th IFAC Symposium on System Identification (SYSID), Beijing, China, October 2015.
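
    A sketch of the kind of proposal involved, under broad assumptions: the inverse Hessian of the negative log-posterior is built up from successive noisy gradient estimates via a plain BFGS recursion (which need not match the authors' exact update or damping), and the proposal is a Gaussian centred at a quasi-Newton step. The names bfgs_update, qn_proposal and grad_logpost are hypothetical.

        # Sketch: quasi-Newton style proposal requiring only gradient estimates.
        import numpy as np

        def bfgs_update(H_inv, d_theta, d_grad):
            """BFGS update of the inverse Hessian of the negative log-posterior,
            driven by the change in parameters and in the negative gradient."""
            curv = d_grad @ d_theta
            if curv <= 0:                       # skip the update if curvature is not positive
                return H_inv
            rho = 1.0 / curv
            I = np.eye(len(d_theta))
            V = I - rho * np.outer(d_theta, d_grad)
            return V @ H_inv @ V.T + rho * np.outer(d_theta, d_theta)

        def qn_proposal(rng, theta, grad_logpost, H_inv, step=0.5):
            """Gaussian proposal centred at a quasi-Newton step up the log-posterior."""
            mean = theta + 0.5 * step * H_inv @ grad_logpost
            cov = step * H_inv
            return rng.multivariate_normal(mean, cov)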

    Sequential Kernel Herding: Frank-Wolfe Optimization for Particle Filtering

    Get PDF
    Recently, the Frank-Wolfe optimization algorithm was suggested as a procedure to obtain adaptive quadrature rules for integrals of functions in a reproducing kernel Hilbert space (RKHS) with a potentially faster rate of convergence than Monte Carlo integration (and "kernel herding" was shown to be a special case of this procedure). In this paper, we propose to replace the random sampling step in a particle filter by Frank-Wolfe optimization. By optimizing the position of the particles, we can obtain better accuracy than random or quasi-Monte Carlo sampling. In applications where the evaluation of the emission probabilities is expensive (such as in robot localization), the additional computational cost of generating the particles through optimization can be justified. Experiments on standard synthetic examples as well as on a robot localization task indeed indicate an improvement in accuracy over random and quasi-Monte Carlo sampling.
    Comment: In the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2015, San Diego, United States. JMLR Workshop and Conference Proceedings, vol. 38.
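
    A sketch of the deterministic particle-placement idea, assuming a Gaussian kernel, a candidate pool drawn from the proposal, and the target mean embedding approximated from the previous weighted particle set. This is the equal-weight (kernel herding) special case rather than the full Frank-Wolfe scheme of the paper; gauss_kernel and kernel_herding are hypothetical helper names.

        # Sketch: greedily place particles so their empirical mean embedding in the
        # RKHS tracks a weighted reference distribution (herding = FW with step 1/(t+1)).
        import numpy as np

        def gauss_kernel(X, Y, bandwidth=1.0):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / bandwidth**2)

        def kernel_herding(candidates, ref_points, ref_weights, n_select, bandwidth=1.0):
            """Pick n_select points from the candidate pool by kernel herding."""
            # <phi(x), mu_p> approximated from the weighted reference particles.
            mu = gauss_kernel(candidates, ref_points, bandwidth) @ ref_weights
            K = gauss_kernel(candidates, candidates, bandwidth)
            selected, running = [], np.zeros(len(candidates))
            for t in range(n_select):
                scores = mu - running / (t + 1)   # herding objective at step t + 1
                i = int(np.argmax(scores))
                selected.append(i)
                running += K[:, i]
            return candidates[selected]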

    “Tax Simplification”—Grave Threat to the Charitable Contribution Deduction: The Problem and a Proposed Solution

    Get PDF
    The present National Administration has continued to support proposed legislative changes aimed at substantially reducing the number of income tax returns in which deductions are itemized. The author contends that these tax simplification proposals are incompatible with the preservation of the charitable contribution deduction and would undermine the position of voluntary charitable organizations by reducing the incentives for giving. He proposes a solution to this dilemma by promoting the charitable contribution deduction, with certain limitations, to the position of a deduction from gross income, rather than a deduction from adjusted gross income.

    Particle Metropolis-Hastings using gradient and Hessian information

    Full text link
    Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in nonlinear state space models by combining Markov chain Monte Carlo (MCMC) and particle filtering. The latter is used to estimate the intractable likelihood. In its original formulation, PMH makes use of a marginal MCMC proposal for the parameters, typically a Gaussian random walk. However, this can lead to a poor exploration of the parameter space and an inefficient use of the generated particles. We propose a number of alternative versions of PMH that incorporate gradient and Hessian information about the posterior into the proposal. This information is essentially obtained as a byproduct of the likelihood estimation. In particular, we show how to estimate the required information using a fixed-lag particle smoother, with a computational cost growing linearly in the number of particles. We conclude that the proposed methods can: (i) decrease the length of the burn-in phase, (ii) increase the mixing of the Markov chain in the stationary phase, and (iii) make the proposal distribution scale invariant, which simplifies tuning.
    Comment: 27 pages, 5 figures, 2 tables. The final publication is available at Springer via http://dx.doi.org/10.1007/s11222-014-9510-
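
    A sketch of the score (gradient) estimation step only, assuming a particle filter sweep that has stored particles, normalised weights and ancestor indices, and a user-supplied gradient of the complete-data log-density grad_complete(x_prev, x_curr, y, theta); the Fisher identity is combined with a fixed-lag approximation so the cost stays linear in the number of particles. Array layout and names are hypothetical.

        # Sketch: gradient of the log-likelihood via the Fisher identity with a
        # fixed-lag particle smoother (the initial-state term is omitted for brevity).
        import numpy as np

        def fixed_lag_score(particles, weights, ancestors, ys, theta, grad_complete, lag=5):
            """particles[t, i], weights[t, i], ancestors[t, i] come from one particle
            filter sweep; returns an estimate of grad log p(y_{1:T} | theta)."""
            T, N = particles.shape[0], particles.shape[1]
            score = 0.0
            for t in range(1, T):
                s = min(t + lag, T - 1)          # fixed-lag horizon whose weights are reused
                idx = np.arange(N)
                for u in range(s, t, -1):        # trace each time-s particle back to time t
                    idx = ancestors[u, idx]
                g = np.array([grad_complete(particles[t - 1, ancestors[t, j]],
                                            particles[t, j], ys[t], theta)
                              for j in idx])
                score = score + weights[s] @ g   # weight by the time-s filtering weights
            return score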

    Capacity estimation of two-dimensional channels using Sequential Monte Carlo

    Full text link
    We derive a new Sequential-Monte-Carlo-based algorithm to estimate the capacity of two-dimensional channel models. The focus is on computing the noiseless capacity of the two-dimensional (1, ∞) run-length limited constrained channel, but the underlying idea is generally applicable. The proposed algorithm is profiled against a state-of-the-art method, yielding more than an order of magnitude improvement in estimation accuracy for a given computation time.
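
    As a rough illustration of the underlying idea (not the authors' algorithm), the sketch below adds one grid cell per SMC step, uses the number of admissible cell values as the incremental weight, and multiplies per-step averages to estimate the number of valid configurations, from which a capacity estimate in bits per site follows. The function name rll_capacity_smc is hypothetical.

        # Sketch: SMC estimate of the number of m-by-n binary grids with no two
        # adjacent ones, and hence of the noiseless (1, inf) RLL capacity.
        import numpy as np

        def rll_capacity_smc(m, n, n_particles=1000, seed=0):
            rng = np.random.default_rng(seed)
            grids = np.zeros((n_particles, m, n), dtype=np.int8)
            log_z = 0.0
            for r in range(m):
                for c in range(n):
                    # A one is admissible only if the left and upper neighbours are zero.
                    ok_one = np.ones(n_particles, dtype=bool)
                    if c > 0:
                        ok_one &= grids[:, r, c - 1] == 0
                    if r > 0:
                        ok_one &= grids[:, r - 1, c] == 0
                    n_opts = 1 + ok_one.astype(int)      # 2 admissible values, else 1
                    log_z += np.log(n_opts.mean())       # per-step factor of the count estimate
                    # Resample in proportion to the incremental weights, then draw the
                    # new cell uniformly among its admissible values.
                    idx = rng.choice(n_particles, size=n_particles, p=n_opts / n_opts.sum())
                    grids, ok_one = grids[idx], ok_one[idx]
                    grids[:, r, c] = np.where(ok_one, rng.integers(0, 2, n_particles), 0)
            return log_z / (m * n * np.log(2.0))         # estimated capacity in bits per site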

    Sequential Monte Carlo for Graphical Models

    Full text link
    We propose a new framework for using sequential Monte Carlo (SMC) algorithms for inference in probabilistic graphical models (PGMs). Via a sequential decomposition of the PGM we find a sequence of auxiliary distributions defined on a monotonically increasing sequence of probability spaces. By targeting these auxiliary distributions using SMC we are able to approximate the full joint distribution defined by the PGM. One of the key merits of the SMC sampler is that it provides an unbiased estimate of the partition function of the model. We also show how it can be used within a particle Markov chain Monte Carlo framework to construct high-dimensional block-sampling algorithms for general PGMs.
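
    A minimal sketch for a discrete pairwise model, assuming variables indexed 0..V-1, uniform single-variable proposals and multinomial resampling at every step: each auxiliary target is the product of the factors whose variables have all been instantiated, and the product of the average incremental weights estimates the partition function. The paper treats general PGMs and better proposals; smc_pgm is a hypothetical name.

        # Sketch: SMC over a sequential decomposition of a discrete pairwise model.
        import numpy as np

        def smc_pgm(factors, order, n_states, n_particles=1000, seed=0):
            """factors: dict {(i, j): (n_states, n_states) table} of pairwise potentials;
            order: the sequence in which variables are added."""
            rng = np.random.default_rng(seed)
            x = np.zeros((n_particles, len(order)), dtype=int)
            added = set()
            log_z = 0.0
            for v in order:
                # Proposal: draw the new variable uniformly over its states.
                x[:, v] = rng.integers(0, n_states, n_particles)
                added.add(v)
                # Incremental weight: newly covered factors times the proposal correction.
                log_w = np.full(n_particles, np.log(n_states))
                for (i, j), table in factors.items():
                    if v in (i, j) and i in added and j in added:
                        log_w += np.log(table[x[:, i], x[:, j]])
                log_z += np.log(np.mean(np.exp(log_w)))
                # Multinomial resampling keeps the particle system from degenerating.
                w = np.exp(log_w - log_w.max())
                x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
            return log_z  # estimate of the log partition function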

    Particle Gibbs with Ancestor Sampling

    Full text link
    Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a novel PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.
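
    A sketch of the ancestor-sampling step itself, for the Markovian state-space case and a user-supplied log transition density log_trans_density(x_prev_particles, x_ref, theta) (a hypothetical name): at every time step the reference trajectory's ancestor is redrawn among all current particles instead of being kept fixed as in standard particle Gibbs.

        # Sketch: one ancestor-sampling draw inside a particle Gibbs sweep.
        import numpy as np

        def sample_ancestor(rng, particles_prev, weights_prev, x_ref_t, log_trans_density, theta):
            """Index of the ancestor of the conditioned (reference) state at time t,
            drawn with probability proportional to w_{t-1}^i * f(x_ref_t | x_{t-1}^i)."""
            log_p = np.log(weights_prev) + log_trans_density(particles_prev, x_ref_t, theta)
            p = np.exp(log_p - log_p.max())
            return rng.choice(len(weights_prev), p=p / p.sum())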

    Casi pratici di successo nel licensing di tecnologia (Practical success cases in technology licensing)

    Get PDF
    Lecture given as part of the second half of the seminar "I contratti di licensing di tecnologia, esempi e casi pratici di successo" (Technology licensing contracts: examples and practical success cases). It covered the following topics: practical success cases in technology licensing; the Spheripol process; the Golden Trade case and its licensing strategy; the NicOx case.
    2008-02-20, Sardegna Ricerche, Edificio 2, Località Piscinamanna, 09010 Pula (CA), Italia.