    Particle approximations of the score and observed information matrix for parameter estimation in state space models with linear computational cost

    Poyiadjis et al. (2011) show how particle methods can be used to estimate both the score and the observed information matrix for state space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This paper introduces an alternative approach for estimating these terms at a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation, to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao-Blackwellisation. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters for state space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost. Supplementary materials including code are available. Comment: Accepted to the Journal of Computational and Graphical Statistics.
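As a concrete point of reference, the naive O(N) path-space estimator whose variance grows quadratically with the data length, the problem this paper addresses, can be sketched in a few lines. Everything below is a toy illustration, not the paper's method: the linear-Gaussian model, its parameters, and the particle count are invented for the demo, and the score is accumulated via Fisher's identity along resampled particle paths. Resampling the per-path statistics is exactly the degeneracy the kernel-density step removes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state space model (all numbers invented):
#   x_t = phi * x_{t-1} + w_t,   y_t = x_t + v_t,   w_t, v_t ~ N(0, 1)
phi_true, T = 0.8, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

def particle_score(y, phi, n_particles=500):
    """Naive O(N) path-space estimate of d/dphi log p(y_{1:T} | phi) via
    Fisher's identity. Resampling the alpha statistics along with the
    particles is what makes the estimator's variance grow quadratically
    in the length of the data."""
    xs = rng.normal(size=n_particles)   # initial particles
    alpha = np.zeros(n_particles)       # running per-path score statistics
    for t in range(1, len(y)):
        x_new = phi * xs + rng.normal(size=n_particles)  # propagate
        alpha = alpha + (x_new - phi * xs) * xs   # add d/dphi log f(x_t | x_{t-1})
        logw = -0.5 * (y[t] - x_new) ** 2         # observation log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        xs, alpha = x_new[idx], alpha[idx]
    return alpha.mean()

score_at_truth = particle_score(y, phi_true)
```

Running the estimator repeatedly at increasing T would show the variance growth directly; the paper's kernel-density smoothing of the `alpha` statistics is what keeps it linear instead.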

    Control variates for stochastic gradient MCMC

    It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly with dataset size. A popular class of methods for solving this issue is stochastic gradient MCMC (SGMCMC). These methods use a noisy estimate of the gradient of the log-posterior, which reduces the per-iteration computational cost of the algorithm. Despite this, there are a number of results suggesting that stochastic gradient Langevin dynamics (SGLD), probably the most popular of these methods, still has computational cost proportional to the dataset size. We suggest an alternative log-posterior gradient estimate for stochastic gradient MCMC which uses control variates to reduce the variance. We analyse SGLD using this gradient estimate, and show that, under log-concavity assumptions on the target distribution, the computational cost required for a given level of accuracy is independent of the dataset size. Next we show that a different control variate technique, known as zero variance control variates, can be applied to SGMCMC algorithms for free. This post-processing step improves the inference of the algorithm by reducing the variance of the MCMC output. Zero variance control variates rely on the gradient of the log-posterior; we explore how the variance reduction is affected by replacing this with the noisy gradient estimate calculated by SGMCMC.
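The control-variate gradient estimate described above can be sketched on a toy model. The following is a minimal illustration, not the paper's implementation: the conjugate Gaussian model, prior variance, step size and batch size are all invented for the demo. The key idea is the centred estimate, which adds the full-data gradient at a fixed point `theta_hat` and subtracts that point's contribution from the minibatch term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (all numbers invented): x_i ~ N(theta, 1), prior theta ~ N(0, 10).
N = 10_000
data = rng.normal(1.5, 1.0, size=N)

def grad_log_post(theta, batch):
    """Plain minibatch estimate of the log-posterior gradient."""
    return -theta / 10.0 + (N / len(batch)) * np.sum(batch - theta)

# Control variates: precompute the full-data gradient once at a fixed point
# theta_hat (ideally near the posterior mode), then centre each minibatch
# estimate around it.
theta_hat = data.mean()
full_grad_at_hat = -theta_hat / 10.0 + np.sum(data - theta_hat)

def grad_log_post_cv(theta, batch):
    scale = N / len(batch)
    minibatch_diff = scale * np.sum((batch - theta) - (batch - theta_hat))
    return full_grad_at_hat - theta / 10.0 + theta_hat / 10.0 + minibatch_diff

# SGLD: theta <- theta + (h/2) * grad_estimate + N(0, h) noise.
h, theta, batch_size = 1e-5, 0.0, 100
samples = []
for _ in range(2000):
    idx = rng.integers(0, N, size=batch_size)
    theta += 0.5 * h * grad_log_post_cv(theta, data[idx]) \
             + rng.normal(scale=np.sqrt(h))
    samples.append(theta)
```

For this Gaussian likelihood the centred estimate happens to be exact, so the variance reduction is total; in general the reduction is largest when the chain stays near `theta_hat`, which is the regime the paper's log-concavity analysis concerns.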

    Get Real: The Need for Effective Design Research

    Designers use intuition in order to envision possibilities. That strength also contains a weakness: a disinclination to account for what exists in reality. That prevents design from evolving into the powerful role that it could otherwise play. Learning about reality requires the tools that are necessary to perform research, such as theory and methods. Research tools are essential in order to support an opinion or position, to build design solutions in technically challenging application areas, or to advance design as a leadership role instead of a support role. A better understanding and use of research would enable the designer to evolve from craft-bound artisan toward professional. This essay addresses recent influences on design practice, the opportunity for design to evolve in a professional direction, and the methods that will support that evolution.

    An integrated circuit for chip-based analysis of enzyme kinetics and metabolite quantification

    We have created a novel chip-based diagnostic tool based upon quantification of metabolites using enzymes specific for their chemical conversion. Using this device we show for the first time that a solid-state circuit can be used to measure enzyme kinetics and calculate the Michaelis-Menten constant. The substrate-concentration dependency of enzyme reaction rates is central to this aim. Ion-sensitive field effect transistors (ISFETs) are excellent transducers for biosensing applications reliant upon enzyme assays, especially since they can be fabricated using mainstream microelectronics technology to ensure low unit cost, mass manufacture, scaling to many sensors, and straightforward miniaturisation for use in point-of-care devices. Here, we describe an integrated ISFET array comprising 216 sensors. The device was fabricated with a complementary metal oxide semiconductor (CMOS) process. Unlike traditional CMOS ISFET sensors that use the foundry's Si3N4 passivation for ion detection, the device reported here was processed with a layer of Ta2O5 that increased the detection sensitivity to 45 mV/pH unit at the sensor readout. Drift was reduced to 0.8 mV/hour, with a linear pH response between pH 2 and 12. A high-speed instrumentation system capable of acquiring nearly 500 fps was developed to stream out the data. The device was then used to measure glucose concentration through the activity of hexokinase in the range of 0.05 mM – 231 mM, encompassing glucose's physiological range in blood. The localised and temporal enzyme kinetics of hexokinase were studied in detail. These results present a roadmap towards a viable personal metabolome machine.
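The Michaelis-Menten calculation mentioned above can be illustrated numerically. The sketch below is not the chip's processing pipeline: it uses invented, noiseless rate data and the classical Lineweaver-Burk linearization to recover Vmax and the Michaelis-Menten constant Km from the rate law v = Vmax [S] / (Km + [S]).

```python
import numpy as np

# Invented substrate concentrations (mM) spanning the glucose range quoted
# in the abstract, with noiseless Michaelis-Menten rates for illustration.
S = np.array([0.05, 0.5, 2.0, 10.0, 50.0, 231.0])
Vmax_true, Km_true = 1.0, 5.0
v = Vmax_true * S / (Km_true + S)

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax is linear in 1/S,
# so a straight-line fit of 1/v against 1/S recovers both constants.
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est
```

With noisy sensor readings a direct nonlinear least-squares fit of the Michaelis-Menten curve is usually preferred, since the reciprocal transform amplifies error at low rates.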

    Transport Elliptical Slice Sampling

    We propose a new framework for efficiently sampling from complex probability distributions using a combination of normalizing flows and elliptical slice sampling (Murray et al., 2010). The central idea is to learn a diffeomorphism, through normalizing flows, that maps the non-Gaussian structure of the target distribution to an approximately Gaussian distribution. We then use the elliptical slice sampler, an efficient and tuning-free Markov chain Monte Carlo (MCMC) algorithm, to sample from the transformed distribution. The samples are then pulled back using the inverse normalizing flow, yielding samples that approximate the stationary target distribution of interest. Our transport elliptical slice sampler (TESS) is optimized for modern computer architectures, where its adaptation mechanism utilizes parallel cores to rapidly run multiple Markov chains for a few iterations. Numerical demonstrations show that TESS produces Monte Carlo samples from the target distribution with lower autocorrelation compared to non-transformed samplers, and demonstrates significant improvements in efficiency when compared to gradient-based proposals designed for parallel computer architectures, given a flexible enough diffeomorphism.
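The recipe above — map through a diffeomorphism, run elliptical slice sampling, pull the samples back — can be sketched compactly. The example below is illustrative only: it replaces the learned normalizing flow with a hand-picked, deliberately imperfect affine map on a toy Gaussian target, and all numbers are invented. The elliptical slice step follows Murray et al. (2010).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target (stand-in for a complex posterior): independent 2-D Gaussian.
mu, sig = np.array([2.0, -1.0]), np.array([1.0, 3.0])
def log_target(x):
    return -0.5 * np.sum(((x - mu) / sig) ** 2)

# "Flow": an imperfect affine map z -> mu_hat + sig_hat * z, standing in
# for a learned normalizing flow toward a standard Gaussian.
mu_hat, sig_hat = np.array([1.8, -0.8]), np.array([1.2, 2.5])
def T(z):
    return mu_hat + sig_hat * z

def log_lik(z):
    # Transformed target density relative to the N(0, I) reference: the
    # "likelihood" the elliptical slice sampler needs (the Jacobian of the
    # affine map is constant, so it is omitted).
    return log_target(T(z)) + 0.5 * np.sum(z ** 2)

def ess_step(z):
    """One elliptical slice sampling update (Murray et al., 2010)."""
    nu = rng.normal(size=z.shape)                 # auxiliary Gaussian draw
    log_y = log_lik(z) + np.log(rng.uniform())    # slice height
    theta = rng.uniform(0.0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:                                   # shrinkage on the ellipse
        z_prop = z * np.cos(theta) + nu * np.sin(theta)
        if log_lik(z_prop) > log_y:
            return z_prop
        if theta < 0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

z, xs = np.zeros(2), []
for _ in range(5000):
    z = ess_step(z)
    xs.append(T(z))  # pull back to the original space
xs = np.array(xs)
```

The better the map normalizes the target, the closer the transformed distribution is to N(0, I) and the faster the slice sampler mixes, which is the "flexible enough diffeomorphism" caveat in the abstract.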

    Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates

    In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate. Comment: ICML 2023.
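The coin-betting idea can be conveyed on a single scalar objective. The sketch below is not the paper's particle algorithm: it applies a Krichevsky-Trofimov (KT) coin-betting update, the learning-rate-free mechanism such methods build on, to minimize one convex function, with an assumed gradient bound used to normalize the "coin" outcomes.

```python
import numpy as np

def coin_betting_minimize(grad, x0=0.0, n_steps=2000, w0=1.0, grad_bound=10.0):
    """Learning-rate-free scalar minimization via KT coin betting.
    `grad_bound` is an assumed bound on |grad| (an invented constant here)
    that keeps each coin outcome in [-1, 1]."""
    wealth, outcome_sum = w0, 0.0
    iterates = []
    for t in range(1, n_steps + 1):
        bet_fraction = outcome_sum / t                 # KT betting fraction
        x = x0 + bet_fraction * wealth                 # iterate = current bet
        c = np.clip(-grad(x) / grad_bound, -1.0, 1.0)  # coin outcome
        wealth += c * (x - x0)                         # win/lose the bet
        outcome_sum += c
        iterates.append(x)
    return np.mean(iterates[n_steps // 2:])            # averaged iterate

# Minimize (x - 3)^2 with no learning rate at all.
x_star = coin_betting_minimize(lambda x: 2.0 * (x - 3.0))
```

The negative gradient plays the coin: consistently signed outcomes compound the wealth and accelerate movement, while sign flips near the optimum shrink the bets automatically, the role a hand-tuned learning rate usually plays.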