Positive Definite Kernels in Machine Learning
This survey is an introduction to positive definite kernels and the set of
methods they have inspired in the machine learning literature, namely kernel
methods. We first discuss some properties of positive definite kernels as well
as reproducing kernel Hilbert spaces, the natural extension of the set of
functions {k(x, ·), x ∈ X} associated with a kernel k defined
on a space X. We discuss at length the construction of kernel
functions that take advantage of well-known statistical models. We provide an
overview of numerous data-analysis methods which take advantage of reproducing
kernel Hilbert spaces and discuss the idea of combining several kernels to
improve the performance on certain tasks. We also provide a short cookbook of
different kernels which are particularly useful for certain data-types such as
images, graphs or speech segments.
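As a concrete illustration of the kind of object the survey studies, here is a minimal Python sketch (not taken from the survey itself) that builds the Gram matrix of the Gaussian RBF kernel on a set of points and verifies numerically that it is positive semi-definite; the function name rbf_kernel and the bandwidth parameter sigma are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

# Build a Gram matrix on random points; positive definiteness of the kernel
# means all eigenvalues are >= 0 (up to floating-point error).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = rbf_kernel(X, X)
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # True
```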
A Smoothed Dual Approach for Variational Wasserstein Problems
Variational problems that involve Wasserstein distances have been recently
proposed to summarize and learn from probability measures. Despite being
conceptually simple, such problems are computationally challenging because they
involve minimizing over quantities (Wasserstein distances) that are themselves
hard to compute. We show that the dual formulation of Wasserstein variational
problems introduced recently by Carlier et al. (2014) can be regularized using
an entropic smoothing, which leads to smooth, differentiable, convex
optimization problems that are simpler to implement and numerically more
stable. We illustrate the versatility of this approach by applying it to the
computation of Wasserstein barycenters and gradient flows of spatial
regularization functionals.
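To make the entropic-smoothing idea concrete, the following minimal Python sketch (an illustration in the spirit of the abstract, not the authors' implementation) adds an entropy term to a discrete optimal transport problem, which turns it into a smooth convex problem solvable by simple alternating scaling (Sinkhorn) updates; the values of epsilon, n_iter, and the toy cost matrix are assumptions made for the example.

```python
import numpy as np

def sinkhorn(a, b, C, epsilon=0.1, n_iter=200):
    """Entropy-regularized optimal transport between histograms a and b.

    Minimizes <P, C> - epsilon * H(P) over couplings P with marginals a, b;
    the entropy term makes the problem smooth, convex, and numerically stable.
    """
    K = np.exp(-C / epsilon)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):           # alternating dual scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # optimal regularized coupling
    return np.sum(P * C)              # smoothed transport cost

# Toy example: two histograms on a 1-D grid with squared-distance cost.
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
print(sinkhorn(a, b, C))
```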
Sliced Wasserstein Kernel for Persistence Diagrams
Persistence diagrams (PDs) play a key role in topological data analysis
(TDA), in which they are routinely used to describe topological properties of
complicated shapes. PDs enjoy strong stability properties and have proven their
utility in various learning contexts. They do not, however, live in a space
naturally endowed with a Hilbert structure and are usually compared with
specific distances, such as the bottleneck distance. To incorporate PDs in a
learning pipeline, several kernels have been proposed for PDs with a strong
emphasis on the stability of the RKHS distance w.r.t. perturbations of the PDs.
In this article, we use the Sliced Wasserstein approximation SW of the
Wasserstein distance to define a new kernel for PDs, which is not only provably
stable but also provably discriminative (depending on the number of points in
the PDs) w.r.t. the Wasserstein distance between PDs. We also demonstrate
its practicality by developing an approximation technique to reduce kernel
computation time, and show that our proposal compares favorably to existing
kernels for PDs on several benchmarks.
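The sliced construction is easy to prototype. Below is a minimal Python sketch of a sliced-Wasserstein-style kernel between persistence diagrams, following the general recipe the abstract describes (project diagram points onto lines, compare sorted projections, exponentiate the resulting distance); the direction count n_dirs, bandwidth sigma, and the diagonal-augmentation step are illustrative choices rather than the paper's exact algorithm.

```python
import numpy as np

def sliced_wasserstein(D1, D2, n_dirs=50):
    """Approximate sliced Wasserstein distance between persistence diagrams.

    D1, D2: arrays of shape (n, 2) of (birth, death) points. Each diagram is
    augmented with the diagonal projections of the other's points so that
    both have the same cardinality, a standard device for comparing PDs.
    """
    proj = lambda D: np.column_stack([D.sum(1) / 2, D.sum(1) / 2])
    A = np.vstack([D1, proj(D2)])
    B = np.vstack([D2, proj(D1)])
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_dirs, endpoint=False)
    dist = 0.0
    for t in thetas:
        w = np.array([np.cos(t), np.sin(t)])
        # 1-D Wasserstein distance = L1 distance between sorted projections.
        dist += np.abs(np.sort(A @ w) - np.sort(B @ w)).sum()
    return dist / n_dirs

def sw_kernel(D1, D2, sigma=1.0, n_dirs=50):
    """Gaussian-type kernel built on the sliced Wasserstein distance."""
    return np.exp(-sliced_wasserstein(D1, D2, n_dirs) / (2.0 * sigma**2))

# Toy diagrams: each row is a (birth, death) pair.
D1 = np.array([[0.0, 1.0], [0.2, 0.5]])
D2 = np.array([[0.1, 0.9], [0.4, 0.6], [0.0, 0.3]])
print(sw_kernel(D1, D2))
```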
Systematic biases in human heading estimation.
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
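The population-vector account mentioned above is straightforward to simulate. The Python sketch below (an illustrative reconstruction, not the authors' model code) decodes heading from a population with an overrepresentation of laterally tuned units and shows the resulting lateral bias for a forward heading; the cosine tuning, tuning widths, and cell counts are all assumptions made for the example.

```python
import numpy as np

# Preferred directions: overrepresent lateral headings (+/- 90 deg),
# as reported for MSTd and otolith afferent populations.
rng = np.random.default_rng(1)
lateral = np.concatenate([rng.normal(90, 20, 400), rng.normal(-90, 20, 400)])
uniform = rng.uniform(-180, 180, 200)
prefs = np.deg2rad(np.concatenate([lateral, uniform]))

def decode(heading_deg):
    """Population vector decoding: sum preferred-direction unit vectors
    weighted by (rectified cosine-tuned) firing rates, read off the angle."""
    h = np.deg2rad(heading_deg)
    rates = np.maximum(np.cos(h - prefs), 0.0)   # rectified cosine tuning
    vec = rates @ np.column_stack([np.cos(prefs), np.sin(prefs)])
    return np.rad2deg(np.arctan2(vec[1], vec[0]))

# A 10 deg forward heading decodes to a larger angle: the estimate is
# pulled toward the overrepresented lateral directions, i.e. a lateral bias.
print(decode(10.0))
```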
