Dictionary learning with large step gradient descent for sparse representations
This is the accepted version of an article published in Lecture Notes in Computer Science Volume 7191, 2012, pp 231-238. The final publication is available at link.springer.com
http://www.springerlink.com/content/l1k4514765283618
Simultaneous sparse approximation via greedy pursuit
A simple sparse approximation problem requests an approximation of a given input signal as a linear combination of T elementary signals drawn from a large, linearly dependent collection. An important generalization is simultaneous sparse approximation. Now one must approximate several input signals at once using different linear combinations of the same T elementary signals. This formulation appears, for example, when analyzing multiple observations of a sparse signal that have been contaminated with noise. A new approach to this problem is presented here: a greedy pursuit algorithm called simultaneous orthogonal matching pursuit. The paper proves that the algorithm calculates simultaneous approximations whose error is within a constant factor of the optimal simultaneous approximation error. This result requires that the collection of elementary signals be weakly correlated, a property that is also known as incoherence. Numerical experiments demonstrate that the algorithm often succeeds, even when the inputs do not meet the hypotheses of the proof.
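The greedy selection rule described above can be sketched in a few lines: at each step, pick the atom whose summed correlation with all current residuals is largest, then re-fit every signal on the selected support. A minimal NumPy sketch; the names `Phi`, `Y`, and `somp` are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def somp(Phi, Y, T):
    """Simultaneous orthogonal matching pursuit (sketch).

    Phi : (d, N) dictionary with unit-norm columns.
    Y   : (d, K) matrix of K input signals.
    T   : number of elementary signals (atoms) to select.
    Returns the selected atom indices and the coefficient matrix.
    """
    residual = Y.copy()
    support = []
    coeffs = np.zeros((0, Y.shape[1]))
    for _ in range(T):
        # Score each atom by its summed correlation with all residuals.
        scores = np.abs(Phi.T @ residual).sum(axis=1)
        scores[support] = -np.inf          # never pick an atom twice
        support.append(int(np.argmax(scores)))
        # Re-fit all signals on the current support (orthogonal projection).
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coeffs
    return support, coeffs
```

With noiseless inputs that truly are T-sparse in the dictionary, the residual is driven to zero once the correct atoms are found.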
Algorithmic linear dimension reduction in the l_1 norm for sparse vectors
This paper develops a new method for recovering m-sparse signals that is
simultaneously uniform and quick. We present a reconstruction algorithm whose
run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal.
The reconstruction error is within a logarithmic factor (in m) of the optimal
m-term approximation error in l_1. In particular, the algorithm recovers
m-sparse signals perfectly and noisy signals are recovered with polylogarithmic
distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a
logarithmic factor of optimal. We also present a small-space implementation of
the algorithm. These sketching techniques and the corresponding reconstruction
algorithms provide an algorithmic dimension reduction in the l_1 norm. In
particular, vectors of support m in dimension d can be linearly embedded into
O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a
vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)).
Furthermore, this reconstruction is stable and robust under small
perturbations.
Improved sparse approximation over quasi-incoherent dictionaries
This paper discusses a new greedy algorithm for solving the sparse approximation problem over quasi-incoherent dictionaries. These dictionaries consist of waveforms that are uncorrelated "on average," and they provide a natural generalization of incoherent dictionaries. Unlike most other methods for sparse approximation, the algorithm provides strong guarantees on the quality of the approximations it produces. Moreover, very efficient implementations are possible via approximate nearest-neighbor data structures.
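The "uncorrelated on average" property is commonly quantified by the cumulative coherence (Babel function) mu_1(m): the largest total correlation between any atom and any m other atoms. A minimal sketch of how it can be computed; the function and variable names are assumptions for illustration.

```python
import numpy as np

def cumulative_coherence(Phi, m):
    """Babel function mu_1(m) of a dictionary Phi with unit-norm columns:
    the max over atoms of the summed m largest absolute inner products
    with the other atoms. Small values mean the atoms are uncorrelated
    'on average'; mu_1(1) is the ordinary mutual coherence."""
    G = np.abs(Phi.T @ Phi)            # Gram matrix of the atoms
    np.fill_diagonal(G, 0.0)           # exclude self-correlation
    # For each atom, sum its m largest off-diagonal correlations.
    largest = -np.sort(-G, axis=1)[:, :m]
    return largest.sum(axis=1).max()
```

For an orthonormal dictionary every mu_1(m) is zero; greedy-pursuit guarantees typically require mu_1(m) to stay bounded as m grows.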
A low-order decomposition of turbulent channel flow via resolvent analysis and convex optimization
We combine resolvent-mode decomposition with techniques from convex
optimization to optimally approximate velocity spectra in a turbulent channel.
The velocity is expressed as a weighted sum of resolvent modes that are
dynamically significant, non-empirical, and scalable with Reynolds number. To
optimally represent DNS data at friction Reynolds number , we determine
the weights of resolvent modes as the solution of a convex optimization
problem. Using only modes per wall-parallel wavenumber pair and temporal
frequency, we obtain close agreement with DNS-spectra, reducing the wall-normal
and temporal resolutions used in the simulation by three orders of magnitude
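The weighting step described above amounts to fitting target data as a weighted sum of a small set of precomputed modes. A minimal least-squares stand-in for the paper's convex program; the names `fit_mode_weights`, `modes`, and `target` are assumptions for illustration, and the paper's actual formulation constrains the fit further.

```python
import numpy as np

def fit_mode_weights(modes, target):
    """Given a (points x num_modes) matrix whose columns are precomputed
    resolvent-like modes, find the weights that best reproduce the target
    data in the least-squares sense. A sketch only: the paper solves a
    convex optimization problem, of which unconstrained least squares is
    the simplest special case."""
    weights, *_ = np.linalg.lstsq(modes, target, rcond=None)
    return weights
```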
On the linear independence of spikes and sines
The purpose of this work is to survey what is known about the linear
independence of spikes and sines. The paper provides new results for the case
where the locations of the spikes and the frequencies of the sines are chosen
at random. This problem is equivalent to studying the spectral norm of a random
submatrix drawn from the discrete Fourier transform matrix. The proof
depends on an extrapolation argument of Bourgain and Tzafriri.
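The equivalent formulation is easy to experiment with numerically: draw a random row/column submatrix of the unitary DFT matrix and compute its spectral norm. A minimal sketch; the sizes and seed are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary discrete Fourier matrix

# Random spike locations select rows; random sine frequencies select columns.
rows = rng.choice(n, size=32, replace=False)
cols = rng.choice(n, size=32, replace=False)
sub = F[np.ix_(rows, cols)]

# The spectral norm of this random submatrix governs how close the chosen
# spikes and sines come to linear dependence (norm 1 would allow it).
spectral_norm = np.linalg.norm(sub, 2)
```

Since `sub` is a submatrix of a unitary matrix, its spectral norm can never exceed 1; the question studied is how far below 1 it typically stays.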
Sparsity and Incoherence in Compressive Sampling
We consider the problem of reconstructing a sparse signal x_0 in R^n from a
limited number of linear measurements. Given m randomly selected samples of
U x_0, where U is an orthonormal matrix, we show that l_1 minimization
recovers x_0 exactly when the number of measurements exceeds
m >= Const * mu^2(U) * S * log n, where S is the number of
nonzero components in x_0, and mu(U) is the largest entry in U properly
normalized: mu(U) = sqrt(n) * max_{k,j} |U_{k,j}|. The smaller mu(U),
the fewer samples needed.
The result holds for "most" sparse signals x_0 supported on a fixed (but
arbitrary) set T. Given T, if the sign of x_0 for each nonzero entry on T
and the observed values of U x_0 are drawn at random, the signal is
recovered with overwhelming probability. Moreover, there is a sense in which
this is nearly optimal, since any method succeeding with the same probability
would require just about this many samples.
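The l_1 minimization step can be carried out with an off-the-shelf linear-programming solver by splitting the unknown into positive and negative parts. A minimal sketch using SciPy; the function name `l1_minimize` is an assumption, and this is a generic solver, not the paper's code.

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Solve min ||x||_1 subject to A x = b as a linear program.

    Write x = u - v with u, v >= 0; then ||x||_1 = sum(u + v) at the
    optimum, and the constraint becomes A(u - v) = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum of u and v entries
    A_eq = np.hstack([A, -A])          # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]
```

On a random underdetermined system with a sufficiently sparse solution, this recovers the sparse vector exactly, matching the kind of guarantee stated above.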
Analysis of Basis Pursuit Via Capacity Sets
Finding the sparsest solution x for an under-determined linear system
of equations Ax = b is of interest in many applications. This problem is
known to be NP-hard. Recent work studied conditions on the support size of x
that allow its recovery using L1-minimization, via the Basis Pursuit
algorithm. These conditions often rely on a scalar property of A
called the mutual coherence. In this work we introduce an alternative set of
features of an arbitrarily given A, called the "capacity sets". We show how
these could be used to analyze the performance of basis pursuit, leading to
improved bounds and predictions of performance. Both theoretical and numerical
methods are presented, all using the capacity values, and shown to lead to
improved assessments of the success of basis pursuit in finding the sparsest
solution of Ax = b.
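The scalar property the capacity-set analysis aims to improve on is easy to compute. A minimal sketch of the mutual coherence and the classical coherence-based recovery bound; the function names are assumptions for illustration.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns
    of A: the scalar property that classical basis pursuit conditions use."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)           # ignore each column against itself
    return G.max()

def bp_support_bound(A):
    """Classical coherence bound: basis pursuit recovers any x whose
    support size is below (1 + 1/mu(A)) / 2. Capacity sets aim to give
    sharper, matrix-specific predictions than this single scalar allows."""
    mu = mutual_coherence(A)
    return 0.5 * (1.0 + 1.0 / mu)
```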
Differential behavioral state-dependence in the burst properties of CA3 and CA1 neurons
Brief bursts of fast high-frequency action potentials are a signature characteristic of CA3 and CA1 pyramidal neurons. Understanding the factors determining burst and single spiking is potentially significant for sensory representation, synaptic plasticity and epileptogenesis. A variety of models suggest distinct functional roles for burst discharge, and for specific characteristics of the burst in neural coding. However, little in vivo data demonstrate how often and under what conditions CA3 and CA1 actually exhibit burst and single spike discharges. The present study examined burst discharge and single spiking of CA3 and CA1 neurons across distinct behavioral states (awake-immobility and maze-running) in rats. In both CA3 and CA1 spike bursts accounted for less than 20% of all spike events. CA3 neurons exhibited more spikes per burst, greater spike frequency, larger amplitude spikes and more spike amplitude attenuation than CA1 neurons. A major finding of the present study is that the propensity of CA1 neurons to burst was affected by behavioral state, while the propensity of CA3 to burst was not. CA1 neurons exhibited fewer bursts during maze running compared with awake-immobility. In contrast, there were no differences in burst discharge of CA3 neurons. Neurons in both subregions exhibited smaller spike amplitude, fewer spikes per burst, longer inter-spike intervals and greater spike amplitude attenuation within a burst during awake-immobility compared with maze running. These findings demonstrate that the CA1 network is under greater behavioral state-dependent regulation than CA3. The present findings should inform both theoretic and computational models of CA3 and CA1 function. © 2006 IBRO
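Quantifying spikes per burst and inter-spike intervals, as in the study above, requires an operational definition of a burst. A common convention, sketched below, groups spikes whose inter-spike intervals fall below a fixed threshold; the 6 ms default and the function name are assumptions for illustration, not values taken from this study.

```python
import numpy as np

def detect_bursts(spike_times_ms, max_isi_ms=6.0):
    """Group a sorted spike train into bursts by an inter-spike-interval
    criterion: consecutive spikes separated by at most `max_isi_ms` are
    assigned to the same burst. Single spikes are not counted as bursts.
    Returns a list of bursts, each a list of spike times (ms)."""
    bursts, current = [], [spike_times_ms[0]]
    for prev, t in zip(spike_times_ms, spike_times_ms[1:]):
        if t - prev <= max_isi_ms:
            current.append(t)          # still within the same burst
        else:
            bursts.append(current)     # gap too long: close the group
            current = [t]
    bursts.append(current)
    return [b for b in bursts if len(b) >= 2]
```

From the detected bursts one can then tabulate spikes per burst and within-burst intervals per behavioral state.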
Restricted Isometries for Partial Random Circulant Matrices
In the theory of compressed sensing, restricted isometry analysis has become
a standard tool for studying how efficiently a measurement matrix acquires
information about sparse and compressible signals. Many recovery algorithms are
known to succeed when the restricted isometry constants of the sampling matrix
are small. Many potential applications of compressed sensing involve a
data-acquisition process that proceeds by convolution with a random pulse
followed by (nonrandom) subsampling. At present, the theoretical analysis of
this measurement technique is lacking. This paper demonstrates that the s-th
order restricted isometry constant is small when the number m of samples
satisfies m >~ (s log n)^{3/2}, where n is the length of the pulse.
This bound improves on previous estimates, which exhibit quadratic scaling.
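The measurement model described above, convolution with a random pulse followed by deterministic subsampling, is straightforward to simulate with the FFT. A minimal sketch; the sizes, the +/-1 pulse, and the every-4th-sample pattern are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128          # length of the random pulse / signal
m = 32           # number of (nonrandom) samples kept after convolution

pulse = rng.choice([-1.0, 1.0], size=n)              # random +/-1 pulse
x = np.zeros(n)
x[[5, 40, 90]] = [1.0, -2.0, 0.5]                    # a 3-sparse signal

# Circular convolution with the pulse, computed via the FFT, followed by
# deterministic subsampling: the partial random circulant measurement model.
conv = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x)))
y = conv[::n // m]                                   # keep every 4th sample
```

The m entries of `y` are the compressed measurements; the restricted isometry analysis quantifies how many such samples suffice for stable sparse recovery.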
