What is...a Curvelet?
Energized by the success of wavelets, researchers have spent the last two
decades rapidly developing a new field, computational harmonic analysis, which
aims to develop new systems for effectively representing phenomena of
scientific interest. The curvelet transform is a recent addition to the family
of mathematical tools this community enthusiastically builds up. In short, it
is a new multiscale transform with a strong directional character, whose
elements are highly anisotropic at fine scales and whose effective support is
shaped according to the parabolic scaling principle length^2 ~ width.
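In the standard dyadic indexing (the scale parameter j below is my notation, not taken from the abstract), the parabolic scaling can be stated as:

```latex
% Parabolic scaling of curvelet supports at scale 2^{-j}:
\[
  \mathrm{width} \asymp 2^{-j}, \qquad
  \mathrm{length} \asymp 2^{-j/2}, \qquad
  \text{hence}\quad \mathrm{width} \asymp \mathrm{length}^{2}.
\]
```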
Compressed Sensing with off-axis frequency-shifting holography
This work presents an experimental microscopy acquisition scheme successfully
combining Compressed Sensing (CS) and digital holography in off-axis and
frequency-shifting conditions. CS is a recent data acquisition theory involving
signal reconstruction from randomly undersampled measurements, exploiting the
fact that most images present some compact structure and redundancy. We propose
a genuine CS-based imaging scheme for sparse gradient images, acquiring a
diffraction map of the optical field with holographic microscopy and recovering
the signal from as little as 7% of random measurements. We report experimental
results demonstrating how CS can lead to an elegant and effective way to
reconstruct images, opening the door for new microscopy applications.
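As a rough, hedged sketch of the acquisition side only (the image, the mask density, and the forward model below are illustrative assumptions; the actual reconstruction would use a sparse-gradient/total-variation solver as in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object: a piecewise-constant image, i.e. sparse in the gradient domain.
n = 128
img = np.zeros((n, n))
img[32:96, 48:80] = 1.0

# Illustrative forward model: the hologram gives access to the object's
# Fourier-domain diffraction map, of which a random ~7% subset is kept.
keep_fraction = 0.07
mask = rng.random((n, n)) < keep_fraction      # random sampling pattern
measurements = mask * np.fft.fft2(img)         # undersampled diffraction map

print(f"kept {mask.mean():.1%} of the Fourier coefficients")
# CS recovery would then seek the sparsest-gradient image consistent with
# `measurements` on the sampled frequencies.
```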
A fast and accurate first-order algorithm for compressed sensing
This paper introduces a new, fast and accurate algorithm
for solving problems in the area of compressed sensing,
and more generally, in the area of signal and image reconstruction
from indirect measurements. This algorithm
is inspired by recent progress in the development of novel
first-order methods in convex optimization, most notably
Nesterov’s smoothing technique. In particular, there is a
crucial property that makes these methods extremely efficient
for solving compressed sensing problems. Numerical
experiments show the promising performance of our
method to solve problems which involve the recovery of
signals spanning a large dynamic range.
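The paper's own algorithm (NESTA) is based on Nesterov's smoothing and is not reproduced here; purely as a hedged illustration of the kind of accelerated first-order method the abstract alludes to, below is a minimal FISTA sketch for the l1-regularized least-squares problem (all names, sizes, and parameters are my choices):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by accelerated proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x

# Toy compressed-sensing instance: recover a sparse vector from random projections.
rng = np.random.default_rng(0)
n, m, k = 400, 100, 8
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = fista(A, b, lam=1e-3)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```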
Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming
Given a linear system in a real or complex domain, linear regression aims to
recover the model parameters from a set of observations. Recent studies in
compressive sensing have successfully shown that under certain conditions, a
linear program, namely, l1-minimization, guarantees recovery of sparse
parameter signals even when the system is underdetermined. In this paper, we
consider a more challenging problem: when the phase of the output measurements
from a linear system is omitted. Using a lifting technique, we show that even
though the phase information is missing, the sparse signal can be recovered
exactly by solving a simple semidefinite program when the sampling rate is
sufficiently high, even though exact sparse signal recovery and exact phase
retrieval are both combinatorial problems in general. The results extend the
class of applications to which compressive sensing can be applied to include
those where only output magnitudes can be observed. We demonstrate the accuracy
of the algorithms through theoretical analysis, extensive simulations, and a
practical experiment.
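To sketch the lifting step the abstract refers to (the notation and the exact form of the objective below are my reconstruction of a standard compressive phase retrieval semidefinite program, and may differ in details from the paper's formulation):

```latex
% Phaseless measurements b_i = |a_i^* x|^2 are linear in the lifted matrix X = x x^*:
%   b_i = a_i^* x x^* a_i = Tr(a_i a_i^* X).
% Dropping the nonconvex rank-one constraint and promoting sparsity of x via the
% entries of X leads to a semidefinite program of the form
\[
  \min_{X \succeq 0} \ \operatorname{Tr}(X) + \lambda \|X\|_{1}
  \quad \text{subject to} \quad
  \operatorname{Tr}(a_i a_i^{*} X) = b_i, \quad i = 1, \dots, m,
\]
% whose (near) rank-one solution X \approx x x^* recovers x up to a global phase.
```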
Sparse signal and image recovery from Compressive Samples
In this paper we present an introduction to Compressive Sampling
(CS), an emerging model-based framework for data acquisition
and signal recovery based on the premise that a signal
having a sparse representation in one basis can be reconstructed
from a small number of measurements collected in a
second basis that is incoherent with the first. Interestingly, a
random noise-like basis will suffice for the measurement process.
We will overview the basic CS theory, discuss efficient
methods for signal reconstruction, and highlight applications
in medical imaging.
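As a small, hedged illustration of the incoherence premise and of why a random noise-like measurement basis suffices (the construction and the coherence convention below are standard but chosen by me, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Sparsity basis Psi: the canonical (identity) basis, i.e. the signal is
# sparse directly in the sample/pixel domain.
Psi = np.eye(n)

# Measurement basis Phi: a random orthonormal ("noise-like") basis, obtained
# from the QR factorization of a Gaussian matrix.
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Mutual coherence, mu in [1, sqrt(n)]: small values mean the two bases are
# incoherent, so few measurements in Phi suffice for signals sparse in Psi.
mu = np.sqrt(n) * np.max(np.abs(Phi.T @ Psi))
print(f"mu = {mu:.2f}   (1 = maximally incoherent, sqrt(n) = {np.sqrt(n):.1f})")
```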
An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]
This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.
Our intent in this article is to overview the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can — perhaps surprisingly — lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications
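Stated informally (constants, the precise probability model, and normalizations are omitted; the notation is the standard one for a pair of bases Φ and Ψ), the sparsity-plus-incoherence result the article surveys takes the form:

```latex
% l1 recovery from m randomly selected coefficients in the sensing basis \Phi,
% for a signal x that is S-sparse in the representation basis \Psi:
\[
  \hat{x} = \arg\min_{\tilde{x}} \ \|\Psi^{*}\tilde{x}\|_{\ell_1}
  \quad \text{subject to} \quad
  \langle \varphi_k, \tilde{x} \rangle = \langle \varphi_k, x \rangle
  \ \ \text{for all sampled } k,
\]
% recovers x exactly with overwhelming probability provided that
\[
  m \ \gtrsim \ \mu^{2}(\Phi, \Psi) \, S \, \log n ,
\]
% where \mu(\Phi, \Psi) \in [1, \sqrt{n}] is the mutual coherence of the two bases.
```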
Highly Robust Error Correction by Convex Programming
This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors).
We show that if one encodes the information as Ax where A ∈ ℝ^(m×n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
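To make "simple convex optimization programs" concrete, here is a hedged toy sketch of l1 decoding recast as a linear program (the Gaussian coding matrix, the sizes, the corruption model, and the use of the plain l1 decoder, which handles gross errors only, are my assumptions for illustration rather than the paper's exact decoders):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy sizes (my choice): an n-dimensional message encoded into a length-m codeword.
n, m = 64, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)   # illustrative random coding matrix
x_true = rng.standard_normal(n)

# Corrupt 10% of the codeword entries with arbitrary gross errors.
y = A @ x_true
bad = rng.choice(m, size=m // 10, replace=False)
y[bad] += 10.0 * rng.standard_normal(bad.size)

# l1 decoding: minimize ||y - A x||_1, recast as the linear program
#   min sum(t)  subject to  -t <= y - A x <= t,
# over the stacked variable z = [x; t].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[-A, -np.eye(m)],
                 [ A, -np.eye(m)]])
b_ub = np.concatenate([-y, y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

x_hat = res.x[:n]
print("relative decoding error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```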
Ridgelets and the representation of mutilated Sobolev functions
We show that ridgelets, a system introduced in [E. J. Candès, Appl. Comput. Harmon. Anal., 6 (1999), pp. 197–218], are optimal to represent smooth multivariate functions that may exhibit linear singularities. For instance, let {u · x − b > 0} be an arbitrary hyperplane and consider the singular function f(x) = 1_{u·x−b>0} g(x), where g is compactly supported with finite Sobolev norm ||g||_{H^s}, s > 0. The ridgelet coefficient sequence of such an object is as sparse as if f were without singularity, allowing optimal partial reconstructions. For instance, the n-term approximation obtained by keeping the terms corresponding to the n largest coefficients in the ridgelet series achieves a rate of approximation of order n^{−s/d}; the presence of the singularity does not spoil the quality of the ridgelet approximation. This is unlike all systems currently in use, especially Fourier or wavelet representations.
