
    A Levinson-Galerkin algorithm for regularized trigonometric approximation

    Trigonometric polynomials are widely used for the approximation of a smooth function $f$ from a set of nonuniformly spaced samples $\{f(x_j)\}_{j=0}^{N-1}$. If the samples are perturbed by noise, controlling the smoothness of the trigonometric approximation becomes an essential issue to avoid overfitting and underfitting of the data. Using the polynomial degree as regularization parameter we derive a multi-level algorithm that iteratively adapts to the least squares solution of optimal smoothness. The proposed algorithm computes the solution in at most $\mathcal{O}(NM + M^2)$ operations ($M$ being the polynomial degree of the approximation) by solving a family of nested Toeplitz systems. It is shown how the presented method can be extended to multivariate trigonometric approximation. We demonstrate the performance of the algorithm by applying it in echocardiography to the recovery of the boundary of the left ventricle.
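    A minimal numpy sketch of the regularization idea described above: fit trigonometric polynomials of increasing degree to noisy, nonuniformly spaced samples and pick the degree by a simple hold-out criterion. This is not the paper's multi-level Levinson-Galerkin recursion (which exploits the nested Toeplitz structure to reach the stated $\mathcal{O}(NM + M^2)$ cost); the test signal, noise level, and degree range are illustrative assumptions.

```python
# Sketch: trigonometric least-squares approximation from nonuniform noisy samples,
# with the degree M as regularization parameter, selected here by hold-out error
# rather than the paper's Levinson-Galerkin recursion.
import numpy as np

rng = np.random.default_rng(0)
N = 200
x = np.sort(rng.uniform(0.0, 1.0, N))              # nonuniform nodes in [0, 1)
f = np.cos(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x)
y = f + 0.1 * rng.standard_normal(N)               # noisy samples

def trig_design(x, M):
    """Real trigonometric basis of degree M evaluated at the nodes x."""
    cols = [np.ones_like(x)]
    for k in range(1, M + 1):
        cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
    return np.column_stack(cols)

train, val = np.arange(N) % 2 == 0, np.arange(N) % 2 == 1
best_M, best_err = None, np.inf
for M in range(1, 16):
    A = trig_design(x[train], M)
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    err = np.linalg.norm(trig_design(x[val], M) @ coef - y[val])
    if err < best_err:                              # too small M underfits, too large overfits
        best_M, best_err = M, err

print("selected degree M =", best_M)
```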

    Measure What Should be Measured: Progress and Challenges in Compressive Sensing

    Is compressive sensing overrated? Or can it live up to our expectations? What will come after compressive sensing and sparsity? And what has Galileo Galilei got to do with it? Compressive sensing has taken the signal processing community by storm. A large corpus of research devoted to the theory and numerics of compressive sensing has been published in the last few years. Moreover, compressive sensing has inspired and initiated intriguing new research directions, such as matrix completion. Potential new applications emerge at a dazzling rate. Yet some important theoretical questions remain open, and seemingly obvious applications keep escaping the grip of compressive sensing. In this paper I discuss some of the recent progress in compressive sensing and point out key challenges and opportunities as the area of compressive sensing and sparse representations keeps evolving. I also attempt to assess the long-term impact of compressive sensing.

    Almost Eigenvalues and Eigenvectors of Almost Mathieu Operators

    The almost Mathieu operator is the discrete Schr\"odinger operator $H_{\alpha,\beta,\theta}$ on $\ell^2(\mathbb{Z})$ defined via $(H_{\alpha,\beta,\theta}f)(k) = f(k+1) + f(k-1) + \beta \cos(2\pi \alpha k + \theta) f(k)$. We derive explicit estimates for the eigenvalues at the edge of the spectrum of the finite-dimensional almost Mathieu operator. We furthermore show that the (properly rescaled) $m$-th Hermite function $\phi_m$ is an approximate eigenvector of this operator, and that it satisfies the same properties that characterize the true eigenvector associated to the $m$-th largest eigenvalue. Moreover, a properly translated and modulated version of $\phi_m$ is also an approximate eigenvector of this operator, and it satisfies the properties that characterize the true eigenvector associated to the $m$-th largest (in modulus) negative eigenvalue. The results hold at the edge of the spectrum, for any choice of $\theta$ and under very mild conditions on $\alpha$ and $\beta$. We also give precise estimates for the size of the "edge", and extend some of our results to the infinite-dimensional case. The ingredients for our proofs comprise Taylor expansions, basic time-frequency analysis, Sturm sequences, and perturbation theory for eigenvalues and eigenvectors. Numerical simulations demonstrate the tight fit of the theoretical estimates.
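    Since the operator is stated explicitly, the flavor of this result can be checked numerically. The sketch below builds a finite almost Mathieu matrix, takes the eigenvector of its largest eigenvalue, and measures its overlap with a rescaled Gaussian (the 0-th Hermite function $\phi_0$). The parameter values and the brute-force search over the Gaussian width are illustrative assumptions, not the rescaling derived in the paper.

```python
# Sketch: compare the top eigenvector of a finite almost Mathieu matrix with a
# rescaled Gaussian. Parameters and the width search are illustrative only.
import numpy as np

alpha, beta, theta, K = 1 / np.sqrt(5), 2.0, 0.0, 200
k = np.arange(-K, K + 1)
H = (np.diag(beta * np.cos(2 * np.pi * alpha * k + theta))
     + np.diag(np.ones(2 * K), 1) + np.diag(np.ones(2 * K), -1))

evals, evecs = np.linalg.eigh(H)
v = evecs[:, -1]                                   # eigenvector of the largest eigenvalue
v *= np.sign(v[np.argmax(np.abs(v))])              # fix the sign for comparison

# With theta = 0 the potential peaks at k = 0, so center the Gaussian there and
# scan over its width for the best normalized overlap with v.
best = max(
    abs(np.dot(v, g / np.linalg.norm(g)))
    for sigma in np.linspace(1.0, 20.0, 200)
    for g in [np.exp(-(k / sigma) ** 2 / 2)]
)
print("largest eigenvalue:", evals[-1])
print("best overlap with a rescaled Gaussian:", best)
```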

    Performance Analysis of Spectral Clustering on Compressed, Incomplete and Inaccurate Measurements

    Spectral clustering is one of the most widely used techniques for extracting the underlying global structure of a data set. Compressed sensing and matrix completion have emerged as prevailing methods for efficiently recovering sparse and partially observed signals, respectively. We combine the distance-preserving measurements of compressed sensing and matrix completion with the power of robust spectral clustering. Our analysis provides rigorous bounds on how small errors in the affinity matrix can affect the spectral coordinates and clusterability. This work generalizes the current perturbation results of two-class spectral clustering to incorporate multi-class clustering with k eigenvectors. We thoroughly track how small perturbations from using compressed sensing and matrix completion affect the affinity matrix and, in turn, the spectral coordinates. These perturbation results for multi-class clustering require an eigengap between the k-th and (k+1)-th eigenvalues of the affinity matrix, which naturally occurs in data with k well-defined clusters. Our theoretical guarantees are complemented with numerical results, along with a number of examples of the unsupervised organization and clustering of image data.
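    A minimal sketch of the pipeline studied above: a Gaussian random projection stands in for the distance-preserving compressed measurements, followed by standard spectral clustering with k eigenvectors of the normalized Laplacian. All sizes, the affinity bandwidth, and the use of scipy's kmeans2 are illustrative assumptions, not the paper's setup.

```python
# Sketch: spectral clustering on randomly compressed data.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
k, d, m, n_per = 3, 200, 30, 50                    # clusters, ambient dim, compressed dim, points per cluster
centers = 5.0 * rng.standard_normal((k, d))
X = np.vstack([c + rng.standard_normal((n_per, d)) for c in centers])

Phi = rng.standard_normal((m, d)) / np.sqrt(m)     # distance-preserving random measurements
Y = X @ Phi.T                                      # compressed data

D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2 * np.median(D2)))              # Gaussian affinity matrix
deg = W.sum(axis=1)
L = np.eye(len(W)) - W / np.sqrt(np.outer(deg, deg))   # normalized graph Laplacian

evals, evecs = np.linalg.eigh(L)
U = evecs[:, :k]                                   # spectral coordinates: k bottom eigenvectors
U /= np.linalg.norm(U, axis=1, keepdims=True)      # row-normalize before clustering
_, labels = kmeans2(U, k, minit='++')
print("cluster sizes:", np.bincount(labels))
```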

    Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing

    We study the question of extracting a sequence of functions $\{\boldsymbol{f}_i, \boldsymbol{g}_i\}_{i=1}^s$ from observing only the sum of their convolutions, i.e., from $\boldsymbol{y} = \sum_{i=1}^s \boldsymbol{f}_i \ast \boldsymbol{g}_i$. While convex optimization techniques are able to solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, for medium-size or large-size problems we need computationally faster methods without sacrificing the benefits of mathematical rigor that come with convex methods. In this paper, we present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges to the global minimum linearly and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even if the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework in wireless communications in connection with the Internet of Things. Comment: Accepted to Information and Inference: A Journal of the IMA.
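    The observation model above is cheap to simulate because circular convolution diagonalizes under the FFT, which is also what keeps gradient steps inexpensive. The sketch below implements the forward map $\boldsymbol{y} = \sum_{i=1}^s \boldsymbol{f}_i \ast \boldsymbol{g}_i$ and the gradient of the least-squares misfit with respect to one set of unknowns; it is a simplified illustration, not the paper's two-step regularized gradient descent algorithm, and all dimensions and signals are assumptions.

```python
# Sketch: the joint blind deconvolution-demixing observation model and one
# least-squares gradient, computed in the Fourier domain.
import numpy as np

rng = np.random.default_rng(3)
s, n = 4, 256                                      # number of sources, signal length
F = rng.standard_normal((s, n))                    # unknown filters f_i
G = rng.standard_normal((s, n))                    # unknown signals g_i

def forward(F, G):
    """y = sum_i f_i (circular conv) g_i, computed via the FFT."""
    return np.fft.ifft(np.sum(np.fft.fft(F, axis=1) * np.fft.fft(G, axis=1), axis=0)).real

y = forward(F, G)

# Gradient of 0.5 * ||forward(F, Ghat) - y||^2 with respect to each ghat_i:
# correlate the residual with f_i (conjugation in the Fourier domain).
Ghat = rng.standard_normal((s, n))
residual = forward(F, Ghat) - y
grad_G = np.fft.ifft(np.conj(np.fft.fft(F, axis=1)) * np.fft.fft(residual)[None, :], axis=1).real
print("misfit:", np.linalg.norm(residual), " gradient shape:", grad_G.shape)
```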

    Inverse-Closedness of a Banach Algebra of Integral Operators on the Heisenberg Group

    Let $\mathbb{H}$ be the general, reduced Heisenberg group. Our main result establishes the inverse-closedness of a class of integral operators acting on $L^{p}(\mathbb{H})$, given by the off-diagonal decay of the kernel. As a consequence of this result, we show that if $\alpha_{1}I+S_{f}$, where $S_{f}$ is the operator given by convolution with $f$, $f\in L^{1}_{v}(\mathbb{H})$, is invertible in $\mathcal{B}(L^{p}(\mathbb{H}))$, then $(\alpha_{1}I+S_{f})^{-1}=\alpha_{2}I+S_{g}$, with $g\in L^{1}_{v}(\mathbb{H})$. We prove analogous results for twisted convolution operators and apply the latter results to a class of Weyl pseudodifferential operators. We briefly discuss relevance to mobile communications. Comment: This version corrects two mistakes and recognizes the work of other authors related to a corollary of our main theorem.

    Accurate detection of moving targets via random sensor arrays and Kerdock codes

    The detection and parameter estimation of moving targets is one of the most important tasks in radar. Arrays of randomly distributed antennas have been popular for this purpose for about half a century. Yet, surprisingly little rigorous mathematical theory exists for random arrays that addresses fundamental questions such as how many targets can be recovered, at what resolution, at which noise level, and with which algorithm. In a different line of research in radar, mathematicians and engineers have invested significant effort into the design of radar transmission waveforms which satisfy various desirable properties. In this paper we bring these two seemingly unrelated areas together. Using tools from compressive sensing we derive a theoretical framework for the recovery of targets in the azimuth-range-Doppler domain via random antenna arrays. In one manifestation of our theory we use Kerdock codes as transmission waveforms and exploit some of their peculiar properties in our analysis. Our paper provides two main contributions: (i) We derive the first rigorous mathematical theory for the detection of moving targets using random sensor arrays. (ii) The transmitted waveforms satisfy a variety of properties that are very desirable and important from a practical viewpoint. Thus our approach does not just lead to useful theoretical insights, but is also of practical importance. Various extensions of our results are derived and numerical simulations confirming our theory are presented.
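    A heavily simplified sketch of the compressive-sensing viewpoint: targets are modeled as a sparse vector on a delay-Doppler grid and recovered by orthogonal matching pursuit. A random unit-modulus waveform stands in for the Kerdock codes and a single receive channel replaces the random antenna array (so the azimuth dimension is dropped); the grid size, the number of targets, and the solver are illustrative assumptions.

```python
# Sketch: sparse target recovery on a delay-Doppler grid via orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(4)
n = 64                                             # waveform length; grid is n delays x n Doppler shifts
w = np.exp(2j * np.pi * rng.uniform(size=n))       # transmitted waveform (random phases, not a Kerdock code)

# Dictionary column for (delay tau, Doppler nu): time-shifted, modulated copy of w.
cols = [np.roll(w, tau) * np.exp(2j * np.pi * nu * np.arange(n) / n)
        for tau in range(n) for nu in range(n)]
A = np.column_stack(cols) / np.sqrt(n)

support = rng.choice(n * n, size=5, replace=False) # 5 targets with random complex gains
x = np.zeros(n * n, dtype=complex)
x[support] = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = A @ x + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Orthogonal matching pursuit: greedily pick the best-correlated column, refit, repeat.
S, r = [], y.copy()
for _ in range(5):
    S.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef
print("true support:", sorted(support), " recovered:", sorted(S))
```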