
    On sparsity averaging

    Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend the associated simulations establishing the superiority of SARA over regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.
    Comment: 4 pages, 3 figures, Proceedings of the 10th International Conference on Sampling Theory and Applications (SampTA). Code available at https://github.com/basp-group/sopt; full journal letter available at http://arxiv.org/abs/arXiv:1208.233
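    The SARA prior rests on a dictionary that concatenates several coherent frames, normalized so that the concatenation remains a tight frame, together with an iterative reweighting of the analysis coefficients. The sketch below is not the authors' SOPT code: it assembles such a sparsity-averaging dictionary from the Dirac basis and two DCT-derived orthonormal bases (stand-ins for the Dirac plus Daubechies concatenation used in the papers), and the signal length, basis choice and delta parameter are assumptions made purely for illustration.

```python
# Minimal sketch of a sparsity-averaging dictionary Psi = (1/sqrt(q)) [Psi_1, ..., Psi_q]
# built from q orthonormal bases, in the spirit of the SARA prior (not the SOPT code).
import numpy as np
from scipy.fft import dct, idct

n, q = 64, 3  # signal length and number of frames (illustrative choices)

def analysis(x):
    """Psi^T x: stack the analysis coefficients of each orthonormal basis."""
    c1 = x.copy()                        # Dirac basis
    c2 = dct(x, norm='ortho')            # DCT basis
    c3 = dct(x[::-1], norm='ortho')      # DCT of the flipped signal (another orthobasis)
    return np.concatenate([c1, c2, c3]) / np.sqrt(q)

def synthesis(a):
    """Psi a: average the syntheses from each basis (adjoint of analysis)."""
    c1, c2, c3 = np.split(a, q)
    return (c1 + idct(c2, norm='ortho') + idct(c3, norm='ortho')[::-1]) / np.sqrt(q)

def reweight(a, delta=1e-2):
    """SARA-style reweighting: small coefficients receive large weights."""
    return delta / (delta + np.abs(a))

# The normalization makes Psi a tight frame (Psi Psi^T = Id), so
# synthesis(analysis(x)) returns x exactly.
x = np.random.randn(n)
assert np.allclose(synthesis(analysis(x)), x)
w = reweight(analysis(x))  # weights for the next weighted-l1 analysis subproblem
```

    In the reweighted scheme, each outer iteration solves a weighted analysis-$\ell_1$ problem with such weights and then recomputes them from the new image estimate.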

    PURIFY: a new approach to radio-interferometric imaging

    In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a minimization problem for image reconstruction. This approach was shown, in theory and through simulations in a simple discrete visibility setting, to have the potential to significantly outperform CLEAN and its evolutions. In this work, we leverage the versatility of convex optimization in solving minimization problems to both handle realistic continuous visibilities and offer a highly parallelizable structure, paving the way to significant acceleration of the reconstruction and to high-dimensional data scalability. The new algorithmic structure promoted here relies on the simultaneous direction method of multipliers (SDMM) and contrasts with the current major-minor cycle structure of CLEAN and its evolutions, which in particular cannot handle the state-of-the-art minimization problems under consideration, where neither the regularization term nor the data term is differentiable. We release a beta version of an SDMM-based imaging software package written in C, dubbed PURIFY (http://basp-group.github.io/purify/), that handles various sparsity priors, including our recent average sparsity approach, SARA. We evaluate the performance of different priors through simulations in the continuous visibility setting, confirming the superiority of SARA.
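    The minimization problems targeted above combine a non-differentiable sparsity prior, a hard data-fidelity constraint and an image positivity constraint; SDMM treats each term through its proximal or projection operator. The snippet below sketches only those three building blocks, not PURIFY's C implementation, and the function names and arguments are illustrative.

```python
# Building blocks of an SDMM-type solver for min ||W Psi^T x||_1
# subject to ||y - Phi x||_2 <= eps and x >= 0 (sketch only).
import numpy as np

def prox_l1(alpha, gamma):
    """Soft-thresholding: proximal operator of gamma * ||.||_1 (sparsity prior)."""
    return np.sign(alpha) * np.maximum(np.abs(alpha) - gamma, 0.0)

def proj_l2_ball(r, y, eps):
    """Projection onto the ball {r : ||r - y||_2 <= eps} (data fidelity)."""
    d = r - y
    nrm = np.linalg.norm(d)
    return y + d * min(1.0, eps / nrm) if nrm > 0 else r

def proj_positive(x):
    """Projection onto the positive orthant (image non-negativity)."""
    return np.maximum(x, 0.0)
```

    Roughly speaking, each SDMM iteration applies these operators to their own auxiliary variables, steps that can run in parallel, and then reconciles the variables by solving a linear system (typically with a conjugate gradient method), which is the source of the parallelizable structure mentioned above.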

    Exploiting Prior Knowledge in Compressed Sensing Wireless ECG Systems

    Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results using the MIT-BIH Arrhythmia Database show that the proposed algorithms obtain significant gains, in terms of compression rate and reconstruction quality, over current CS-based methods.
    Comment: Accepted for publication in the IEEE Journal of Biomedical and Health Informatics
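    One of the priors exploited above is the large overlap between the dominant wavelet supports of consecutive ECG segments. The toy sketch below only illustrates how such an overlap could be measured: the PyWavelets decomposition ('db4', 5 levels), the 10% support size and the synthetic segments standing in for MIT-BIH records are assumptions made for the example, not the paper's setup.

```python
# Toy illustration of the "common support" prior between consecutive segments.
import numpy as np
import pywt

def wavelet_support(segment, frac=0.10, wavelet='db4', level=5):
    """Indices of the largest-magnitude wavelet coefficients of a segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)
    k = max(1, int(frac * flat.size))
    return set(np.argsort(np.abs(flat))[-k:])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 2048)
seg1 = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)  # pseudo-ECG segment
seg2 = np.roll(seg1, 7) + 0.05 * rng.standard_normal(t.size)             # next segment

s1, s2 = wavelet_support(seg1), wavelet_support(seg2)
print(f"fraction of common support: {len(s1 & s2) / len(s1):.2f}")
```

    A reconstruction algorithm can then treat the support estimated from the previous segment as partially known support for the current one, which is the kind of prior information the paper feeds into its recovery algorithms.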

    Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with Prior Information

    Commonly employed reconstruction algorithms in compressed sensing (CS) use the $L_2$ norm as the metric for the residual error. However, it is well known that least squares (LS) based estimators are highly sensitive to outliers in the measurement vector, leading to poor performance when the noise no longer follows the Gaussian assumption but is instead better characterized by heavier-than-Gaussian tailed distributions. In this paper, we propose a robust iterative hard thresholding (IHT) algorithm for reconstructing sparse signals in the presence of impulsive noise. To address this problem, we use a Lorentzian cost function instead of the $L_2$ cost function employed by the traditional IHT algorithm. We also modify the algorithm to incorporate prior signal information in the recovery process; specifically, we study the case of CS with partially known support. The proposed algorithm is a fast method with computational load comparable to the LS-based IHT, whilst having the advantage of robustness against heavy-tailed impulsive noise. Sufficient conditions for stability are studied and a reconstruction error bound is derived. We also derive sufficient conditions for stable sparse signal recovery with partially known support. Theoretical analysis shows that including prior support information relaxes the conditions for successful reconstruction. Simulation results demonstrate that the Lorentzian-based IHT algorithm significantly outperforms commonly employed sparse reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments. Numerical results also demonstrate that including the partially known support improves the performance of the proposed algorithm, requiring fewer samples to yield an approximate reconstruction.
    Comment: 28 pages, 9 figures, accepted in IEEE Transactions on Signal Processing
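    A minimal sketch of a Lorentzian-type IHT iteration with partially known support is given below. The fixed step size, the scale parameter gamma and the exact support-selection rule are illustrative assumptions rather than the authors' tuned algorithm; the essential change with respect to standard IHT is the weighted residual in the gradient step, which down-weights outliers, plus a support set that is always retained.

```python
# Sketch of Lorentzian-type iterative hard thresholding with partially known
# support (illustrative, not the authors' exact update rules).
import numpy as np

def lorentzian_iht(y, Phi, s, known_support=(), gamma=1.0, mu=None, n_iter=100):
    n = Phi.shape[1]
    if mu is None:
        mu = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative fixed step size
    known = np.array(sorted(known_support), dtype=int)
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - Phi @ x
        # Gradient (up to a constant factor) of the Lorentzian cost
        # sum_i log(1 + r_i^2 / gamma^2): large residuals are down-weighted,
        # so impulsive measurement noise barely perturbs the update.
        g = Phi.T @ (r / (gamma ** 2 + r ** 2))
        z = x + mu * g
        # Hard thresholding that always keeps the known support and adds
        # the s largest remaining entries.
        keep = np.zeros(n, dtype=bool)
        keep[known] = True
        keep[np.argsort(np.abs(np.where(keep, 0.0, z)))[-s:]] = True
        x = np.where(keep, z, 0.0)
    return x
```

    With the plain residual Phi.T @ r in place of the weighted one and an empty known support, the loop reduces to the standard LS-based IHT recursion.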