1,666 research outputs found

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $m \geq \mathrm{Const}\cdot\mu^2(U)\cdot S\cdot\log n$, where $S$ is the number of nonzero components in $x^0$ and $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$ is the largest entry of $U$, suitably normalized. The smaller $\mu$, the fewer samples needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$: given $T$, if the signs of the nonzero entries of $x^0$ on $T$ and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, this is nearly optimal in the sense that any method succeeding with the same probability would require just about this many samples.
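    As a concrete illustration of the quantities above (a sketch of ours, not code from the paper; numpy/scipy assumed), the snippet below computes $\mu(U)$ for the unitary DFT matrix, which attains the minimal value $\mu = 1$, and then carries out the $\ell_1$ recovery as a linear program for a random orthonormal $U$:

```python
import numpy as np
from scipy.optimize import linprog

# Coherence mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an orthonormal U.
n = 128
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT: every entry has modulus 1/sqrt(n)
print("mu(DFT) =", np.sqrt(n) * np.abs(F).max())   # -> 1.0, the minimum possible

# l1 recovery from m random samples of Q x0, with Q a random orthonormal matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = 5                                     # sparsity of x0
x0 = np.zeros(n)
x0[rng.choice(n, S, replace=False)] = rng.standard_normal(S)

m = 60                                    # number of measurements
rows = rng.choice(n, m, replace=False)
A, y = Q[rows], Q[rows] @ x0

# min ||x||_1  s.t.  Ax = y, as an LP with the split x = u - v, u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x0))
```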

    On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

    We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good", that is, to allow exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. We then express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and basic concepts of compressed sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
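    On toy sizes, the defining property of an $s$-good matrix can be checked by brute force (a hypothetical sketch of ours; the paper's point is precisely to replace this exponential enumeration with verifiable, efficiently computable certificates):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP (split x = u - v, u, v >= 0)."""
    n = A.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def is_s_good(A, s, tol=1e-6):
    """Exact l1 recovery depends only on the support and sign pattern,
    so it suffices to test all +/-1 patterns on all supports of size s.
    Exponential cost: only sensible for toy dimensions."""
    n = A.shape[1]
    for support in itertools.combinations(range(n), s):
        for signs in itertools.product((-1.0, 1.0), repeat=s):
            x0 = np.zeros(n)
            x0[list(support)] = signs
            if np.linalg.norm(l1_min(A, A @ x0) - x0) > tol:
                return False
    return True

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10)) / np.sqrt(6)
for s in (1, 2, 3):
    print(f"s = {s}: s-good = {is_s_good(A, s)}")
```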

    User-friendly tail bounds for sums of random matrices

    This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable.
    Comment: Current paper is the version of record. The material on Freedman's inequality has been moved to a separate note; other martingale bounds are described in Caltech ACM Report 2011-0
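    The flavor of these results is easy to test numerically (our Monte Carlo sketch, not from the paper): for a Rademacher series of fixed symmetric matrices, compare the empirical tail of the maximum eigenvalue with the matrix Bernstein bound $d\,\exp(-t^2/(2\sigma^2 + 2Rt/3))$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, trials = 8, 200, 2000

# Summands X_k = eps_k * A_k: independent, zero mean, self-adjoint,
# with ||X_k|| <= R surely and sigma^2 = ||sum_k E X_k^2|| = ||sum_k A_k^2||.
A_list = [(G + G.T) / (2 * np.sqrt(d)) for G in
          (rng.standard_normal((d, d)) for _ in range(K))]
R = max(np.abs(np.linalg.eigvalsh(A)).max() for A in A_list)
sigma2 = np.linalg.norm(sum(A @ A for A in A_list), 2)

lam_max = np.empty(trials)
for i in range(trials):
    eps = rng.choice([-1.0, 1.0], size=K)
    lam_max[i] = np.linalg.eigvalsh(sum(e * A for e, A in zip(eps, A_list))).max()

for t in (2.0, 2.5, 3.0):                      # t measured in units of sigma
    u = t * np.sqrt(sigma2)
    bound = d * np.exp(-u**2 / (2 * sigma2 + 2 * R * u / 3))
    print(f"t = {u:6.2f}  empirical = {(lam_max >= u).mean():.4f}"
          f"  Bernstein bound = {min(bound, 1.0):.4f}")
```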

    Compressed sensing for wide-field radio interferometric imaging

    For the next generation of radio interferometric telescopes it is of paramount importance to incorporate wide field-of-view (WFOV) considerations in interferometric imaging; otherwise the fidelity of reconstructed images will suffer greatly. We extend compressed sensing techniques for interferometric imaging to a WFOV and recover images in the spherical coordinate space in which they naturally live, eliminating any distorting projection. The effectiveness of the spread spectrum phenomenon, highlighted recently by one of the authors, is enhanced when going to a WFOV, while sparsity is promoted by recovering images directly on the sphere. Both of these properties act to improve the quality of reconstructed interferometric images. We quantify the performance of compressed sensing reconstruction techniques through simulations, highlighting the superior reconstruction quality achieved by recovering interferometric images directly on the sphere rather than the plane.
    Comment: 15 pages, 8 figures, replaced to match version accepted by MNRAS
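    The spread spectrum phenomenon itself is easy to see in a one-dimensional toy model (our sketch, not the paper's wide-field operator): modulating by a linear chirp before Fourier sampling spreads each sparsity-basis atom across the frequency band, driving the mutual coherence from its maximal value toward its minimum.

```python
import numpy as np

n = 256
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary Fourier measurement basis
Psi = F.conj().T                          # sparsity basis: signal sparse in frequency

def mu(M):
    """Mutual coherence sqrt(n) * max |entry| of a unitary product basis."""
    return np.sqrt(n) * np.abs(M).max()

# Without modulation the two bases are maximally coherent: F @ Psi = identity.
print("no chirp:   mu =", mu(F @ Psi))            # = sqrt(n)

# A linear chirp c_t = exp(i*pi*t^2/n) spreads every atom over the full band.
t = np.arange(n)
C = np.diag(np.exp(1j * np.pi * t**2 / n))
print("with chirp: mu =", mu(F @ C @ Psi))        # ~ O(1)
```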

    On the shopfloor: exploring the impact of teacher trade unions on school-based industrial relations

    Teachers are highly unionised workers, and their trade unions exert an important influence on the shaping and implementation of educational policy. Despite this importance, there is relatively little analysis of the impact of teacher trade unions in the educational management literature, and very little empirical research has sought to establish the impact of teacher unions at school level. In an era of devolved management and quasi-markets, this omission is significant. New personnel issues continue to emerge at school level, and this may well generate increased trade union activity in the workplace. This article explores the extent to which devolved management is drawing school-based union representation into a more prominent role. It argues that, whilst there can be significant differences between individual schools, increased school autonomy is raising the profile of trade union activity in the workplace, and this needs to be better reflected in educational management research.

    Restricted Isometries for Partial Random Circulant Matrices

    In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the $s$th-order restricted isometry constant is small when the number $m$ of samples satisfies $m \gtrsim (s \log n)^{3/2}$, where $n$ is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
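    The measurement model is simple to simulate (our sketch, with arbitrary toy parameters): circular convolution with a random Rademacher pulse via the FFT, followed by fixed subsampling; random $s$-sparse probes then give a crude empirical lower bound on the restricted isometry constant $\delta_s$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 1024, 200, 10

pulse = rng.choice([-1.0, 1.0], size=n)   # random pulse defining the circulant matrix
rows = rng.choice(n, m, replace=False)    # subsampling pattern, fixed once

def measure(x):
    """Partial circulant measurement: circular convolution with the pulse,
    then subsampling, scaled so that E ||Ax||^2 = ||x||^2."""
    conv = np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x)).real
    return conv[rows] / np.sqrt(m)

# Crude empirical lower bound on delta_s: the worst deviation of ||Ax||^2
# from 1 over random unit-norm s-sparse vectors.
worst = 0.0
for _ in range(2000):
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)
    worst = max(worst, abs(np.linalg.norm(measure(x)) ** 2 - 1.0))
print("empirical delta_s >=", round(worst, 3))
```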

    Ethnic In-Group Favoritism Among Minority and Majority Groups: Testing the Self-Esteem Hypothesis Among Preadolescents

    The self-esteem hypothesis in intergroup relations, as proposed by social identity theory (SIT), states that successful intergroup discrimination enhances momentary collective self-esteem. This hypothesis is a source of continuing controversy. Furthermore, although SIT is increasingly used to account for children’s group attitudes, few studies have examined the hypothesis among children. In addition, the hypothesis’s generality makes it important to study among children from different ethnic groups. The present study, conducted among Dutch and Turkish preadolescents, examined momentary collective self-feelings as a consequence of ethnic group evaluations. The results tended to support the self-esteem hypothesis. In-group favoritism was found to have a self-enhancing effect among participants high in ethnic identification. This result was found for ethnic majority (Dutch) and minority (Turkish) participants.

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information typically resides in a much lower-dimensional space. However, many solutions proposed today do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated \emph{structured} sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. To better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group-sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications.
    Comment: 30 pages, 18 figures
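    For the group-sparse model, the discrete/convex contrast fits in a few lines (our illustration, assuming non-overlapping groups): hard selection of the $k$ most energetic groups versus the proximal operator of the group-$\ell_1$ (i.e. $\ell_1/\ell_2$) norm, which is block soft-thresholding.

```python
import numpy as np

def project_k_groups(v, groups, k):
    """Discrete model: keep the k groups with largest energy, zero the rest."""
    energies = [np.linalg.norm(v[g]) for g in groups]
    x = np.zeros_like(v)
    for i in np.argsort(energies)[-k:]:
        x[groups[i]] = v[groups[i]]
    return x

def prox_group_l1(v, groups, lam):
    """Convex relaxation: prox of lam * sum_g ||x_g||_2 (block soft-thresholding)."""
    x = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        x[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * v[g]
    return x

v = np.array([3.0, 2.5, 0.2, 0.1, -1.5, 1.0])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(project_k_groups(v, groups, k=1))   # keeps only the first group
print(prox_group_l1(v, groups, lam=1.0))  # shrinks groups, kills the weakest
```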

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity or low complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
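    As a minimal instance of item (iii) (our sketch, for $\ell_1$-regularized least squares), forward-backward splitting alternates a gradient step on the smooth data-fidelity term with the proximal map of the regularizer, here soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Minimize 0.5 ||Ax - y||^2 + lam ||x||_1. Step size 1/L with
    L = ||A||^2 (Lipschitz constant of the gradient) ensures convergence."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(40)
x_hat = forward_backward(A, y, lam=0.02)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```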