The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding vs. Dynamic Range
Compressive sensing (CS) exploits the sparsity present in many signals to
reduce the number of measurements needed for digital acquisition. With this
reduction would come, in theory, commensurate reductions in the size, weight,
power consumption, and/or monetary cost of both signal sensors and any
associated communication links. This paper examines the use of CS in the design
of a wideband radio receiver in a noisy environment. We formulate the problem
statement for such a receiver and establish a reasonable set of requirements
that a receiver should meet to be practically useful. We then evaluate the
performance of a CS-based receiver in two ways: via a theoretical analysis of
its expected performance, with a particular emphasis on noise and dynamic
range, and via simulations that compare the CS receiver against the performance
expected from a conventional implementation. On the one hand, we show that
CS-based systems that aim to reduce the number of acquired measurements are
somewhat sensitive to signal noise, exhibiting a 3 dB SNR loss per octave of
subsampling, which parallels the classic noise-folding phenomenon. On the other
hand, we demonstrate that since they sample at a lower rate, CS-based systems
can potentially attain a significantly larger dynamic range. Hence, we conclude
that while a CS-based system has inherent limitations that do impose some
restrictions on its potential applications, it also has attributes that make it
highly desirable in a number of important practical settings.
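The noise-folding penalty quoted above has a simple back-of-the-envelope form: against a white-noise floor, the expected SNR loss grows as 10·log10(n/m), i.e. roughly 3 dB per octave of subsampling. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def noise_folding_loss_db(n, m):
    """Predicted SNR loss (dB) from noise folding when n Nyquist-rate
    samples are reduced to m compressive measurements."""
    return 10 * math.log10(n / m)

for octaves in range(1, 5):
    n, m = 1024, 1024 // 2 ** octaves
    print(f"{octaves} octave(s) of subsampling: "
          f"{noise_folding_loss_db(n, m):.2f} dB")  # ~3 dB per octave
```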
Measurements design and phenomena discrimination
The construction of measurements suitable for discriminating signal
components produced by phenomena of different types is considered. The required
measurements should be capable of cancelling out those signal components which
are to be ignored when focusing on a phenomenon of interest. Under the
hypothesis that the subspaces hosting the signal components produced by each
phenomenon are complementary, their discrimination is accomplished by
measurements giving rise to the appropriate oblique projector operator. The
subspace onto which the operator should project is selected by nonlinear
techniques in line with adaptive pursuit strategies.
Sparsity and Incoherence in Compressive Sampling
We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a
limited number of linear measurements. Given $m$ randomly selected samples of
$U x^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization
recovers $x^0$ exactly when the number of measurements exceeds
$m \ge \mathrm{Const} \cdot \mu^2(U) \cdot S \cdot \log n$, where $S$ is the number of
nonzero components in $x^0$, and $\mu(U)$ is the largest entry in $U$ properly
normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu(U)$,
the fewer samples needed.
The result holds for ``most'' sparse signals $x^0$ supported on a fixed (but
arbitrary) set $T$. Given $T$, if the sign of $x^0$ for each nonzero entry on $T$
and the observed values of $U x^0$ are drawn at random, the signal is
recovered with overwhelming probability. Moreover, there is a sense in which
this is nearly optimal, since any method succeeding with the same probability
would require just about this many samples.
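The coherence quantity governing the sample bound in this abstract is straightforward to compute. A small sketch of our own, contrasting the maximally incoherent unitary DFT basis with the maximally coherent identity basis:

```python
import numpy as np

def coherence(U):
    """mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an n x n orthonormal U."""
    n = U.shape[0]
    return np.sqrt(n) * np.abs(U).max()

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix
print(coherence(F))   # ~1: maximally incoherent, fewest samples needed
I = np.eye(n)
print(coherence(I))   # sqrt(n) = 8: maximally coherent
```

With mu(U) = 1 the bound reduces to the familiar S·log n measurements; with mu(U) = sqrt(n) it offers no savings at all.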
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing one to
increase the interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
The road to deterministic matrices with the restricted isometry property
The restricted isometry property (RIP) is a well-known matrix condition that
provides state-of-the-art reconstruction guarantees for compressed sensing.
While random matrices are known to satisfy this property with high probability,
deterministic constructions have found less success. In this paper, we consider
various techniques for demonstrating RIP deterministically, some popular and
some novel, and we evaluate their performance. In evaluating some techniques,
we apply random matrix theory and inadvertently find a simple alternative proof
that certain random matrices are RIP. Later, we propose a particular class of
matrices as candidates for being RIP, namely, equiangular tight frames (ETFs).
Using the known correspondence between real ETFs and strongly regular graphs,
we investigate certain combinatorial implications of a real ETF being RIP.
Specifically, we give probabilistic intuition for a new bound on the clique
number of Paley graphs of prime order, and we conjecture that the corresponding
ETFs are RIP in a manner similar to random matrices. Comment: 24 pages
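The two ETF properties the abstract relies on (tightness and equiangularity) are easy to verify numerically. A sketch of our own using the simplest real ETF, three unit vectors in R^2 spaced 120 degrees apart (the "Mercedes-Benz" frame), rather than the Paley-graph ETFs studied in the paper:

```python
import numpy as np

# Three unit vectors in R^2 at 120-degree spacing, as frame columns.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])  # 2 x 3, unit columns

# Tightness: F F^T = (N/d) I with N = 3 vectors in d = 2 dimensions.
print(np.allclose(F @ F.T, 1.5 * np.eye(2)))  # True

# Equiangularity: |<f_i, f_j>| is the same for all i != j (here 1/2).
G = np.abs(F.T @ F)
off = G[~np.eye(3, dtype=bool)]
print(np.allclose(off, 0.5))  # True
```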
Effects of beta-alanine supplementation on brain homocarnosine/carnosine signal and cognitive function: an exploratory study
Objectives: Two independent studies were conducted to examine the effects of 28 d of beta-alanine supplementation at 6.4 g d-1 on brain homocarnosine/carnosine signal in omnivores and vegetarians (Study 1) and on cognitive function before and after exercise in trained cyclists (Study 2). Methods: In Study 1, seven healthy vegetarians (3 women and 4 men) and seven age- and sex-matched omnivores undertook a brain 1H-MRS exam at baseline and after beta-alanine supplementation. In Study 2, nineteen trained male cyclists completed four 20-km cycling time trials (two pre-supplementation and two post-supplementation), with a battery of cognitive function tests (Stroop test, Sternberg paradigm, Rapid Visual Information Processing task) being performed before and after exercise on each occasion. Results: In Study 1, there were no within-group effects of beta-alanine supplementation on brain homocarnosine/carnosine signal in either vegetarians (p = 0.99) or omnivores (p = 0.27); nor was there any effect when data from both groups were pooled (p = 0.19). Similarly, there was no group-by-time interaction for brain homocarnosine/carnosine signal (p = 0.27). In Study 2, exercise improved cognitive function across all tests (p < 0.05), but there was no effect (p > 0.05) of beta-alanine supplementation on response times or accuracy for the Stroop test, Sternberg paradigm or RVIP task at rest or after exercise. Conclusion: 28 d of beta-alanine supplementation at 6.4 g d-1 appeared not to influence brain homocarnosine/carnosine signal in either omnivores or vegetarians; nor did it influence cognitive function before or after exercise in trained cyclists.
Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
($\Sigma\Delta$) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system. Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing
and Its Applications", 201
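Of the quantizer families the chapter surveys, the uniform scalar quantizer is the simplest to state: round each measurement to the centre of a cell of width delta, incurring at most delta/2 of error per sample. A minimal sketch of our own (a midrise quantizer; the function name is an assumption, not from the chapter):

```python
import numpy as np

def uniform_quantize(y, delta):
    """Midrise uniform scalar quantizer with step size delta:
    maps each entry of y to the centre of its quantization cell."""
    return delta * (np.floor(y / delta) + 0.5)

rng = np.random.default_rng(0)
y = rng.uniform(-1.0, 1.0, 10_000)  # stand-in for CS measurements
delta = 0.25
q = uniform_quantize(y, delta)
# Per-sample quantization error never exceeds half a step.
print(np.abs(q - y).max() <= delta / 2)  # True
```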
On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation
We study classic streaming and sparse recovery problems using deterministic
linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the
latter also being known as l1-heavy hitters), norm estimation, and approximate
inner product. We focus on devising a fixed matrix A in R^{m x n} and a
deterministic recovery/estimation procedure which work for all possible input
vectors simultaneously. Our results improve upon existing work, the following
being our main contributions:
* A proof that linf/l1 sparse recovery and inner product estimation are
equivalent, and that incoherent matrices can be used to solve both problems.
Our upper bound for the number of measurements is m=O(eps^{-2}*min{log n, (log
n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms
by making use of the Fast Johnson-Lindenstrauss transform. Both our running
times and number of measurements improve upon previous work. We can also obtain
better error guarantees than previous work in terms of a smaller tail of the
input vector.
* A new lower bound for the number of linear measurements required to solve
l1/l1 sparse recovery. We show Omega(k/eps^2 + klog(n/k)/eps) measurements are
required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where
x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.
* A tight bound of m = Theta(eps^{-2}log(eps^2 n)) on the number of
measurements required to solve deterministic norm estimation, i.e., to recover
|x|_2 +/- eps|x|_1.
For all the problems we study, tight bounds are already known for the
randomized complexity from previous work, except in the case of l1/l1 sparse
recovery, where a nearly tight bound is known. Our work thus aims to study the
deterministic complexities of these problems.
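The first contribution above, that an incoherent matrix yields linf/l1 recovery, can be illustrated numerically: for a matrix A with unit-norm columns and coherence mu, the estimate A^T(Ax) recovers every coordinate of x to within mu·||x||_1. A sketch of our own, using a random unit-column matrix as a stand-in for the deterministic incoherent constructions the paper studies:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 120
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
mu = np.abs(A.T @ A - np.eye(n)).max()      # coherence of A

x = np.zeros(n)
x[3], x[17], x[90] = 5.0, -2.0, 1.0         # sparse input vector
y = A @ x                                   # linear sketch

x_hat = A.T @ y                             # coordinate-wise estimates
# x_hat - x = (A^T A - I) x, so the l_inf error is at most mu * ||x||_1.
print(np.abs(x_hat - x).max() <= mu * np.abs(x).sum() + 1e-9)  # True
```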
Large-scale analysis of frequency modulation in birdsong data bases
DS & MP are supported by an EPSRC Leadership Fellowship EP/G007144/1. Our thanks to Alan McElligott for helpful advice while preparing the manuscript; Sašo Muševič for discussion and for making his DDM software available; and Rémi Gribonval and team at INRIA Rennes for discussion and software development during a research visit
Missing steps in a staircase: a qualitative study of the perspectives of key stakeholders on the use of adaptive designs in confirmatory trials
Background
Despite the promising benefits of adaptive designs (ADs), their routine use, especially in confirmatory trials, is lagging behind the prominence given to them in the statistical literature. Much of the previous research to understand barriers and potential facilitators to the use of ADs has been driven from a pharmaceutical drug development perspective, with little focus on trials in the public sector. In this paper, we explore key stakeholders’ experiences, perceptions and views on barriers and facilitators to the use of ADs in publicly funded confirmatory trials.
Methods
Semi-structured, in-depth interviews of key stakeholders in clinical trials research (CTU directors, funding board and panel members, statisticians, regulators, chief investigators, data monitoring committee members and health economists) were conducted through telephone or face-to-face sessions, predominantly in the UK. We purposively selected participants sequentially to optimise maximum variation in views and experiences. We employed the framework approach to analyse the qualitative data.
Results
We interviewed 27 participants. We found some of the perceived barriers to be: lack of knowledge and experience coupled with paucity of case studies, lack of applied training, degree of reluctance to use ADs, lack of bridge funding and time to support design work, lack of statistical expertise, some anxiety about the impact of early trial stopping on researchers’ employment contracts, lack of understanding of acceptable scope of ADs and when ADs are appropriate, and statistical and practical complexities. Reluctance to use ADs seemed to be influenced by: therapeutic area, unfamiliarity, concerns about their robustness in decision-making and acceptability of findings to change practice, perceived complexities and proposed type of AD, among others.
Conclusions
There are still considerable multifaceted, individual and organisational obstacles to be addressed to improve the uptake and successful implementation of ADs when appropriate. Nevertheless, the inferred positive change in attitudes and receptiveness towards the appropriate use of ADs among public funders is encouraging and a stepping stone for the future utilisation of ADs by researchers.
