528 research outputs found

    Adaptive Detection of Structured Signals in Low-Rank Interference

    In this paper, we consider the problem of detecting the presence (or absence) of an unknown but structured signal from the space-time outputs of an array under strong, non-white interference. Our motivation is the detection of a communication signal in jamming, where often the training portion is known but the data portion is not. We assume that the measurements are corrupted by additive white Gaussian noise of unknown variance and by a few strong interferers, whose number, powers, and array responses are unknown. We also assume the desired signal's array response is unknown. To address the detection problem, we propose several GLRT-based detection schemes that employ a probabilistic signal model and use the EM algorithm for likelihood maximization. Numerical experiments are presented to assess the performance of the proposed schemes.
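    The measurement model described above (array snapshots equal a structured signal plus low-rank interference plus white noise of unknown variance) can be sketched in a few lines. All dimensions, powers, and the BPSK-like waveform below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ant, N_snap, r = 8, 64, 2     # antennas, snapshots, number of interferers (assumed)

# unknown array response of the desired signal, and a known training-style waveform
a = rng.standard_normal(N_ant) + 1j * rng.standard_normal(N_ant)
s = np.sign(rng.standard_normal(N_snap))          # e.g. a BPSK training sequence

# low-rank (rank-r) interference: r strong jammers with unknown responses and powers
J = rng.standard_normal((N_ant, r)) + 1j * rng.standard_normal((N_ant, r))
q = 10.0 * (rng.standard_normal((r, N_snap)) + 1j * rng.standard_normal((r, N_snap)))

sigma = 0.1                                        # noise std, unknown to the detector
noise = sigma * (rng.standard_normal((N_ant, N_snap))
                 + 1j * rng.standard_normal((N_ant, N_snap)))

# space-time array output: rank-1 signal term + rank-r interference + white noise
Y = np.outer(a, s) + J @ q + noise
```

    A GLRT detector would compare the maximized likelihood of `Y` under this model against the interference-plus-noise-only hypothesis.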

    Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing

    For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing. We are particularly motivated by problems where the number of features greatly exceeds the number of training examples, but where only a few features suffice for accurate classification. We show that sum-product GAMP can be used to (approximately) minimize the classification error rate and max-sum GAMP can be used to minimize a wide variety of regularized loss functions. Furthermore, we describe an expectation-maximization (EM)-based scheme to learn the associated model parameters online, as an alternative to cross-validation, and we show that GAMP's state-evolution framework can be used to accurately predict the misclassification rate. Finally, we present a detailed numerical study to confirm the accuracy, speed, and flexibility afforded by our GAMP-based approaches to binary linear classification and feature selection.
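    As a rough, self-contained illustration of the regime described above (far more features than examples, only a few features relevant), here is L1-regularized logistic regression fit by proximal gradient (ISTA). This is a deliberately plain stand-in for the GAMP machinery, with all problem sizes, step size, and regularization weight assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 100, 500, 5                  # few examples, many features, small true support
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:k] = 3.0                       # only the first k features matter
y = (X @ w_true > 0).astype(float)     # binary labels in {0, 1}

lam, step = 0.1, 0.1                   # assumed regularization weight and step size
w = np.zeros(p)
for _ in range(300):
    prob = 1.0 / (1.0 + np.exp(-(X @ w)))          # logistic probabilities
    grad = X.T @ (prob - y) / n                    # gradient of the logistic loss
    w -= step * grad
    # soft threshold: the proximal operator of the L1 penalty (induces sparsity)
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
```

    The recovered `w` is sparse with its largest entries on the true support, mirroring the joint classification/feature-selection goal.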

    Sparse Multinomial Logistic Regression via Approximate Message Passing

    For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR). First, we propose two algorithms based on the Hybrid Generalized Approximate Message Passing (HyGAMP) framework: one finds the maximum a posteriori (MAP) linear classifier and the other finds an approximation of the test-error-rate minimizing linear classifier. Then we design computationally simplified variants of these two algorithms. Next, we detail methods to tune the hyperparameters of their assumed statistical models using Stein's unbiased risk estimate (SURE) and expectation-maximization (EM), respectively. Finally, using both synthetic and real-world datasets, we demonstrate improved error-rate and runtime performance relative to existing state-of-the-art approaches to sparse MLR.
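    For orientation, the multinomial logistic (softmax) model underlying sparse MLR can be sketched directly; this is not the HyGAMP algorithm itself, and the sizes and sparse weight pattern below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
K, p, n = 3, 50, 20                     # classes, features, test points (assumed)
W = np.zeros((p, K))
W[:5] = rng.standard_normal((5, K))     # sparse: only a few feature rows are active

X = rng.standard_normal((n, p))
scores = X @ W                          # linear scores per class
scores -= scores.max(axis=1, keepdims=True)       # subtract max for numerical stability
P = np.exp(scores)
P /= P.sum(axis=1, keepdims=True)                 # softmax class posteriors
y_hat = P.argmax(axis=1)                          # plug-in class decision
```

    A MAP linear classifier corresponds to estimating a sparse `W` from labeled data under a sparsity-promoting prior; the test-error-rate minimizing variant instead targets the decision rule `y_hat` directly.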

    Parametric Bilinear Generalized Approximate Message Passing

    We propose a scheme to estimate the parameters $b_i$ and $c_j$ of the bilinear form $z_m=\sum_{i,j} b_i z_m^{(i,j)} c_j$ from noisy measurements $\{y_m\}_{m=1}^M$, where $y_m$ and $z_m$ are related through an arbitrary likelihood function and the $z_m^{(i,j)}$ are known. Our scheme is based on generalized approximate message passing (G-AMP): it treats $b_i$ and $c_j$ as random variables and $z_m^{(i,j)}$ as an i.i.d.\ Gaussian 3-way tensor in order to derive a tractable simplification of the sum-product algorithm in the large-system limit. It generalizes previous instances of bilinear G-AMP, such as those that estimate matrices $\boldsymbol{B}$ and $\boldsymbol{C}$ from a noisy measurement of $\boldsymbol{Z}=\boldsymbol{BC}$, allowing the application of AMP methods to problems such as self-calibration, blind deconvolution, and matrix compressive sensing. Numerical experiments confirm the accuracy and computational efficiency of the proposed approach.
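    The bilinear measurement model $z_m=\sum_{i,j} b_i z_m^{(i,j)} c_j$ is a tensor contraction and can be written in one line; the dimensions below and the additive Gaussian noise (one possible choice of likelihood) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
M, Nb, Nc = 6, 3, 4                    # measurements, length of b, length of c (assumed)
b = rng.standard_normal(Nb)            # unknown parameters b_i
c = rng.standard_normal(Nc)            # unknown parameters c_j
Zt = rng.standard_normal((M, Nb, Nc))  # known 3-way tensor z_m^{(i,j)}

# z_m = sum_{i,j} b_i * z_m^{(i,j)} * c_j, for all m at once
z = np.einsum('i,mij,j->m', b, Zt, c)

# noisy measurements; AWGN is just one instance of the arbitrary likelihood
y = z + 0.01 * rng.standard_normal(M)
```

    The matrix special case $\boldsymbol{Z}=\boldsymbol{BC}$ corresponds to choosing the tensor so that each $z_m$ picks out one entry of the product $\boldsymbol{BC}$.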

    Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing

    In this work, the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While a handful of Bayesian dynamic CS algorithms have been proposed in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the result of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems.
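    A signal model of the kind described, with support correlation modeled by a per-coefficient Markov chain and amplitude correlation by an AR(1) process, can be simulated as follows. All rates and dimensions are assumed values, and this sketch only generates the measurements; it does not implement DCS-AMP:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, T = 200, 60, 10                  # signal dim, measurements/frame, time steps
p_on, p01 = 0.10, 0.05                 # sparsity rate and support "birth" prob (assumed)
p10 = p01 * (1 - p_on) / p_on          # "death" prob keeping the support rate stationary
alpha = 0.9                            # AR(1) amplitude correlation across time

A = rng.standard_normal((M, N)) / np.sqrt(M)    # sub-Nyquist linear measurement matrix
s = rng.random(N) < p_on                        # initial support
theta = rng.standard_normal(N)                  # latent amplitudes

Y = np.zeros((M, T))
for t in range(T):
    x = s * theta                               # sparse signal at time t
    Y[:, t] = A @ x + 0.01 * rng.standard_normal(M)
    births = (~s) & (rng.random(N) < p01)       # Markov support transitions
    deaths = s & (rng.random(N) < p10)
    s = (s | births) & ~deaths
    theta = alpha * theta + np.sqrt(1 - alpha**2) * rng.standard_normal(N)
```

    Filtering estimates `x` at time `t` from `Y[:, :t+1]` only, while smoothing uses all frames; the transition probabilities and `alpha` are the kind of parameters the EM procedure would learn from data.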