
    Nonnegative approximations of nonnegative tensors

    We study the decomposition of a nonnegative tensor into a minimal sum of outer products of nonnegative vectors, and the associated parsimonious naive Bayes probabilistic model. We show that the corresponding approximation problem, which is central to nonnegative PARAFAC, always has optimal solutions. The result holds for any choice of norms and, under a mild assumption, even for Bregman divergences. Comment: 14 pages
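As a concrete illustration of the approximation problem, the sketch below computes a nonnegative rank-one approximation of a small 3-way tensor by projected alternating least squares. The update rule and the example tensor are illustrative choices of mine, not the paper's construction or proof technique.

```python
import numpy as np

def nn_rank1_als(T, iters=100, seed=0):
    # Nonnegative rank-1 approximation of a 3-way tensor by
    # projected alternating least squares (an illustrative sketch).
    rng = np.random.default_rng(seed)
    a, b, c = (rng.random(n) for n in T.shape)
    for _ in range(iters):
        # exact least-squares update for each factor, clipped to >= 0
        a = np.maximum(np.einsum('ijk,j,k->i', T, b, c) / (b @ b * (c @ c)), 0)
        b = np.maximum(np.einsum('ijk,i,k->j', T, a, c) / (a @ a * (c @ c)), 0)
        c = np.maximum(np.einsum('ijk,i,j->k', T, a, b) / (a @ a * (b @ b)), 0)
    return a, b, c

# a nonnegative rank-1 tensor is recovered exactly
a0, b0, c0 = np.array([1., 2.]), np.array([3., 1.]), np.array([1., 4.])
T = np.einsum('i,j,k->ijk', a0, b0, c0)
a, b, c = nn_rank1_als(T)
print(np.allclose(np.einsum('i,j,k->ijk', a, b, c), T))  # True
```

For higher ranks or general Bregman divergences the updates are more involved; this shows only the simplest nonnegative case.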

    On the typical rank of real binary forms

    We determine the rank of a general real binary form of degree d = 4 and d = 5. In the case d = 5, the possible ranks of such general forms are 3, 4, and 5. The existence of three typical ranks was unexpected. We also prove that a real binary form of degree d with d real roots has rank d. Comment: 12 pages, 2 figures
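A computable handle on the rank of a binary form is the middle catalecticant (Hankel) matrix of its coefficients, whose rank lower-bounds the symmetric rank. The sketch below assumes the standard convention f = Σᵢ binom(d, i) aᵢ x^(d-i) y^i for even degree d; it illustrates the lower bound only and is not the paper's method for determining typical ranks.

```python
import numpy as np

def catalecticant(coeffs):
    # Middle catalecticant of a binary form of even degree d = 2k,
    # written f = sum_i binom(d, i) a_i x^(d-i) y^i.
    # Its rank is a lower bound on the rank of f.
    a = np.asarray(coeffs, float)
    k = (len(a) - 1) // 2
    return np.array([[a[i + j] for j in range(k + 1)]
                     for i in range(k + 1)])

# f = x^4 has coefficients a = (1,0,0,0,0): catalecticant rank 1
C = catalecticant([1, 0, 0, 0, 0])
print(np.linalg.matrix_rank(C))                           # 1

# f = x^4 + y^4 has a = (1,0,0,0,1): catalecticant rank 2,
# matching its Waring decomposition with two terms
print(np.linalg.matrix_rank(catalecticant([1, 0, 0, 0, 1])))  # 2
```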

    Multiarray Signal Processing: Tensor decomposition meets compressed sensing

    We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that, with appropriate bounds on a measure of separation between radiating sources called coherence, one can always guarantee the existence and uniqueness of a best rank-r approximation of the tensor representing the signal. We also deduce a computationally feasible variant of Kruskal's uniqueness condition, where the coherence appears as a proxy for k-rank. Problems of sparsest recovery with an infinite continuous dictionary, lowest-rank tensor representation, and blind source separation are treated in a uniform fashion. The decomposition of the measurement tensor leads to simultaneous localization and extraction of radiating sources, in an entirely deterministic manner. Comment: 10 pages, 1 figure
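For context on why a proxy is useful: the k-rank (Kruskal rank) of a factor matrix, the largest k such that every set of k columns is linearly independent, is NP-hard to compute in general, while coherence is cheap. The brute-force sketch below, feasible only for tiny matrices, is my own illustration of the quantity being replaced, not an algorithm from the paper.

```python
from itertools import combinations
import numpy as np

def krank(A, tol=1e-10):
    # Kruskal rank: largest k such that every k columns of A are
    # linearly independent (brute force; tiny matrices only).
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k - 1
    return n

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.]])
print(krank(A))  # 2: every pair of columns is independent, all three are not
```

Kruskal's classical condition guarantees uniqueness of a rank-r CP decomposition when the k-ranks of the three factor matrices sum to at least 2r + 2.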

    Blind Multilinear Identification

    We discuss a technique that allows blind recovery of signals or blind identification of mixtures in instances where such recovery or identification was previously thought to be impossible: (i) closely located or highly correlated sources in antenna array processing, (ii) highly correlated spreading codes in CDMA radio communication, (iii) nearly dependent spectra in fluorescent spectroscopy. This has important implications: in the case of antenna array processing, it allows for joint localization and extraction of multiple sources from the measurement of a noisy mixture recorded on multiple sensors, in an entirely deterministic manner. In the case of CDMA, it allows the number of users to exceed the spreading gain. In the case of fluorescent spectroscopy, it allows for the detection of nearly identical chemical constituents. The proposed technique involves the solution of a bounded coherence low-rank multilinear approximation problem. We show that bounded coherence allows us to establish the existence and uniqueness of the recovered solution. We provide some statistical motivation for the approximation problem and discuss greedy approximation bounds. To provide the theoretical underpinnings for this technique, we develop a corresponding theory of sparse separable decompositions of functions, including notions of rank and nuclear norm that specialize to the usual ones for matrices and operators but apply also to hypermatrices and tensors. Comment: 20 pages, to appear in IEEE Transactions on Information Theory
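The coherence of a set of factor vectors, the largest absolute inner product between distinct unit-normalized columns, is the quantity being bounded above. A minimal sketch of its computation (my own illustration, not code from the paper):

```python
import numpy as np

def coherence(A):
    # Mutual coherence: max |<a_i, a_j>| over distinct columns,
    # after normalizing each column to unit length.
    Q = A / np.linalg.norm(A, axis=0)
    G = np.abs(Q.T @ Q)
    np.fill_diagonal(G, 0.0)
    return G.max()

# orthonormal columns: coherence 0; a repeated column: coherence 1
I = np.eye(3)
print(coherence(I))                # 0.0
print(coherence(I[:, [0, 0, 1]]))  # 1.0
```

Low coherence means well-separated sources; the paper's point is that bounding it yields existence and uniqueness guarantees for the low-rank multilinear approximation.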

    Exploring multimodal data fusion through joint decompositions with flexible couplings

    A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple data sets. Under this framework, a natural formulation of the data fusion problem is to cast it as a joint maximum a posteriori (MAP) estimation. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator. We also show how this framework can be adapted to tackle the joint decomposition of large data sets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models, ranging from simple additive Gaussian models to Gamma-type models with positive variables, and to the coupling of data sets that are inherently of different sizes due to the different resolutions of the measurement devices. Comment: 15 pages, 7 figures, revised version
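The simplest building block of such a MAP formulation is the linear Gaussian case, where the estimator has the classic Tikhonov closed form. The sketch below shows only that elementary ingredient, not the paper's coupled-tensor algorithms; the model y = Hx + n with n ~ N(0, σ²I) and prior x ~ N(0, P₀) is my illustrative assumption.

```python
import numpy as np

def map_gaussian(H, y, sigma2, P0):
    # MAP estimate for y = H x + n, n ~ N(0, sigma2 I), x ~ N(0, P0):
    # x_hat = (H'H / sigma2 + P0^{-1})^{-1} H'y / sigma2
    A = H.T @ H / sigma2 + np.linalg.inv(P0)
    return np.linalg.solve(A, H.T @ y / sigma2)

# with a nearly flat prior, the MAP estimate recovers least squares
H = np.array([[1.], [1.]])
y = np.array([1., 1.])
x_hat = map_gaussian(H, y, 1.0, np.array([[1e12]]))
print(x_hat)  # ≈ [1.], the least-squares fit
```

Tightening the prior covariance P₀ shrinks the estimate toward zero, which is how prior (coupling) information enters the joint estimator.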

    Tensors: a Brief Introduction

    Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor decomposition plays a central role in the identification of underdetermined mixtures. Despite some similarities, CP and the Singular Value Decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief survey.
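One concrete difference worth recalling: for matrices, truncating the SVD gives the best low-rank approximation (Eckart-Young), whereas for tensors no analogous truncation is optimal in general. A minimal NumPy illustration of the matrix side of that contrast:

```python
import numpy as np

# For a matrix, the truncated SVD is the best rank-1 approximation,
# with error equal to the second singular value (Eckart-Young).
# No such one-shot truncation is optimal for tensor CP approximation.
A = np.array([[3., 0.],
              [0., 1.]])
U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 truncation
print(np.linalg.norm(A - A1))          # 1.0, the second singular value
```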

    Approximate matrix and tensor diagonalization by unitary transformations: convergence of Jacobi-type algorithms

    We propose a gradient-based Jacobi algorithm for a class of maximization problems on the unitary group, with a focus on the approximate diagonalization of complex matrices and tensors by unitary transformations. We provide weak convergence results and prove local linear convergence of this algorithm. The convergence results also apply to the case of real-valued tensors.
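To fix ideas, the matrix special case of a Jacobi-type method sweeps over index pairs and applies a plane rotation that zeroes one off-diagonal entry at a time. The sketch below is the classical cyclic Jacobi iteration for a real symmetric matrix, shown as background; the paper's algorithm (gradient-based, on the unitary group, for tensors as well) is more general.

```python
import numpy as np

def jacobi_diag(A, sweeps=10):
    # Cyclic Jacobi diagonalization of a real symmetric matrix by
    # plane (Givens) rotations; returns eigenvalues and the
    # accumulated orthogonal transform Q with Q.T @ A0 @ Q diagonal.
    A = A.copy()
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # rotation angle that zeroes the (p, q) entry
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = -s, s
                A = J.T @ A @ J
                Q = Q @ J
    return np.diag(A), Q

A = np.array([[2., 1.],
              [1., 2.]])
d, Q = jacobi_diag(A)
print(np.sort(d))  # the eigenvalues 1 and 3
```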

    Hankel low-rank matrix completion: performance of the nuclear norm relaxation

    The completion of matrices with missing values under a rank constraint is a non-convex optimization problem. A popular convex relaxation minimizes the nuclear norm (the sum of singular values) of the matrix. For this relaxation, an important question is whether the two optimization problems lead to the same solution. This question has been addressed in the literature mostly for random positions of the missing entries and random known entries. In this contribution, we analyze the case of structured matrices with a fixed pattern of missing values, namely Hankel matrix completion. We extend existing results on the completion of rank-one real Hankel matrices to the completion of rank-r complex Hankel matrices.
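The structure behind the rank constraint: a Hankel matrix built from a signal that is a sum of r exponentials has rank r (Kronecker's theorem), which is why low-rank Hankel completion models missing samples of such signals. A small NumPy illustration of the structure (the completion algorithm itself is not shown; the signal is an illustrative choice of mine):

```python
import numpy as np

def hankel_from_signal(x, cols):
    # Hankel matrix whose anti-diagonals carry the samples of x.
    rows = len(x) - cols + 1
    return np.array([[x[i + j] for j in range(cols)]
                     for i in range(rows)])

# a sum of r = 2 exponentials yields a Hankel matrix of rank 2
t = np.arange(8)
x = 0.9 ** t + 2 * (-0.5) ** t
H = hankel_from_signal(x, 4)        # 5 x 4 Hankel matrix
print(np.linalg.matrix_rank(H))     # 2
print(np.linalg.norm(H, 'nuc'))     # its nuclear norm, sum of the two
                                    # nonzero singular values
```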