
    On Flattenability of Graphs

    We consider a generalization of the concept of $d$-flattenability of graphs - introduced for the $l_2$ norm by Belk and Connelly - to general $l_p$ norms, with integer $p$, $1 \le p < \infty$, though many of our results work for $l_\infty$ as well. The following results are shown for graphs $G$, using notions of genericity, rigidity, and the generic $d$-dimensional rigidity matroid introduced by Kitson for frameworks in general $l_p$ norms, as well as the cones of vectors of pairwise $l_p^p$ distances of a finite point configuration in $d$-dimensional $l_p$ space: (i) $d$-flattenability of a graph $G$ is equivalent to the convexity of $d$-dimensional inherent Cayley configuration spaces for $G$, a concept introduced by the first author; (ii) $d$-flattenability and convexity of Cayley configuration spaces over specified non-edges of a $d$-dimensional framework are not generic properties of frameworks (in arbitrary dimension); (iii) $d$-flattenability of $G$ is equivalent to all of $G$'s generic frameworks being $d$-flattenable; (iv) existence of one generic $d$-flattenable framework for $G$ is equivalent to the independence of the edges of $G$, a generic property of frameworks; (v) the rank of $G$ equals the dimension of the projection of the $d$-dimensional stratum of the $l_p^p$ distance cone. We give stronger results for specific norms for $d = 2$: we show that (vi) 2-flattenable graphs for the $l_1$-norm (and the $l_\infty$-norm) form a larger class than 2-flattenable graphs for the Euclidean $l_2$-norm, and finally (vii) we prove further results towards characterizing 2-flattenability in the $l_1$-norm. A number of conjectures and open problems are posed.
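
    As an illustrative aside (not from the paper), result (v) can be probed numerically: at a generic configuration, the rank of the Jacobian of the $l_p^p$ edge-length map equals the dimension of the projection of the $d$-dimensional stratum of the $l_p^p$ distance cone onto the edges of $G$. Below is a minimal Python sketch, assuming numpy; the function names are hypothetical and the randomized rank estimate is only a proxy for genericity.

        import numpy as np

        def lpp_distance_vector(P, edges, p=2):
            # Vector of pairwise l_p^p distances of the configuration P over the given edges.
            return np.array([np.sum(np.abs(P[i] - P[j]) ** p) for i, j in edges])

        def generic_edge_map_rank(n, d, edges, p=2, trials=5):
            # Estimate the rank of the Jacobian of the l_p^p edge-length map at random
            # (near-generic) configurations; a numerical proxy for the rank of G in the
            # generic d-dimensional rigidity matroid for the l_p norm.
            best = 0
            for _ in range(trials):
                P = np.random.randn(n, d)
                J = np.zeros((len(edges), n * d))
                for row, (i, j) in enumerate(edges):
                    diff = P[i] - P[j]
                    grad = p * np.sign(diff) * np.abs(diff) ** (p - 1)
                    J[row, i * d:(i + 1) * d] = grad
                    J[row, j * d:(j + 1) * d] = -grad
                best = max(best, np.linalg.matrix_rank(J))
            return best

        # Example: K_4 in the plane under the Euclidean norm (p = 2).
        K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
        print(generic_edge_map_rank(4, 2, K4, p=2))  # expected 5 = 2*4 - 3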

    Nucleation-free 3D rigidity

    When all non-edge distances of a graph realized in $\mathbb{R}^{d}$ as a {\em bar-and-joint framework} are generically {\em implied} by the bar (edge) lengths, the graph is said to be {\em rigid} in $\mathbb{R}^{d}$. For $d=3$, characterizing rigid graphs, and determining implied non-edges and {\em dependent} edge sets, remains an elusive, long-standing open problem. One obstacle is to determine when implied non-edges can exist without non-trivial rigid induced subgraphs, i.e., {\em nucleations}, and how to deal with them. In this paper, we give general inductive construction schemes and proof techniques to generate {\em nucleation-free graphs} (i.e., graphs without any nucleation) with implied non-edges. As a consequence, we obtain (a) dependent graphs in 3D that have no nucleation; and (b) 3D nucleation-free {\em rigidity circuits}, i.e., minimally dependent edge sets in $d=3$. It additionally follows that true rigidity is strictly stronger than a tractable approximation to rigidity given by Sitharam and Zhou \cite{sitharam:zhou:tractableADG:2004}, based on an inductive combinatorial characterization. As an independently interesting byproduct, we obtain a new inductive construction for independent graphs in 3D. Currently, very few such inductive constructions are known, in contrast to 2D.
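
    By way of a hedged illustration (the paper's contribution is the inductive constructions, not this computation), generic independence and rigidity in 3D can be checked with high probability by evaluating the rank of the standard bar-and-joint rigidity matrix at a random configuration. A minimal Python sketch, assuming numpy; the helper names are illustrative.

        import numpy as np

        def rigidity_matrix(P, edges):
            # One row per bar (i, j): (p_i - p_j) in block i, (p_j - p_i) in block j.
            n, d = P.shape
            R = np.zeros((len(edges), n * d))
            for row, (i, j) in enumerate(edges):
                diff = P[i] - P[j]
                R[row, i * d:(i + 1) * d] = diff
                R[row, j * d:(j + 1) * d] = -diff
            return R

        def generically_independent(n, edges, d=3, trials=5):
            # Edges are generically independent iff the rigidity matrix has full row
            # rank at a generic configuration; random configurations suffice w.h.p.
            return any(np.linalg.matrix_rank(rigidity_matrix(np.random.randn(n, d), edges))
                       == len(edges) for _ in range(trials))

        def generically_rigid(n, edges, d=3, trials=5):
            # Rigid iff the rank reaches d*n - d*(d+1)/2 at a generic configuration.
            target = d * n - d * (d + 1) // 2
            return any(np.linalg.matrix_rank(rigidity_matrix(np.random.randn(n, d), edges))
                       == target for _ in range(trials))

        K5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
        print(generically_rigid(5, K5))             # True: K_5 is generically rigid in 3D
        print(generically_independent(5, K5))       # False: 10 edges exceed 3n - 6 = 9
        print(generically_independent(5, K5[:-1]))  # True: K_5 minus an edge is isostatic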

    An Incidence Geometry approach to Dictionary Learning

    We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a sparse representation of data points, by learning \emph{dictionary vectors} in terms of which the data points can be written as sparse linear combinations. We view this problem from a geometric perspective, treating the dictionary as the spanning set of a subspace arrangement, and focus on understanding the case when the underlying hypergraph of the subspace arrangement is specified. For this Fitted Dictionary Learning problem, we completely characterize the combinatorics of the associated subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, a combinatorial rigidity-type theorem is proven for a type of geometric incidence system. The theorem characterizes the hypergraphs of subspace arrangements that generically yield (a) at least one dictionary, and (b) a locally unique dictionary (i.e.\ at most a finite number of isolated dictionaries), of the specified size. We are unaware of prior application of combinatorial rigidity techniques in the setting of Dictionary Learning, or even in machine learning. We also provide a systematic classification of problems related to Dictionary Learning together with various algorithms, their assumptions, and performance.
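
    For contrast with the incidence-geometry formulation above, a minimal sketch of the usual optimization-based Dictionary Learning / Sparse Coding pipeline, assuming numpy and scikit-learn; the data, sizes, and parameters below are illustrative only and are not taken from the paper.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        # Synthetic data: 500 points in R^20, each a sparse combination of 8 hidden dictionary vectors.
        rng = np.random.default_rng(0)
        true_dict = rng.standard_normal((8, 20))
        codes = np.where(rng.random((500, 8)) < 0.25, rng.standard_normal((500, 8)), 0.0)
        X = codes @ true_dict

        learner = DictionaryLearning(n_components=8, transform_algorithm='omp',
                                     transform_n_nonzero_coefs=3, random_state=0)
        sparse_codes = learner.fit_transform(X)   # sparse representation of each data point
        learned_dict = learner.components_        # learned dictionary vectors, one per row
        print(sparse_codes.shape, learned_dict.shape)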