
    A Note on Systems of Linear Equations

    This note is a comment on reference [1] and a generalization of the method presented there. We consider a system of m linear equations in n unknowns x_1, x_2, ..., x_n:

    (1)  Σ_{j=1}^{n} a_{ij} x_j = c_i,   i = 1, 2, ..., m,   a_{ij}, c_i real,

    or A·x = c in matrix notation. We distinguish three cases: (I) there is no finite vector x satisfying (1) (inconsistent case); (II) there is a unique vector x satisfying (1); (III) there are infinitely many vectors x satisfying (1), such that their endpoints lie on some line, plane, or higher-dimensional linear manifold.
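    The trichotomy above is the classical rank criterion: case (I) occurs when rank(A) < rank([A | c]), case (II) when both ranks equal n, and case (III) when both ranks are equal but less than n. A minimal Python/NumPy sketch of that test (`classify_system` is an illustrative helper, not code from the note):

```python
import numpy as np

def classify_system(A, c):
    """Classify A·x = c by comparing rank(A) with rank of the
    augmented matrix [A | c] (illustrative helper, not from the note)."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float).reshape(-1, 1)
    n = A.shape[1]                                   # number of unknowns
    rank_A = np.linalg.matrix_rank(A)
    rank_Ac = np.linalg.matrix_rank(np.hstack([A, c]))
    if rank_A < rank_Ac:
        return "I"    # inconsistent: no solution
    if rank_A == n:
        return "II"   # unique solution
    return "III"      # solutions form a manifold of dimension n - rank_A

print(classify_system([[1, 0], [0, 1]], [1, 2]))   # II: unique
print(classify_system([[1, 1], [2, 2]], [1, 3]))   # I: inconsistent
print(classify_system([[1, 1], [2, 2]], [1, 2]))   # III: a line of solutions
```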

    Sampling Limits for Electron Tomography with Sparsity-exploiting Reconstructions

    Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts when projections are insufficient. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of l_1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data is acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fractionation theorem. Moreover, a limited tilt range of ±75° or less can introduce distorting artifacts into sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed.
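    The sparsity these algorithms exploit is typically sparsity of the image gradient, which the TV seminorm measures. A small NumPy illustration of the anisotropic TV objective (a sketch, not the paper's simulation code): a piecewise-constant phantom scores far lower than noise of comparable amplitude, which is why TV minimization favors such structures.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences
    between horizontally and vertically adjacent pixels."""
    img = np.asarray(img, dtype=float)
    dx = np.abs(np.diff(img, axis=1)).sum()   # horizontal jumps
    dy = np.abs(np.diff(img, axis=0)).sum()   # vertical jumps
    return dx + dy

# A piecewise-constant phantom (sparse gradient) has low TV ...
flat = np.zeros((8, 8))
flat[2:6, 2:6] = 1.0
# ... while random noise of the same range has much higher TV.
noisy = np.random.default_rng(0).random((8, 8))
print(total_variation(flat), total_variation(noisy))
```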

    Comparative Analysis of the Major Polypeptides from Liver Gap Junctions and Lens Fiber Junctions

    Gap junctions from rat liver and fiber junctions from bovine lens have similar septilaminar profiles when examined by thin-section electron microscopy and differ only slightly with respect to the packing of intramembrane particles in freeze-fracture images. These similarities have often led to lens fiber junctions being referred to as gap junctions. Junctions from both sources were isolated as enriched subcellular fractions, and their major polypeptide components were compared biochemically and immunochemically. The major liver gap junction polypeptide has an apparent molecular weight of 27,000, while a 25,000-dalton polypeptide is the major component of lens fiber junctions. The two polypeptides are not homologous when compared by partial peptide mapping in SDS. In addition, there is no detectable antigenic similarity between the two polypeptides by immunochemical criteria using antibodies to the 25,000-dalton lens fiber junction polypeptide. Thus, in spite of the ultrastructural similarities, the gap junction and the lens fiber junction are composed of distinctly different polypeptides, suggesting that the lens fiber junction contains a unique gene product and potentially has different physiological properties.

    Reversible Architectures for Arbitrarily Deep Residual Neural Networks

    Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on the stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can, in theory, go arbitrarily deep. The reversibility property allows a memory-efficient implementation that does not need to store the activations of most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100, and STL-10, with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using less training data.

    Comment: Accepted at AAAI 2018
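    The ODE interpretation identifies a residual block x_{t+1} = x_t + h·f(x_t, θ_t) with one forward-Euler step of dx/dt = f(x, θ). A toy NumPy sketch of that reading (the tanh layer and the shared weights are illustrative assumptions, not the paper's architectures): doubling the depth while halving the step size tracks the same continuous trajectory.

```python
import numpy as np

def f(x, W):
    # Toy layer standing in for the residual branch f(x, theta).
    return np.tanh(W @ x)

def residual_forward(x, weights, h):
    # Each residual block is one forward-Euler step of dx/dt = f(x, W).
    for W in weights:
        x = x + h * f(x, W)
    return x

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4)) * 0.1   # small weights: mild dynamics
x0 = rng.standard_normal(4)

# Same total "time" h * depth = 1.0; the finer discretization should
# land close to the coarser one if both track the underlying ODE.
coarse = residual_forward(x0, [W] * 8, h=0.125)
fine = residual_forward(x0, [W] * 16, h=0.0625)
print(np.linalg.norm(coarse - fine))   # small discretization gap
```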

    Abnormal flowers and pattern formation in floral

    “From our acquaintance with this abnormal metamorphosis, we are enabled to unveil the secrets that normal metamorphosis conceals from us, and to see distinctly what, from the regular course of development, we can only infer.” - J. W. von Goethe (1790)