    Linearized Alternating Direction Method with Adaptive Penalty and Warm Starts for Fast Solving Transform Invariant Low-Rank Textures

    Transform Invariant Low-rank Textures (TILT) is a novel and powerful tool that can effectively rectify a rich class of low-rank textures in 3D scenes from 2D images despite significant deformation and corruption. The existing algorithm for solving TILT is based on the alternating direction method (ADM). It suffers from high computational cost and is not theoretically guaranteed to converge to a correct solution. In this paper, we propose a novel algorithm to speed up solving TILT, with guaranteed convergence. Our method is based on the recently proposed linearized alternating direction method with adaptive penalty (LADMAP). To further reduce computation, warm starts are also introduced to initialize the variables better and to cut the cost of singular value decomposition. Extensive experimental results on both synthetic and real data demonstrate that this new algorithm works much more efficiently and robustly than the existing algorithm. It can be at least five times faster than the previous method.
    Comment: Accepted by International Journal of Computer Vision (IJCV).
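
    The iteration pattern the abstract refers to is easy to sketch. Below is a minimal, hypothetical Python illustration of an ADM-style solver with the adaptive penalty schedule the paper builds on, applied to the simpler Robust PCA problem min ||A||_* + lam*||E||_1 s.t. A + E = D. For this constraint the subproblems have closed forms, so no linearization is needed; in TILT the constraint also involves the image transform, which is where LADMAP's linearization matters. All function names and parameter values here are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def svt(X, tau):
            # Singular value thresholding: proximal operator of tau * nuclear norm.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def shrink(X, tau):
            # Soft thresholding: proximal operator of tau * l1 norm.
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def adm_rpca(D, lam=None, rho=1.5, mu_max=1e6, tol=1e-7, max_iter=500):
            # Alternating updates with the adaptive penalty schedule
            # mu_{k+1} = min(mu_max, rho * mu_k); values are illustrative only.
            m, n = D.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
            mu = 1.25 / np.linalg.norm(D, 2)   # common heuristic initialization
            for _ in range(max_iter):
                A = svt(D - E + Y / mu, 1.0 / mu)       # low-rank update
                E = shrink(D - A + Y / mu, lam / mu)    # sparse-error update
                R = D - A - E                           # constraint residual
                Y = Y + mu * R                          # dual (multiplier) ascent
                mu = min(mu_max, rho * mu)              # adaptive penalty
                if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
                    break
            return A, E

    In this picture, the warm starts described in the abstract would amount to carrying A, E, and Y over between outer iterations instead of re-zeroing them, which also lets a partial SVD exploit the previous rank estimate and so reduces the SVD cost.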

    Completing Low-Rank Matrices with Corrupted Samples from Few Coefficients in General Basis

    Subspace recovery from corrupted and missing data is crucial for various applications in signal processing and information theory. To complete missing values and detect column corruptions, existing robust Matrix Completion (MC) methods mostly concentrate on recovering a low-rank matrix from a few corrupted coefficients w.r.t. the standard basis, which, however, does not apply to more general bases, e.g., the Fourier basis. In this paper, we prove that the range space of an $m\times n$ matrix of rank $r$ can be exactly recovered from a few coefficients w.r.t. a general basis, even when $r$ and the number of corrupted samples are both as high as $O(\min\{m,n\}/\log^3(m+n))$. Our model covers previous ones as special cases, and robust MC can recover the intrinsic matrix at a higher rank. Moreover, we suggest a universal choice of the regularization parameter, namely $\lambda=1/\sqrt{\log n}$. Our $\ell_{2,1}$ filtering algorithm, which has theoretical guarantees, further reduces the computational cost of our model. As an application, we also find that the solutions to extended robust Low-Rank Representation and to our extended robust MC are mutually expressible, so both our theory and algorithm can be applied to the subspace clustering problem with missing values under certain conditions. Experiments verify our theories.
    Comment: To appear in IEEE Transactions on Information Theory.
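
    Two concrete pieces of this abstract translate directly to code: the universal regularization parameter $\lambda=1/\sqrt{\log n}$, and the column-wise shrinkage (the proximal operator of the $\ell_{2,1}$ norm) that underlies $\ell_{2,1}$-style filtering and column-corruption detection. The sketch below is a hedged illustration under those assumptions; the function name is hypothetical and this is not the authors' code.

        import numpy as np

        def l21_prox(X, tau):
            # Proximal operator of tau * ||.||_{2,1}: shrink each column's norm,
            # zeroing columns whose energy falls below tau (candidate corruptions).
            norms = np.linalg.norm(X, axis=0)
            scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
            return X * scale

        n = 500                           # number of columns in the data matrix
        lam = 1.0 / np.sqrt(np.log(n))    # the paper's universal choice of lambda

    Shrinking whole columns rather than individual entries is what makes the $\ell_{2,1}$ penalty suited to detecting column corruptions, as opposed to the entrywise $\ell_1$ penalty used for scattered errors.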