
    The structure of ordinary: Hui vernacular settlements and architecture in China

    Paper presented at: Session 8: Psychosocial dimensions of architecture and urban planning

    Linearized Alternating Direction Method with Adaptive Penalty and Warm Starts for Fast Solving Transform Invariant Low-Rank Textures

    Transform Invariant Low-rank Textures (TILT) is a novel and powerful tool that can effectively rectify a rich class of low-rank textures in 3D scenes from 2D images despite significant deformation and corruption. The existing algorithm for solving TILT is based on the alternating direction method (ADM). It suffers from high computational cost and is not theoretically guaranteed to converge to a correct solution. In this paper, we propose a novel algorithm to speed up solving TILT, with guaranteed convergence. Our method is based on the recently proposed linearized alternating direction method with adaptive penalty (LADMAP). To further reduce computation, warm starts are also introduced to initialize the variables better and to cut the cost of singular value decomposition. Extensive experimental results on both synthetic and real data demonstrate that this new algorithm works much more efficiently and robustly than the existing algorithm. It can be at least five times faster than the previous method.
    Comment: Accepted by International Journal of Computer Vision (IJCV)
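    The abstract describes the algorithmic ingredients only at a high level. As an illustration of the general pattern, below is a minimal sketch of an ADM-style loop with an adaptive penalty and optional warm starts, applied to the closely related robust PCA problem (min ||A||_* + lambda*||E||_1 subject to D = A + E). It is not the paper's LADMAP for TILT, which additionally linearizes over the image transform; the function names, the simple geometric penalty-growth rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def adm_rpca_warm(D, lam=None, rho=1.5, mu=1e-3, mu_max=1e10,
                  tol=1e-7, max_iter=500, A0=None, E0=None):
    """Recover D ~ A (low rank) + E (sparse) with an ADM-style loop.
    A0/E0 allow warm starts; mu is grown geometrically each iteration
    (a common heuristic, simpler than the LADMAP penalty rule)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    A = np.zeros_like(D) if A0 is None else A0.copy()
    E = np.zeros_like(D) if E0 is None else E0.copy()
    Y = np.zeros_like(D)                      # Lagrange multiplier
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)     # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)  # sparse update
        R = D - A - E                         # constraint residual
        Y = Y + mu * R                        # dual ascent step
        mu = min(rho * mu, mu_max)            # adaptive penalty growth
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return A, E
```

    Warm starts matter here because each iteration is dominated by the SVD inside svt: initializing A and E near a solution (e.g., from a coarser scale or a previous frame) reduces the number of iterations, and hence the number of SVDs, needed to reach the tolerance.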

    Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction

    In recent years there has been a surge of interest in applying distant supervision (DS) to automatically generate training data for relation extraction (RE). In this paper, we study what limits the performance of DS-trained neural models, conduct thorough analyses, and identify a factor that can greatly influence performance: shifted label distribution. Specifically, we find that this problem commonly exists in real-world DS datasets, and that without special handling, typical DS-RE models cannot automatically adapt to this shift and thus suffer deteriorated performance. To further validate our intuition, we develop a simple yet effective adaptation method for DS-trained models, bias adjustment, which updates models learned over the source domain (i.e., the DS training set) with a label distribution estimated on the target domain (i.e., the test set). Experiments demonstrate that bias adjustment achieves consistent performance gains on DS-trained models, especially on neural models, with up to a 23% relative F1 improvement, which verifies our assumptions. Our code and data can be found at \url{https://github.com/INK-USC/shifted-label-distribution}.
    Comment: 13 pages (10-page paper plus 3-page appendix). Appears at EMNLP 2019.
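    As a concrete illustration of adjusting a trained model with an estimated target label distribution, here is a minimal sketch that reweights a classifier's output probabilities by the ratio of a target-domain label prior to the source (DS training set) prior. This is a generic prior-shift correction under assumed class priors, not necessarily the exact bias-adjustment procedure from the paper (see the linked repository for that); all names and numbers below are illustrative.

```python
import numpy as np

def bias_adjust(probs, source_prior, target_prior, eps=1e-12):
    """Reweight per-class probabilities by the ratio of the target
    label prior to the source (training) label prior, then renormalize.

    probs: (n_examples, n_classes) softmax outputs of the trained model.
    """
    ratio = (target_prior + eps) / (source_prior + eps)
    adjusted = probs * ratio                          # per-class reweighting
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Toy usage: a model trained where class 0 dominates, evaluated where
# the label distribution has shifted toward a balanced split.
probs = np.array([[0.7, 0.3], [0.6, 0.4]])
source_prior = np.array([0.8, 0.2])   # estimated from the DS training set
target_prior = np.array([0.5, 0.5])   # estimated on the target domain
print(bias_adjust(probs, source_prior, target_prior))
```

    Under this correction, classes that are over-represented in the noisy DS training set have their predicted probabilities scaled down, so the model's decisions better match the target-domain label distribution.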

    A Note on the Maximum Genus of Graphs with Diameter 4

    Let G be a simple graph with diameter four. If G does not contain a complete subgraph K3 of order three