
    Deep learning for extracting protein-protein interactions from biomedical literature

    State-of-the-art methods for protein-protein interaction (PPI) extraction are primarily feature-based or kernel-based, leveraging lexical and syntactic information. How to incorporate such knowledge into recent deep learning methods, however, remains an open question. In this paper, we propose a multichannel dependency-based convolutional neural network model (McDepCNN). It applies one channel to the embedding vector of each word in the sentence, and another channel to the embedding vector of the head of the corresponding word. The model can therefore draw on richer information from the different channels. Experiments on two public benchmarking datasets, AIMed and BioInfer, demonstrate that McDepCNN compares favorably to state-of-the-art rich-feature and single-kernel-based methods. In addition, McDepCNN achieves a 24.4% relative improvement in F1-score over the state-of-the-art methods on cross-corpus evaluation and a 12% improvement in F1-score over kernel-based methods on "difficult" instances. These results suggest that McDepCNN generalizes more easily over different corpora and is capable of capturing long-distance features in sentences.
    Comment: Accepted for publication in Proceedings of the 2017 Workshop on Biomedical Natural Language Processing, 10 pages, 2 figures, 6 tables
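    As an illustration of the two-channel idea sketched above, the following is a minimal PyTorch-style model, not the authors' McDepCNN implementation: the class name, layer sizes, and the final classifier are assumptions, and the head channel simply reuses the word-embedding table on the vocabulary ids of each token's syntactic head.

```python
# Illustrative sketch only, not the authors' McDepCNN code. Assumes
# pre-tokenized sentences and head-word ids from a dependency parse;
# all hyperparameters are placeholders.
import torch
import torch.nn as nn

class TwoChannelDepCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, num_filters=100,
                 kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Two input channels: word embeddings and head-word embeddings.
        self.conv = nn.Conv2d(2, num_filters, (kernel_size, emb_dim))
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, word_ids, head_ids):
        # word_ids, head_ids: (batch, seq_len); head_ids[b, j] is the
        # vocabulary id of token j's syntactic head.
        words = self.embed(word_ids)             # (B, T, emb_dim)
        heads = self.embed(head_ids)             # (B, T, emb_dim)
        x = torch.stack([words, heads], dim=1)   # (B, 2, T, emb_dim)
        x = torch.relu(self.conv(x)).squeeze(3)  # (B, filters, T-k+1)
        x = torch.max(x, dim=2).values           # max-over-time pooling
        return self.fc(x)                        # PPI vs. no-PPI logits
```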

    Personalized neural language models for real-world query auto completion

    Query auto completion (QAC) systems are a standard part of search engines in industry, helping users formulate their queries. Such systems update their suggestions after the user types each character, predicting the user's intent from various signals, one of the most common being popularity. Recently, deep learning approaches have been proposed for the QAC task, specifically to address the main limitation of previous popularity-based methods: the inability to predict unseen queries. In this work we improve on previous methods based on neural language modeling, with the goal of building an end-to-end system. We particularly focus on using real-world data by integrating user information for personalized suggestions when possible. We also make use of time information and study how to increase diversity in the suggestions while examining the impact on scalability. Our empirical results demonstrate a marked improvement on two separate datasets over previous best methods in both accuracy and scalability, taking a step towards neural query auto-completion in production search engines.
    Comment: To appear in NAACL-HLT 2018
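    A minimal sketch of the kind of personalized neural language model described above: a character-level LSTM scores candidate completions of the typed prefix, with an optional user embedding concatenated to every input step. The model class, dimensions, and the scoring helper below are illustrative assumptions rather than the paper's architecture.

```python
# Illustrative sketch only, not the paper's exact model. A character-level
# LSTM language model ranks candidate completions; a user embedding
# personalizes the suggestions when user information is available.
import torch
import torch.nn as nn

class CharQACModel(nn.Module):
    def __init__(self, num_chars, num_users, char_dim=64, user_dim=32,
                 hidden_dim=256):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, char_dim)
        self.user_embed = nn.Embedding(num_users + 1, user_dim)  # index 0 = anonymous
        self.lstm = nn.LSTM(char_dim + user_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_chars)

    def forward(self, char_ids, user_id):
        # char_ids: (batch, seq_len) characters of the prefix plus a candidate.
        # user_id:  (batch,) user index, or 0 for anonymous sessions.
        chars = self.char_embed(char_ids)             # (B, T, char_dim)
        user = self.user_embed(user_id).unsqueeze(1)  # (B, 1, user_dim)
        user = user.expand(-1, chars.size(1), -1)     # (B, T, user_dim)
        h, _ = self.lstm(torch.cat([chars, user], dim=-1))
        return self.out(h)  # per-position logits over the next character

def completion_log_prob(model, char_ids, user_id):
    # Score a candidate completion by summing next-character log-probabilities;
    # candidates are then ranked by this score (optionally mixed with popularity).
    logits = model(char_ids[:, :-1], user_id)
    log_probs = torch.log_softmax(logits, dim=-1)
    target = char_ids[:, 1:].unsqueeze(-1)
    return log_probs.gather(2, target).squeeze(-1).sum(dim=1)
```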

    ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases

    The chest X-ray is one of the most commonly accessible radiological examinations for screening and diagnosis of many lung diseases. A tremendous number of X-ray imaging studies accompanied by radiological reports are accumulated and stored in many modern hospitals' Picture Archiving and Communication Systems (PACS). On the other hand, it is still an open question how this kind of hospital-scale knowledge database of invaluable imaging informatics (i.e., loosely labeled data) can be used to facilitate data-hungry deep learning paradigms in building truly large-scale, high-precision computer-aided diagnosis (CAD) systems. In this paper, we present a new chest X-ray database, namely "ChestX-ray8", which comprises 108,948 frontal-view X-ray images of 32,717 unique patients with eight disease image labels text-mined from the associated radiological reports using natural language processing (each image can have multiple labels). Importantly, we demonstrate that these commonly occurring thoracic diseases can be detected and even spatially located via a unified weakly-supervised multi-label image classification and disease localization framework, which is validated using our proposed dataset. Although the initial quantitative results are promising, deep convolutional neural network based "reading chest X-rays" (i.e., recognizing and locating the common disease patterns trained with only image-level labels) remains a strenuous task for fully-automated high-precision CAD systems. Data download link: https://nihcc.app.box.com/v/ChestXray-NIHCC
    Comment: CVPR 2017 spotlight; V1: CVPR submission + supplementary; V2: statistics and benchmark results on the published ChestX-ray14 dataset are updated in Appendix B; V3: minor correction; V4: new data download link updated: https://nihcc.app.box.com/v/ChestXray-NIHCC; V5: updated benchmark results on the published data split in the appendix
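    A minimal sketch of the general weakly-supervised setup described above, written in PyTorch: a CNN backbone, global pooling, a per-disease sigmoid head trained from image-level labels only, and class-activation-style heatmaps for coarse localization. The backbone choice, pooling, and layer names are assumptions, not the paper's exact framework.

```python
# Illustrative sketch only, not the paper's exact framework. Multi-label
# classification from image-level labels, with class-activation-map-style
# heatmaps reused for weakly-supervised localization.
import torch
import torch.nn as nn
import torchvision.models as models

class WeaklySupervisedCXR(nn.Module):
    def __init__(self, num_labels=8):
        super().__init__()
        backbone = models.resnet50()
        # Keep the convolutional trunk, drop the average pool and FC head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048, num_labels)

    def forward(self, x):
        fmap = self.features(x)              # (B, 2048, H', W')
        pooled = self.pool(fmap).flatten(1)  # (B, 2048)
        logits = self.classifier(pooled)     # (B, num_labels)
        # One spatial heatmap per disease label, obtained by projecting the
        # feature maps with the classifier weights (class activation maps).
        cams = torch.einsum('bchw,kc->bkhw', fmap, self.classifier.weight)
        return logits, cams

# Each disease is treated as an independent binary label at training time.
criterion = nn.BCEWithLogitsLoss()
```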

    On the distribution of Jacobi sums

    Let $\mathbf{F}_q$ be a finite field of $q$ elements. For multiplicative characters $\chi_1,\dots,\chi_m$ of $\mathbf{F}_q^\times$, we let $J(\chi_1,\dots,\chi_m)$ denote the Jacobi sum. Nicholas Katz and Zhiyong Zheng showed that for $m=2$, the normalized Jacobi sum $q^{-1/2}J(\chi_1,\chi_2)$ ($\chi_1\chi_2$ nontrivial) is asymptotically equidistributed on the unit circle as $q\to\infty$, when $\chi_1$ and $\chi_2$ run through all nontrivial multiplicative characters of $\mathbf{F}_q^\times$. In this paper, we show a similar property for $m\ge 2$. More generally, we show that the normalized Jacobi sum $q^{-(m-1)/2}J(\chi_1,\dots,\chi_m)$ ($\chi_1\dotsm\chi_m$ nontrivial) is asymptotically equidistributed on the unit circle, when $\chi_1,\dots,\chi_m$ run through arbitrary sets of nontrivial multiplicative characters of $\mathbf{F}_q^\times$ with two of the sets being sufficiently large. The case $m=2$ answers a question of Shparlinski.
    Comment: 18 pages. v3: fixed some typos; v2: improved some bounds
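    For reference, under one common convention (the abstract does not spell out the definition, and the paper's normalization may differ), the Jacobi sum appearing above and its normalization can be written as follows.

```latex
% One common convention for the Jacobi sum of m multiplicative characters
% of F_q; the abstract above does not fix a convention explicitly.
\[
  J(\chi_1,\dots,\chi_m)
    \;=\; \sum_{\substack{x_1,\dots,x_m \in \mathbf{F}_q \\ x_1+\cdots+x_m = 1}}
          \chi_1(x_1)\cdots\chi_m(x_m).
\]
% When each \chi_i and the product \chi_1 \dotsm \chi_m are nontrivial,
% |J(\chi_1,\dots,\chi_m)| = q^{(m-1)/2}, so the normalized sum
\[
  q^{-(m-1)/2}\, J(\chi_1,\dots,\chi_m)
\]
% lies on the unit circle; the equidistribution statements above concern the
% angles of these points as the characters vary.
```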