Unsupervised Ranking Model for Entity Coreference Resolution
Coreference resolution is one of the first stages in deep language
understanding and its importance has been well recognized in the natural
language processing community. In this paper, we propose a generative,
unsupervised ranking model for entity coreference resolution by introducing
resolution mode variables. Our unsupervised system achieves a CoNLL F1
score of 58.44% on the English data from the CoNLL-2012 shared task (Pradhan
et al., 2012), outperforming the Stanford deterministic system (Lee et al.,
2013) by 3.01%.
Comment: Accepted by NAACL 2016
Enriching very large ontologies using the WWW
This paper explores the possibility of exploiting text on the World Wide Web
to enrich the concepts in existing ontologies. First, a method to
retrieve documents from the WWW related to a concept is described. These
document collections are used 1) to construct topic signatures (lists of
topically related words) for each concept in WordNet, and 2) to build
hierarchical clusters of the concepts (the word senses) that lexicalize a given
word. The overall goal is to overcome two shortcomings of WordNet: the lack of
topical links among concepts, and the proliferation of senses. Topic signatures
are validated on a word sense disambiguation task with good results, which are
improved when the hierarchical clusters are used.
Comment: 6 pages
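As a rough illustration of the topic-signature construction described above,
the sketch below ranks words that are over-represented in a concept's
retrieved documents relative to a background corpus; the frequency-ratio
weighting and whitespace tokenization are simplifying assumptions, not
necessarily the scoring the paper uses:

```python
# Illustrative topic-signature builder: words ranked by how much more
# frequent they are in the concept's documents than in a background
# corpus. The weighting is a simple smoothed frequency ratio.
from collections import Counter

def topic_signature(concept_docs, background_docs, top_k=20):
    concept = Counter(w for doc in concept_docs for w in doc.lower().split())
    background = Counter(w for doc in background_docs for w in doc.lower().split())
    c_total = sum(concept.values()) or 1
    b_total = sum(background.values()) or 1

    def weight(word):
        # Relative frequency in concept docs vs. (smoothed) background.
        return (concept[word] / c_total) / ((background[word] + 1) / b_total)

    return sorted(concept, key=weight, reverse=True)[:top_k]
```

The resulting ranked word lists play the role of the "lists of topically
related words" attached to each WordNet concept.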
Ontology-Aware Token Embeddings for Prepositional Phrase Attachment
Type-level word embeddings use the same set of parameters to represent all
instances of a word regardless of its context, ignoring the inherent lexical
ambiguity in language. Instead, we embed semantic concepts (or synsets) as
defined in WordNet and represent a word token in a particular context by
estimating a distribution over relevant semantic concepts. We use the new,
context-sensitive embeddings in a model for predicting prepositional phrase (PP)
attachments and jointly learn the concept embeddings and model parameters. We
show that using context-sensitive embeddings improves the accuracy of the PP
attachment model by 5.4% absolute points, which amounts to a 34.4% relative
reduction in errors.
Comment: ACL 2017
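The core representational idea here, a token embedding formed as an
expectation over synset embeddings, can be sketched as follows; the softmax
weighting over precomputed context scores and the array shapes are
assumptions for illustration, not the paper's architecture:

```python
# Illustrative sketch: a context-sensitive token embedding computed as
# the expectation of the word's WordNet synset embeddings under a
# context-conditioned distribution. Shapes and scoring are assumptions.
import numpy as np

def token_embedding(synset_vecs, context_scores):
    """synset_vecs: (num_synsets, dim) embeddings of the word's synsets.
    context_scores: (num_synsets,) unnormalized relevance of each synset
    given the surrounding sentence."""
    probs = np.exp(context_scores - context_scores.max())
    probs /= probs.sum()          # softmax over the word's synsets
    return probs @ synset_vecs    # expected concept embedding, shape (dim,)
```

Because the synset distribution depends on the context, two tokens of the
same ambiguous word receive different embeddings, which is the property the
PP-attachment model exploits.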
