Gated-Attention Architectures for Task-Oriented Language Grounding
To perform tasks specified by natural language instructions, autonomous
agents need to extract semantically meaningful representations of language and
map them to visual elements and actions in the environment. This problem is
called task-oriented language grounding. We propose an end-to-end trainable
neural architecture for task-oriented language grounding in 3D environments
which assumes no prior linguistic or perceptual knowledge and requires only raw
pixels from the environment and the natural language instruction as input. The
proposed model combines the image and text representations using a
Gated-Attention mechanism and learns a policy to execute the natural language
instruction using standard reinforcement and imitation learning methods. We
show the effectiveness of the proposed model on unseen instructions as well as
unseen maps, both quantitatively and qualitatively. We also introduce a novel
environment based on a 3D game engine to simulate the challenges of
task-oriented language grounding over a rich set of instructions and
environment states.
Comment: To appear in AAAI-1
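The Gated-Attention fusion described in the abstract can be sketched in a few lines: the instruction embedding is projected to one gate per convolutional channel, squashed to (0, 1) with a sigmoid, and multiplied elementwise into the image feature maps. The shapes and the projection matrix `W` below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def gated_attention(image_feats, instr_embedding, W):
    """Fuse image and instruction representations via Gated-Attention.

    image_feats: (d, H, W) convolutional feature maps from the image.
    instr_embedding: (k,) instruction encoding (e.g. from a recurrent net).
    W: (d, k) projection matrix (hypothetical; learned in practice).
    """
    # One gate per feature-map channel, squashed to (0, 1) with a sigmoid.
    gates = 1.0 / (1.0 + np.exp(-(W @ instr_embedding)))  # shape (d,)
    # Broadcast to (d, 1, 1) and gate each channel by elementwise
    # (Hadamard) product, so the instruction modulates which visual
    # features pass through to the policy.
    return image_feats * gates[:, None, None]
```

Because each gate lies strictly between 0 and 1, the instruction can only attenuate channels, never amplify them, which is what distinguishes this multiplicative fusion from plain concatenation of the two modalities.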
Knowledge-based Word Sense Disambiguation using Topic Models
Word Sense Disambiguation is an open problem in Natural Language Processing
which is particularly challenging and useful in the unsupervised setting where
all the words in any given text need to be disambiguated without using any
labeled data. Typically, WSD systems use the sentence or a small window of words
around the target word as the context for disambiguation because their
computational complexity scales exponentially with the size of the context. In
this paper, we leverage the formalism of topic models to design a WSD system
that scales linearly with the number of words in the context. As a result, our
system is able to utilize the whole document as the context for a word to be
disambiguated. The proposed method is a variant of Latent Dirichlet Allocation
in which the topic proportions for a document are replaced by synset
proportions. We further utilize the information in WordNet by assigning a
non-uniform prior to the synset distributions over words and a logistic-normal
prior to the document distributions over synsets. We evaluate the proposed method on
Senseval-2, Senseval-3, SemEval-2007, SemEval-2013 and SemEval-2015 English
All-Word WSD datasets and show that it outperforms the state-of-the-art
unsupervised knowledge-based WSD system by a significant margin.
Comment: To appear in AAAI-1
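The synset-proportion idea can be illustrated with a toy sketch: replace LDA's per-document topic proportions with per-document synset proportions, then score each candidate sense of a target word against the whole document rather than a small window. The synsets, vocabulary, and probabilities below are invented for illustration, and the update is a simplified EM-style iteration, not the paper's actual inference procedure:

```python
import numpy as np

# Hypothetical P(word | synset) tables standing in for WordNet-informed,
# non-uniform priors over words; real systems derive these from glosses
# and lemma counts.
word_given_synset = {
    "bank_finance": {"bank": 0.5, "loan": 0.3, "money": 0.2},
    "bank_river":   {"bank": 0.4, "river": 0.4, "water": 0.2},
}

def disambiguate(target, document, n_iters=20):
    """Pick a sense for `target` using the whole document as context."""
    synsets = list(word_given_synset)
    # Document-level synset proportions: the analogue of LDA's theta,
    # initialised uniformly and refined by soft-assigning every word
    # in the document to the synsets that explain it.
    theta = np.full(len(synsets), 1.0 / len(synsets))
    for _ in range(n_iters):
        counts = np.zeros(len(synsets))
        for w in document:
            likes = np.array([word_given_synset[s].get(w, 1e-6)
                              for s in synsets])
            post = theta * likes
            counts += post / post.sum()
        theta = counts / counts.sum()
    # Final score: document synset proportion times P(target | synset).
    likes = np.array([word_given_synset[s].get(target, 1e-6)
                      for s in synsets])
    return synsets[int(np.argmax(theta * likes))]
```

Because the per-word update is a single pass over the candidate synsets, the cost grows linearly with the number of context words, which is why the entire document can serve as context.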
