Sequence-to-Sequence Models Can Directly Translate Foreign Speech
We present a recurrent encoder-decoder deep neural network architecture that
directly translates speech in one language into text in another. The model does
not explicitly transcribe the speech into text in the source language, nor does
it require supervision from the ground truth source language transcription
during training. We apply a slightly modified sequence-to-sequence with
attention architecture that has previously been used for speech recognition and
show that it can be repurposed for this more complex task, illustrating the
power of attention-based models. A single model trained end-to-end obtains
state-of-the-art performance on the Fisher Callhome Spanish-English speech
translation task, outperforming a cascade of independently trained
sequence-to-sequence speech recognition and machine translation models by 1.8
BLEU points on the Fisher test set. In addition, we find that multi-task training of sequence-to-sequence speech translation and recognition models with a shared encoder network, which exploits the training data in both languages, improves performance by a further 1.4 BLEU points.

Comment: 5 pages, 1 figure. Interspeech 2017
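The shared-encoder multi-task setup described above can be made concrete with a short sketch. The following PyTorch code is illustrative only, not the authors' implementation: the layer sizes, vocabulary sizes, and module names (SharedSpeechEncoder, AttentionDecoder, multitask_loss) are assumptions. It shows the core idea of one speech encoder feeding two attention-based decoders, one producing source-language transcripts and one producing target-language translations, trained with a summed cross-entropy loss.

```python
import torch
import torch.nn as nn

class SharedSpeechEncoder(nn.Module):
    """Bidirectional LSTM over log-mel frames, shared by both tasks."""
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)

    def forward(self, feats):                     # feats: (B, T, n_mels)
        enc, _ = self.rnn(feats)                  # enc:   (B, T, 2*hidden)
        return enc

class AttentionDecoder(nn.Module):
    """Embedding -> attention over encoder states -> LSTM -> logits."""
    def __init__(self, vocab, enc_dim=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=1,
                                          kdim=enc_dim, vdim=enc_dim,
                                          batch_first=True)
        self.rnn = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, enc, tokens):               # tokens: (B, U), teacher-forced
        emb = self.embed(tokens)                  # (B, U, hidden)
        ctx, _ = self.attn(emb, enc, enc)         # attend over the speech encoding
        dec, _ = self.rnn(torch.cat([emb, ctx], dim=-1))
        return self.out(dec)                      # (B, U, vocab)

encoder = SharedSpeechEncoder()
asr_decoder = AttentionDecoder(vocab=1000)        # source-language targets
st_decoder = AttentionDecoder(vocab=1000)         # target-language targets
ce = nn.CrossEntropyLoss()

def multitask_loss(feats, src_tokens, tgt_tokens):
    """Sum of recognition and translation losses over one shared encoding."""
    enc = encoder(feats)
    asr_logits = asr_decoder(enc, src_tokens[:, :-1])
    st_logits = st_decoder(enc, tgt_tokens[:, :-1])
    return (ce(asr_logits.reshape(-1, asr_logits.size(-1)),
               src_tokens[:, 1:].reshape(-1)) +
            ce(st_logits.reshape(-1, st_logits.size(-1)),
               tgt_tokens[:, 1:].reshape(-1)))
```

In a setup like this, only the translation decoder is needed at inference time; the recognition decoder serves to regularize the shared encoder with source-language supervision during training.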
Learning Hard Alignments with Variational Inference
There has recently been significant interest in hard attention models for
tasks such as object recognition, visual captioning and speech recognition.
Hard attention can offer benefits over soft attention such as decreased
computational cost, but training hard attention models can be difficult because
of the discrete latent variables they introduce. Previous work used REINFORCE
and Q-learning to approach these issues, but those methods can provide
high-variance gradient estimates and be slow to train. In this paper, we tackle
the problem of learning hard attention for a sequential task using variational
inference methods, specifically the recently introduced VIMCO and NVIL.
Furthermore, we propose a novel baseline that adapts VIMCO to this setting. We
demonstrate our method on a phoneme recognition task in clean and noisy
environments and show that our method outperforms REINFORCE, with the
difference being greater for a more complicated task.
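For readers unfamiliar with VIMCO, its key ingredient is a per-sample leave-one-out baseline, which is the component the paper adapts. Below is a minimal sketch of the standard VIMCO learning-signal computation (Mnih and Rezende, 2016), not the paper's adapted baseline; the function name and tensor shapes are assumptions for illustration.

```python
import math
import torch

def vimco_learning_signals(logw):
    """Per-sample learning signals for the K-sample VIMCO objective.

    logw: (K, B) log importance weights, log w_i = log p(x, h_i) - log q(h_i | x),
    with K samples drawn per example in a batch of size B.
    """
    K = logw.size(0)
    # Multi-sample bound: L = log( (1/K) * sum_i w_i )
    L_hat = torch.logsumexp(logw, dim=0) - math.log(K)
    # Leave-one-out term: replace log w_i by the mean of the other K-1 log weights
    # (i.e., w_i by the geometric mean of the other weights).
    loo_mean = (logw.sum(dim=0, keepdim=True) - logw) / (K - 1)
    signals = torch.empty_like(logw)
    for i in range(K):
        swapped = logw.clone()
        swapped[i] = loo_mean[i]
        baseline_i = torch.logsumexp(swapped, dim=0) - math.log(K)
        signals[i] = L_hat - baseline_i
    # Detach signals before multiplying by grad log q(h_i | x) in the
    # REINFORCE-style term of the gradient estimator.
    return signals
```

Because each sample's baseline is built only from the other K-1 samples, it is highly correlated with that sample's learning signal yet leaves the gradient unbiased, which is what reduces variance relative to plain REINFORCE.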
