Attention-based Speech Recognizer

The reference implementation for the papers

End-to-End Attention-based Large Vocabulary Speech Recognition. Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, Yoshua Bengio (arxiv draft, ICASSP 2016)

and

Task Loss Estimation for Sequence Prediction. Dzmitry Bahdanau, Dmitriy Serdyuk, Philémon Brakel, Nan Rosemary Ke, Jan Chorowski, Aaron Courville, Yoshua Bengio (arxiv draft, submitted to ICLR 2016).

This code is no longer maintained

This codebase is based on outdated technologies (Theano, Blocks, etc.) and is no longer maintained. We recommend looking for more modern speech recognition implementations (see e.g. https://github.com/Alexander-H-Liu/End-to-end-ASR-Pytorch).

How to use

After setting up the dependencies described below, please proceed to exp/wsj for instructions on how to replicate our results on the Wall Street Journal (WSJ) dataset (available from the Linguistic Data Consortium as LDC93S6B and LDC94S13B).

Dependencies

Provided that you already have the WSJ dataset in HDF5 format, the models can be trained without Kaldi and PyFst.
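
As an illustration (not part of the repository), the snippet below shows one way to peek at an HDF5 dataset with h5py before training; the file name wsj.h5 is a placeholder, not the actual name or layout produced by the data-preparation scripts in exp/wsj.

```python
# Hypothetical sketch: inspect the contents of an HDF5 dataset with h5py.
# "wsj.h5" is a placeholder name; the real file layout is defined by the
# data-preparation scripts in exp/wsj.
import h5py

with h5py.File("wsj.h5", "r") as f:
    for name, node in f.items():
        if isinstance(node, h5py.Dataset):
            print(name, node.shape, node.dtype)
        else:
            print(name, "(group)")
```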

Installation

Subtrees

The repository contains custom-modified versions of Theano, Blocks, Fuel, picklable-itertools, and Blocks-extras as [subtrees](http://blogs.atlassian.com/2013/05/alternatives-to-git-submodule-git-subtree/). To ensure that these specific versions are used, source env.sh and, in addition, uninstall any regular installations of these packages that you may have. A quick way to check which versions are picked up is sketched below.
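
As a sanity check (not part of the repository), the sketch below assumes env.sh has been sourced and that Theano, Blocks, and Fuel are importable; it simply prints where each package is loaded from, which should be a path inside this repository rather than a system-wide site-packages directory.

```python
# Minimal sanity check: after sourcing env.sh, the bundled subtree versions
# should shadow any system-wide installations. Each printed path should point
# inside the attention-lvcsr checkout, not into site-packages.
import blocks
import fuel
import theano

for module in (theano, blocks, fuel):
    print(module.__name__, "->", module.__file__)
```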

License

MIT