NVIDIA / OpenSeq2Seq

Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
https://nvidia.github.io/OpenSeq2Seq
Apache License 2.0
1.54k stars · 372 forks

Would you mind telling me the WER on the LibriSpeech datasets? Thanks #468

Closed iamxiaoyubei closed 5 years ago

iamxiaoyubei commented 5 years ago

Would you mind telling me the WER on the LibriSpeech datasets (dev-clean / dev-other / test-clean / test-other, or all of them) for the ASR task? Thank you~

borisgin commented 5 years ago

Jasper-Large gets 3.61% WER with greedy decoding on the test-clean dataset. For more details, see our arXiv paper. For other models, see: ASR WER
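For readers unfamiliar with the metric being quoted, WER (word error rate) is the word-level Levenshtein distance between the reference transcript and the hypothesis, divided by the number of reference words. OpenSeq2Seq computes this internally during evaluation; the snippet below is only a minimal standalone sketch of the metric, not the toolkit's own implementation.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words -> 0.25 (i.e. 25% WER)
print(wer("the cat sat down", "the cat stood down"))
```

A greedy WER of 3.61% on test-clean therefore means roughly 3.6 word errors per 100 reference words, before any external language-model rescoring.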

iamxiaoyubei commented 5 years ago

Your results are really good!

Are these results obtained using only the LibriSpeech dataset for training? If not, could you tell me which datasets were used in total, or how many hours of audio they contain? Thank you!

borisgin commented 5 years ago

Only LibriSpeech.