NVIDIA / OpenSeq2Seq

Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
https://nvidia.github.io/OpenSeq2Seq
Apache License 2.0

Training details of jasper10x5_LibriSpeech_nvgrad #415

Closed · GabrielLin closed this issue 5 years ago

GabrielLin commented 5 years ago

Could you please share more details about the jasper10x5_LibriSpeech_nvgrad pre-trained model, such as the hardware used, the training time, and whether external or synthetic data was used? Thanks.

okuchaiev commented 5 years ago

@blisc, could you please comment on this?

blisc commented 5 years ago

The pre-trained model can be reproduced as is with the jasper10x5_LibriSpeech_nvgrad.py example config.
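For reference, a minimal sketch of how training with that config is typically launched in OpenSeq2Seq; the exact config path under example_configs/speech2text/ and the GPU count are assumptions here, and the multi-GPU command assumes Horovod is enabled in the config:

```bash
# Single-process training sketch (config path assumed, not verified against this repo layout).
python run.py --config_file=example_configs/speech2text/jasper10x5_LibriSpeech_nvgrad.py \
              --mode=train_eval --enable_logs

# Multi-GPU run via Horovod (assumes use_horovod: True is set in the config's base_params).
mpiexec -np 8 \
    python run.py --config_file=example_configs/speech2text/jasper10x5_LibriSpeech_nvgrad.py \
                  --mode=train_eval --enable_logs
```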

In addition, compared to older Jasper models:

GabrielLin commented 5 years ago

Thank you for the valuable information, @blisc.