NVIDIA / OpenSeq2Seq

Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
https://nvidia.github.io/OpenSeq2Seq
Apache License 2.0

Deep speech 2 training time #541

Open anhtuanluu opened 4 years ago

anhtuanluu commented 4 years ago

I trained on my own data, about 1000 hours of speech, using 8x Tesla V100-16GB GPUs with Horovod and mixed precision. With batch_size_per_gpu set to 32, training takes about 3.2 s per step, which works out to about 3.5 h per epoch. Is this expected? Can I reduce the training time?
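For reference, the reported numbers are internally consistent. A quick sanity check (all values taken from the post above; the derived average utterance length is an estimate, not a measured figure):

```python
# Sanity-check the reported training throughput.
# Inputs as stated in the post: 8 GPUs, batch_size_per_gpu=32,
# ~3.2 s per step, ~3.5 h per epoch, ~1000 h of audio.
gpus = 8
batch_per_gpu = 32
sec_per_step = 3.2
epoch_hours = 3.5
dataset_hours = 1000

global_batch = gpus * batch_per_gpu                      # utterances per step
steps_per_epoch = epoch_hours * 3600 / sec_per_step      # steps in one epoch
utterances_per_epoch = steps_per_epoch * global_batch    # ~= dataset size
avg_utt_sec = dataset_hours * 3600 / utterances_per_epoch

print(f"global batch:        {global_batch}")
print(f"steps per epoch:     {steps_per_epoch:.0f}")
print(f"utterances/epoch:    {utterances_per_epoch:.0f}")
print(f"avg utterance (est): {avg_utt_sec:.2f} s")
```

This implies roughly 1M utterances per epoch averaging ~3.6 s each, which is a plausible dataset shape, so the 3.5 h/epoch figure follows directly from the 3.2 s/step speed. Reducing epoch time therefore means either raising the global batch (more GPUs or a larger batch_size_per_gpu, memory permitting) or lowering the per-step cost.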