NVIDIA / OpenSeq2Seq

Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
https://nvidia.github.io/OpenSeq2Seq
Apache License 2.0

Time-to-train information for DeepSpeech2 #255

Closed · karakusc closed 5 years ago

karakusc commented 5 years ago

Are there time-to-train benchmarks available? Specifically, I am interested in the per-epoch wall-clock time of the DeepSpeech2 (large) model described here: https://nvidia.github.io/OpenSeq2Seq/html/speech-recognition/deepspeech2.html

How long does it take to train this model for 200 epochs on 8 GPUs (V100)?

vsl9 commented 5 years ago

We are not publishing time-to-train benchmarks. Since time-to-train numbers depend heavily on numerous hardware- and software-related factors (GPU, RAM, I/O bandwidth; TensorFlow, Horovod, CUDA, OS, and driver versions, etc.), it requires significant effort to measure and report them in a consistent manner. A better venue for such benchmarks might be MLPerf.org.
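In the absence of official numbers, one practical option is to time a few epochs on your own setup and extrapolate. Below is a minimal sketch of that idea; `train_one_epoch` is a hypothetical placeholder for whatever runs one full pass over the training data (it is not an OpenSeq2Seq API), and the 200-epoch projection simply scales the measured mean:

```python
import time

def time_epochs(train_one_epoch, num_epochs):
    """Measure wall-clock time per epoch for a user-supplied training pass.

    `train_one_epoch` is a hypothetical callable standing in for whatever
    runs one full pass over the training data on your hardware.
    """
    durations = []
    for epoch in range(num_epochs):
        start = time.monotonic()
        train_one_epoch(epoch)
        elapsed = time.monotonic() - start
        durations.append(elapsed)
        print(f"epoch {epoch}: {elapsed:.1f} s")
    mean = sum(durations) / len(durations)
    print(f"mean per-epoch time: {mean:.1f} s; "
          f"projected 200 epochs: {200 * mean / 3600:.2f} h")
    return durations

if __name__ == "__main__":
    # Usage with a fake training pass that just sleeps, to show the flow:
    import random

    def fake_epoch(epoch):
        time.sleep(random.uniform(0.1, 0.2))  # simulate work

    time_epochs(fake_epoch, num_epochs=3)
```

Timing only a handful of early epochs and extrapolating assumes per-epoch cost stays roughly constant, which holds for a fixed dataset and batch size but can be skewed by first-epoch warm-up (data caching, cuDNN autotuning), so it is common to discard the first measurement.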