NVIDIA / OpenSeq2Seq

Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
https://nvidia.github.io/OpenSeq2Seq
Apache License 2.0

Pre-trained transformer models #331

Closed. sundeepteki closed this issue 5 years ago.

sundeepteki commented 5 years ago

Hi,

Do you have an ETA for releasing the pre-trained Transformer models referenced in this part of the documentation?

"It is very good for neural machine translation tasks and base configuration achieves SacreBLEU of 26.4 on WMT 2014 English-to-German translation task ( checkpoint TBD ) while big model gets around 27.5."

vsl9 commented 5 years ago

Hi,

Thank you for catching this TBD in the documentation! You can find the checkpoints on the main machine translation page: https://nvidia.github.io/OpenSeq2Seq/html/machine-translation.html
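For anyone following along later, here is a minimal sketch of how a downloaded checkpoint's output could be checked against the SacreBLEU numbers quoted above, assuming a detokenized hypothesis file produced by an OpenSeq2Seq inference run and the `sacrebleu` Python package. The file names and the run.py invocation in the comments are illustrative placeholders, not commands taken from this thread; the machine translation page linked above has the exact instructions.

```python
# Minimal sketch (not an official OpenSeq2Seq script): score a detokenized
# hypothesis file against a detokenized WMT'14 En-De reference file with
# SacreBLEU. A hypothesis file would typically come from an inference run
# along the lines of (flags and paths illustrative, see the docs):
#   python run.py --config_file=example_configs/text2text/en-de/transformer-big.py \
#                 --mode=infer --logdir=<downloaded_checkpoint_dir> \
#                 --infer_output_file=wmt14_en_de.hyp
import sacrebleu  # pip install sacrebleu

with open("wmt14_en_de.hyp", encoding="utf-8") as f:   # placeholder path
    hypotheses = [line.rstrip("\n") for line in f]
with open("wmt14_en_de.ref", encoding="utf-8") as f:   # placeholder path
    references = [line.rstrip("\n") for line in f]

# corpus_bleu takes a list of hypothesis strings and a list of reference
# streams (one inner list per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"SacreBLEU: {bleu.score:.1f}")  # docs quote ~26.4 (base) / ~27.5 (big)
```

Presumably the unpacked checkpoint directory is what gets passed as the logdir so the model can be restored before inference, but check the linked page for the authoritative steps.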

sundeepteki commented 5 years ago

Thanks for sharing!