Closed superhg2012 closed 4 years ago
Hi @superhg2012,
For Tacotron 2, there's a PyTorch implementation (https://github.com/NVIDIA/tacotron2) which you might be able to export to ONNX and then try to convert to TensorRT, but I haven't tried it myself.
For Tacotron implemented in TensorFlow (e.g. https://github.com/keithito/tacotron), I would think you can use TF-TRT, where TRT-compatible nodes are sped up and incompatible nodes fall back to the TF implementation. I did find one post from a user who tried this and failed (https://devtalk.nvidia.com/default/topic/1062601/tensorrt/fail-to-speed-up-model-by-tensorrt-/), but that might be different now.
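For reference, a minimal TF-TRT conversion sketch with the TF 2.x API might look like this. It assumes you've first exported the Tacotron graph as a SavedModel; the directory names are placeholders, and actually running the conversion requires a TensorRT-enabled TensorFlow build.

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def convert_to_tftrt(saved_model_dir, output_dir):
    """Convert a SavedModel with TF-TRT.

    TRT-compatible subgraphs are replaced by TensorRT engines;
    incompatible ops fall back to the normal TF implementation.
    """
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        precision_mode=trt.TrtPrecisionMode.FP16,  # or FP32 if FP16 hurts quality
    )
    converter.convert()
    converter.save(output_dir)

# Usage (paths are hypothetical):
# convert_to_tftrt("tacotron_saved_model", "tacotron_trt_saved_model")
```

Since unsupported ops stay in TF rather than failing the whole conversion, this is usually the lowest-effort thing to try first.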
There are some Bi-LSTM modules in the Tacotron model (the TTS network) and some convolutions that are wrapped; does TensorRT support these?