w11wo closed this pull request 10 months ago
Good PR, very helpful.
Resolved the merge conflict, @p0p4k.
Any metric for CPU vs GPU vs ONNX infer time? Thanks a lot!
I've conducted tests and measured the different inference speeds between CPU, GPU, ONNX CPU, and ONNX GPU. The results are as follows:

Model: PyTorch CPU: 2.3481 seconds
Model: PyTorch GPU: 0.1600 seconds
Model: ONNX CPU: 2.1909 seconds
Model: ONNX GPU: 0.1367 seconds
The ONNX model is just slightly faster than its PyTorch counterpart. You can find the Colab notebook here.
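For reference, a minimal sketch of how such a timing comparison can be set up. This is not the notebook's code; `benchmark` and `dummy_infer` are hypothetical names, and the dummy callable stands in for an actual model call (e.g. a PyTorch forward pass or an onnxruntime `InferenceSession.run(...)`):

```python
import time
import statistics

def benchmark(infer, n_warmup=3, n_runs=10):
    """Time a zero-argument inference callable; return median seconds per run."""
    for _ in range(n_warmup):
        infer()  # warm-up runs excluded from timing (caches, lazy init, GPU kernels)
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical stand-in for a model call; replace with the real inference step.
def dummy_infer():
    sum(i * i for i in range(10_000))

print(f"median latency: {benchmark(dummy_infer):.4f} s")
```

Running the same harness once per backend (PyTorch CPU/GPU, ONNX CPU/GPU) gives directly comparable numbers; the warm-up runs matter especially on GPU, where the first call pays one-time kernel compilation and transfer costs.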
@p0p4k
Tested ONNX Export and Inference with LJSpeech-no-sdp. Commands are as follows: