Open ddinhtuann opened 2 years ago
I'm really impressed by the inference demo on HuggingFace. I ran the Motion Deblur task on my own CPU (i7) and got an inference runtime of ~50s. Can I convert the pth model to ONNX to get a better runtime?
We have not tried the ONNX model. Please share your results with us here if you run it.