Hi everyone,
I'm currently learning about Dorado and I see that it uses libtorch for inference. Have you tried TensorRT? From its documentation, it seems TensorRT could deliver a significant speedup.
Hi @keithyin, in Dorado we have custom CUDA layers for our specific needs, which are more heavily optimised than what can be achieved with frameworks such as TensorRT.