Open wingvortex opened 9 months ago
Hi,
I tested an ONNX model converted (by export_onnx.py) from a torch model (trained with the rtdetr_r50vd_6x config). The ONNX model is always much slower than the torch model, regardless of whether I run on CPU or GPU, or use batch processing. Why does this happen?
It may be related to the ONNX inference engine (ONNX Runtime), which is not as well optimized as torch for some ops.