Sorry for the late reply. I have never run a detector on the CPU. I think a possible solution is to convert the Co-DINO weights to the DINO format and export that DINO model to OpenVINO via mmdeploy.
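For anyone trying this route, here is a rough, untested sketch of the mmdeploy export step, assuming the Co-DINO weights have already been remapped into a DINO-style checkpoint (the paths and config names below are placeholders from a typical mmdetection/mmdeploy setup, and the remapped checkpoint name is hypothetical):

```python
# Untested sketch: export a (converted) DINO checkpoint to ONNX with mmdeploy.
# All paths are placeholders; "dino_converted_from_codino.pth" is a hypothetical
# checkpoint produced by remapping the Co-DINO weights into the DINO layout.
from mmdeploy.apis import torch2onnx

torch2onnx(
    img="demo/demo.jpg",  # sample image used to trace the model
    work_dir="work_dirs/dino_onnx",
    save_file="end2end.onnx",
    deploy_cfg="configs/mmdet/detection/detection_onnxruntime_dynamic.py",
    model_cfg="configs/dino/dino-5scale_swin-l_8xb2-12e_coco.py",
    model_checkpoint="dino_converted_from_codino.pth",
    device="cpu",
)
```

The resulting end2end.onnx could then be handed to OpenVINO's model conversion, as discussed further down the thread.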
Ok, sounds like a good direction to try. Thank you for your response!
Interesting, the fastest I could get is around 1 FPS for co_dino_5scale_swin_l in OpenVINO. Anyway, it is still about 50% faster than ONNX.
I found that both DINO and Co-DINO can be converted to OpenVINO, but we need to add a custom extension that maps mmdeploy's grid_sampler op to the existing GridSample operation from the OpenVINO opset.
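For reference, a rough sketch of what that custom extension could look like with the OpenVINO Python frontend (untested; the attribute names and the integer-to-string enum tables are assumptions based on how mmdeploy usually exports grid_sampler, so verify them against your ONNX file):

```python
# Untested sketch: map mmdeploy's custom grid_sampler ONNX op onto OpenVINO's
# native GridSample during conversion. Assumes openvino >= 2023.1.
import openvino as ov
from openvino.frontend import ConversionExtension, NodeContext
from openvino.runtime import opset9 as ops  # GridSample was added in opset9

# mmdeploy encodes the mode attributes as integers; these tables are an
# assumption based on a typical mmdeploy export and should be double-checked.
INTERP = {0: "bilinear", 1: "nearest", 2: "bicubic"}
PADDING = {0: "zeros", 1: "border", 2: "reflection"}

def convert_grid_sampler(node: NodeContext):
    data = node.get_input(0)
    grid = node.get_input(1)
    attrs = {
        "align_corners": bool(node.get_attribute("align_corners")),
        "mode": INTERP[node.get_attribute("interpolation_mode")],
        "padding_mode": PADDING[node.get_attribute("padding_mode")],
    }
    return ops.grid_sample(data, grid, attrs).outputs()

# Depending on the OpenVINO version, the custom ONNX domain may need to be
# part of the op name (e.g. "mmdeploy::grid_sampler").
model = ov.convert_model(
    "end2end.onnx",
    extension=ConversionExtension("grid_sampler", convert_grid_sampler),
)
ov.save_model(model, "end2end.xml")
```

A ConversionExtension is used here rather than a plain OpExtension because mmdeploy stores the modes as integer attributes while GridSample expects strings, so the values need translating, not just renaming.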
Thanks!
Hello!
Thanks for your awesome work. I am using this for a custom use case, and accuracy-wise it is very satisfactory. But in terms of speed it is very slow: on CPU, Co-DINO with a Swin-L backbone takes around 2 s per image. Using the mmdet package, I can convert it to ONNX and TensorRT, but the ONNX model also didn't result in a speed improvement. So I want to ask for the author's recommended approach for running Co-DETR faster on CPUs. Is it possible to export it to OpenVINO? The current mmdet version doesn't support export to OpenVINO. If there are any suggestions, I would highly appreciate them. Thank you!