Sense-X / Co-DETR

[ICCV 2023] DETRs with Collaborative Hybrid Assignments Training
MIT License

Running on CPU #130

Closed bibekyess closed 3 months ago

bibekyess commented 4 months ago

Hello!

Thanks for your awesome work. I am using this for my custom use case and, accuracy-wise, it is very satisfactory. In terms of speed, however, it is very slow: on CPU, Co-DINO with the SwinL backbone takes around 2 s per image. Using the mmdet package, I can convert it to ONNX and TensorRT, but the ONNX model didn't give any speed improvement either. So I want to ask for the authors' recommended approach for running Co-DETR faster on CPUs. Is it possible to export it to OpenVINO? The current mmdet version doesn't support export to OpenVINO. If there are any suggestions, I would highly appreciate them.

Thank you!

TempleX98 commented 4 months ago

Sorry for the late reply. I have never run a detector on the CPU. I think a possible solution is to convert the Co-DINO weights to the DINO format and export this DINO model to openvino via mmdeploy.
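The suggested conversion amounts to remapping checkpoint keys: keep the backbone/neck/transformer weights shared by both models, drop Co-DETR's auxiliary branches, and rename the main head prefix to the one DINO expects. A minimal sketch of that idea is below; the specific prefix names (`query_head.` vs `bbox_head.`, and the auxiliary `rpn_head.`/`roi_head.` branches) are assumptions, so verify them against your actual checkpoints before use.

```python
def co_dino_to_dino(state_dict):
    """Remap a Co-DINO state dict to the key layout a plain DINO model expects.

    Assumed naming (check against your checkpoints):
      - Co-DETR's main detection head lives under "query_head."
      - DINO's head lives under "bbox_head."
      - "rpn_head.", "roi_head.", "bbox_head." hold Co-DETR's auxiliary heads,
        which have no counterpart in DINO and are dropped.
    """
    aux_prefixes = ("rpn_head.", "roi_head.", "bbox_head.")
    converted = {}
    for key, value in state_dict.items():
        if key.startswith(aux_prefixes):
            continue  # drop collaborative auxiliary heads absent from DINO
        if key.startswith("query_head."):
            key = "bbox_head." + key[len("query_head."):]
        converted[key] = value
    return converted

# Hypothetical usage with a real checkpoint (paths are placeholders):
#   import torch
#   ckpt = torch.load("co_dino_5scale_swin_l.pth", map_location="cpu")
#   ckpt["state_dict"] = co_dino_to_dino(ckpt["state_dict"])
#   torch.save(ckpt, "dino_swin_l_converted.pth")
```

The converted checkpoint can then be loaded into a standard DINO config and exported through mmdeploy like any other mmdet model.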

bibekyess commented 4 months ago

Ok, sounds like a good direction to try. Thank you for your response!

bibekyess commented 3 months ago

Interesting, the fastest I could get is around 1 FPS for co_dino_5scale_swin_l in OpenVINO. Anyway, it is about twice as fast as ONNX. I found that both DINO and Co-DINO can be converted to OpenVINO, but we need to add a custom extension that maps mmdeploy's grid_sampler op to an existing operation from the OpenVINO opset: GridSample. Thanks!
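For anyone attempting the same mapping, a sketch of the idea follows. mmdeploy exports grid sampling as a custom ONNX op with integer attributes (following `torch.nn.functional.grid_sample` enum conventions), while OpenVINO's native GridSample takes string attributes; the integer codes, attribute names, and the `ConversionExtension`/`opset9.grid_sample` usage shown in the comments are assumptions to verify against your mmdeploy and OpenVINO versions.

```python
def map_grid_sampler_attrs(interpolation_mode=0, padding_mode=0, align_corners=0):
    """Translate mmdeploy grid_sampler integer enums (assumed to follow the
    torch grid_sample convention) into OpenVINO GridSample string attributes."""
    modes = {0: "bilinear", 1: "nearest", 2: "bicubic"}
    paddings = {0: "zeros", 1: "border", 2: "reflection"}
    return {
        "mode": modes[interpolation_mode],
        "padding_mode": paddings[padding_mode],
        "align_corners": bool(align_corners),
    }

# Hedged sketch of wiring this into model conversion via OpenVINO's frontend
# extension mechanism (requires an OpenVINO release that ships GridSample,
# i.e. opset9 / 2022.3+); API details are assumptions:
#
#   from openvino.runtime import Core
#   from openvino.frontend import ConversionExtension
#   import openvino.runtime.opset9 as ops
#
#   def convert_grid_sampler(node):
#       attrs = map_grid_sampler_attrs(
#           node.get_attribute("interpolation_mode"),
#           node.get_attribute("padding_mode"),
#           node.get_attribute("align_corners"))
#       return ops.grid_sample(node.get_input(0), node.get_input(1), attrs).outputs()
#
#   core = Core()
#   core.add_extension(ConversionExtension("grid_sampler", convert_grid_sampler))
#   model = core.read_model("co_dino_5scale_swin_l.onnx")  # placeholder path
```

With the extension registered, `read_model` can resolve the custom op during import instead of failing on the unknown `grid_sampler` node.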