IDEA-Research / Lite-DETR

[CVPR 2023] Official implementation of the paper "Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR"
Apache License 2.0

Could you provide FPS? #1

Closed TsingWei closed 1 year ago

TsingWei commented 1 year ago

Just wondering why there is no FPS comparison with other methods in the paper.

FengLi-ust commented 1 year ago

We did not optimize the inference speed. In particular, the KDA attention is currently implemented in plain PyTorch and would need a CUDA implementation to run fast at inference time.
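For anyone who wants a rough number themselves, a minimal timing sketch along these lines should give a usable FPS estimate (batch size 1, single GPU; `model` and `images` are placeholders for any detector module and a list of preprocessed input tensors, not names from this repo):

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, images, warmup=10, iters=100):
    """Rough FPS measurement for a detector on a single GPU.

    Uses torch.cuda.synchronize() around the timed loop so that
    asynchronous CUDA kernels are included in the measurement.
    """
    model.eval()
    device = next(model.parameters()).device

    # Warm-up to exclude CUDA context creation / cudnn autotuning overhead.
    for i in range(warmup):
        model(images[i % len(images)].to(device))
    torch.cuda.synchronize()

    start = time.perf_counter()
    for i in range(iters):
        model(images[i % len(images)].to(device))
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    return iters / elapsed  # images per second
```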

TsingWei commented 1 year ago

Is KDA a plug-and-play alternative to Deformable Attention?

FengLi-ust commented 1 year ago

Yes. You can safely swap in the original deformable attention; the performance will not change by a large margin.
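A minimal sketch of what such a swap could look like, assuming the KDA block and the deformable attention block expose compatible forward signatures (the class names and constructor details below are illustrative, not the identifiers used in this repository):

```python
import torch.nn as nn

def swap_attention(module: nn.Module, kda_cls, make_deform_attn):
    """Recursively replace KDA attention blocks with deformable attention.

    `kda_cls` is the KDA attention class and `make_deform_attn` builds a
    replacement with matching embed dim / heads / levels / points; both are
    caller-supplied, since the exact classes in the repo may differ from
    this sketch.
    """
    for name, child in module.named_children():
        if isinstance(child, kda_cls):
            # Replace the child in place on its parent module.
            setattr(module, name, make_deform_attn(child))
        else:
            swap_attention(child, kda_cls, make_deform_attn)
```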