IDEA-Research / Lite-DETR

[CVPR 2023] Official implementation of the paper "Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR"
Apache License 2.0

deformable attention block limited by topk=num_queries #14

Open SyedShaQutub opened 4 months ago

SyedShaQutub commented 4 months ago

In the deformable transformer block code in /Lite-DETR/models/dino/deformable_transformer.py, I ran into an issue while running batch inference on the COCO eval data:

topk_proposals = torch.topk(enc_outputs_class_unselected.max(-1)[0], topk, dim=1)[1] # bs, nq

In my experiments, I added perturbations to the model to disrupt its predictions, and a few times the encoder produced fewer output proposals than num_queries. In that case torch.topk fails, because k (here topk = num_queries) exceeds the size of the tensor along dim=1. I would be happy to create a pull request for this, as I have a simple fix. If not, I can paste it here and you could take it further. Let me know. Thanks.
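A minimal sketch of the kind of fix being described, assuming the shapes used in the repo's snippet (the helper name select_topk_proposals is hypothetical, not from the codebase): clamp k to the number of available encoder tokens before calling torch.topk, so the selection cannot ask for more proposals than exist.

```python
import torch

def select_topk_proposals(enc_outputs_class_unselected: torch.Tensor,
                          topk: int) -> torch.Tensor:
    # Hypothetical helper illustrating the clamping fix.
    # enc_outputs_class_unselected: (bs, num_tokens, num_classes)
    scores = enc_outputs_class_unselected.max(-1)[0]  # (bs, num_tokens)
    # Clamp k so torch.topk never receives k > num_tokens,
    # which would raise a RuntimeError.
    k = min(topk, scores.shape[1])
    topk_proposals = torch.topk(scores, k, dim=1)[1]  # (bs, k)
    return topk_proposals
```

With this guard, a batch whose encoder emits only, say, 500 tokens while num_queries is 900 simply yields 500 proposals instead of crashing; downstream code would then need to tolerate a variable number of queries (e.g. by padding).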

FengLi-ust commented 1 month ago

Hi, thanks for this information. You are welcome to open a PR!