While running batch inference on COCO eval data, I hit an issue in the deformable transformer block in /Lite-DETR/models/dino/deformable_transformer.py.
In my experiments I added perturbations to the model to disrupt its predictions, and a few times the encoder outputs yielded fewer queries than num_queries. I have a simple fix for this and would be happy to open a pull request; otherwise I can paste it here and you could take it further. Let me know.
Thanks
The failure comes from this top-k selection over the encoder proposals, where topk can end up larger than the number of available proposals:

topk_proposals = torch.topk(enc_outputs_class_unselected.max(-1)[0], topk, dim=1)[1]  # bs, nq
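For illustration, here is a minimal sketch of one possible guard (not necessarily the fix the reporter has in mind): clamp the requested k to the number of proposals actually present, since torch.topk raises an error when k exceeds the size of the selected dimension. The function name safe_topk_proposals and the tensor shapes are assumptions for the example.

```python
import torch

def safe_topk_proposals(enc_scores: torch.Tensor, topk: int) -> torch.Tensor:
    # enc_scores: (bs, nq_enc, num_classes) class logits from the encoder.
    # Hypothetical guard: clamp topk so torch.topk does not raise when the
    # encoder yields fewer proposals than num_queries.
    scores = enc_scores.max(-1)[0]          # (bs, nq_enc) best class score per proposal
    k = min(topk, scores.shape[1])          # never ask for more than we have
    return torch.topk(scores, k, dim=1)[1]  # (bs, k) selected proposal indices

# Example: only 50 encoder proposals but num_queries = 300.
scores = torch.randn(2, 50, 91)
idx = safe_topk_proposals(scores, 300)
print(idx.shape)  # torch.Size([2, 50])
```

Downstream code that assumes exactly num_queries indices (e.g. fixed-size query embeddings) would also need to handle the smaller k, so clamping alone may not be the whole fix.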