IDEA-Research / DN-DETR

[CVPR 2022 Oral] Official implementation of DN-DETR
Apache License 2.0

adding DINO's contrastive sampling #45

Open · Vallum opened 1 year ago

Vallum commented 1 year ago

I implemented DINO's contrastive sampling idea on top of the DN-DETR repo.

It differs slightly from DINO's official implementation. https://github.com/IDEA-Research/DN-DETR/issues/43#issuecomment-1275767564

I built Mixed Query Selection (MQS) and Look Forward Twice (LFT) on top of Deformable DETR's two-stage option. In addition, for negative sampling, the official DINO moved the negative sampling out of dn_components.py, which made it hard for me to use both DN-DETR and DINO simultaneously. So I implemented the double sampling inside dn_components.py.
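
Roughly, the double sampling inside dn_components.py works like this: each GT box gets a lightly-noised positive copy and a more heavily-noised negative copy, following DINO's CDN. A minimal sketch with illustrative names and a made-up default noise scale, not the exact code from this PR:

```python
import torch

def contrastive_dn_boxes(gt_boxes: torch.Tensor, box_noise_scale: float = 0.4):
    """Make one positive and one negative noised copy of each GT box.

    gt_boxes: (num_gt, 4) in normalized (cx, cy, w, h) format.
    Positive copies get noise scaled in [0, 1), negatives in [1, 2),
    so negatives sit farther from the GT and are supervised as background.
    """
    diff = torch.zeros_like(gt_boxes)
    diff[:, :2] = gt_boxes[:, 2:] / 2   # max center shift: half of w/h
    diff[:, 2:] = gt_boxes[:, 2:]       # max size change: full w/h

    sign = torch.randint_like(gt_boxes, 0, 2) * 2.0 - 1.0  # random +/-1
    pos_ratio = torch.rand_like(gt_boxes)        # [0, 1) -> positive queries
    neg_ratio = torch.rand_like(gt_boxes) + 1.0  # [1, 2) -> negative queries

    pos = gt_boxes + sign * pos_ratio * diff * box_noise_scale
    neg = gt_boxes + sign * neg_ratio * diff * box_noise_scale
    return pos.clamp(min=0.0, max=1.0), neg.clamp(min=0.0, max=1.0)
```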

In conclusion, with 36 epochs and a ResNet-50 backbone, it reaches 50.8~51.0 mAP. (It had not stabilized with the final LR of 10e-5.)

IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.510
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.689
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.556
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.350
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.542
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.650
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.383
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.657
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.731
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.576
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.774
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885
Vallum commented 1 year ago

@FengLi-ust Oh, that was my oversight; I took too narrow a view and missed the necessary details! I will fix the two details you pointed out right away.

FengLi-ust commented 1 year ago

Thanks for your efforts. I am busy preparing for CVPR these days. I noticed that you also made modifications to DN-Deformable-DETR; how about leaving the original DN-Deformable-DETR unchanged and only adding your reimplemented DINO on top of DN-Deformable-DETR? I can then merge these two models into our model zoo.

Vallum commented 1 year ago

@FengLi-ust That's not a problem. The only reason I modified your DN-Deformable-DETR is that I had already merged my code into it at the beginning of this pull request. When I added a separate directory, I reverted the original directory, but the commit history remained, which I cannot remove. So, to keep your original DN-Deformable-DETR unchanged, I guess it is better to start a new branch from commit 1623ec9c272e5425261999e4fc53c0347a98a0f8.

Oh, by the way, I found another problem. In this new directory, because I added new parameters for DINO's CDN, MQS, and LFT, there could be a compatibility issue with dn_dab_deformable_detr. Let me think about how to make main.py and engine.py work for both dn_dab_deformable_detr and dn_dab_dino_deformable_detr; see the sketch below for one option.
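
For example, one route (with hypothetical flag names, not necessarily the final ones) is to make all DINO-only options default to off, so existing dn_dab_deformable_detr commands keep their old behavior:

```python
import argparse

def add_dino_args(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
    # DINO-only options: all default to False, so main.py/engine.py behave
    # exactly as before when a dn_dab_deformable_detr config is used.
    parser.add_argument('--use_cdn', action='store_true',
                        help='contrastive denoising (adds negative DN queries)')
    parser.add_argument('--use_mqs', action='store_true',
                        help='mixed query selection')
    parser.add_argument('--use_lft', action='store_true',
                        help='look-forward-twice box refinement')
    return parser
```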

FengLi-ust commented 1 year ago

@Vallum Hey, thanks for your efforts. I have added your DINO implementation to the News section of the README. Could you also provide the running command and checkpoint so that others can reproduce your results?

Vallum commented 1 year ago

@FengLi-ust Thank you for the update, and sorry about the late response. I have actually been working on what you mentioned for several days now. The thing is that, during the code-merging steps, I verified mainly the 12-epoch setting after I froze the model code. So after submitting the pull request with the new directory, I did not yet have 36-epoch checkpoints for DN-DETR-DINO trained from this new dn_dab_dino_deformable_detr directory. (I only tested it with the 12-epoch setting, since the change only moved the directory location.) Writing the extra parameters into the README.md is no trouble, but unfortunately my fresh run of the 36-epoch setting has not finished yet. (It turned out I made a few mistakes with the lr_drop setting when producing 36-epoch checkpoints from a 12-epoch checkpoint; a sketch of the fix is below.) I will let you know about the running commands and checkpoints as soon as possible. Sorry again about the slow progress.
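
For reference, the lr_drop pitfall was that extending a 12-epoch run (LR drop at epoch 11) to 36 epochs requires rebuilding the scheduler for the new milestone (epoch 33) and undoing the premature drop. A rough sketch with torch.optim; the paths, epochs, and LR values are illustrative, not my exact commands:

```python
import torch
from torch import nn

model = nn.Linear(10, 10)  # stand-in for the detector

ckpt = torch.load('checkpoint0011.pth', map_location='cpu')  # from the 12-epoch run
model.load_state_dict(ckpt['model'])

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
optimizer.load_state_dict(ckpt['optimizer'])
for g in optimizer.param_groups:
    g['lr'] = 1e-4  # undo the LR drop that already happened at epoch 11

# Build a fresh scheduler with the 36-epoch milestone instead of restoring
# the old one (whose step_size is still 11); keep only the epoch counter.
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=33)
lr_scheduler.last_epoch = ckpt['epoch']
```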

FengLi-ust commented 1 year ago

Got it. Can you convert your trained checkpoint weights with 51.0 AP so that the new model can load them? Since the main framework remains the same, you probably just need to rename a few weights for the new model.
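
Something along these lines should be enough; the prefix pair below is a made-up example, since the real renames depend on how the new model's modules are named:

```python
import torch

# Hypothetical rename map: old key prefix -> new key prefix. The actual
# pairs depend on the module names in dn_dab_dino_deformable_detr.
PREFIX_MAP = {
    'refpoint_embed.': 'tgt_refpoint_embed.',
}

def convert_checkpoint(in_path: str, out_path: str) -> None:
    ckpt = torch.load(in_path, map_location='cpu')
    new_state = {}
    for key, weight in ckpt['model'].items():
        for old, new in PREFIX_MAP.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        new_state[key] = weight
    ckpt['model'] = new_state
    torch.save(ckpt, out_path)

# Illustrative file names only.
convert_checkpoint('dn_detr_dino_r50_36ep_51ap.pth',
                   'dn_dab_dino_deformable_detr_r50_36ep.pth')
```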

Vallum commented 1 year ago

@FengLi-ust I updated the README.md at https://github.com/Vallum/DN-DETR-DINO/blob/make-new-model/README.md Could you check the differences in the "Model Zoo" and "Train your own models" sections? I am not sure this is the right way to finish the document layout...

By the way, I finished training the 36-epoch checkpoints, but they reached only 50.6-50.7... maybe I missed a small detail and need to check again :( Anyway, I uploaded checkpoints, including 12epochs_48.8, 36epochs_50.6-7, and additionally 36epochs_50.8-51.0 (the aforementioned model from before the merge into the new directory), to Google Drive: https://drive.google.com/drive/folders/1AO4aHGghFUpCwQFPSMydU-Hp2tsgFTKi?usp=sharing

FengLi-ust commented 1 year ago

It is OK. You can also post your 50.8-51.0 AP model here.
