amirbar / DETReg

Official implementation of the CVPR 2022 paper "DETReg: Unsupervised Pretraining with Region Priors for Object Detection".
https://amirbar.net/detreg
Apache License 2.0

Fine-tuning based on the DETR architecture code, but the verification indicators are all 0 #69

Closed Flyooofly closed 1 year ago

Flyooofly commented 1 year ago

Thanks for your work. I noticed that you open-sourced the DETR-architecture variant of DETReg, so I tried to fine-tune the ImageNet-pretrained model you provided on my custom dataset. However, all the evaluation metrics are still 0 after more than fifty epochs of fine-tuning. I followed the tips in the related DETR issues (https://github.com/facebookresearch/detr/issues?page=1&q=zero) and modified `num_classes`. Many people mention that DETR needs a large amount of training data, or fine-tuning. I am already fine-tuning, and my dataset has only about one thousand images, but the results are still very poor. Why might that be? The Deformable-DETR architecture works normally for me.
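One common pitfall when changing `num_classes` for fine-tuning is that the pretrained classification head no longer matches the new class count, so its weights must be dropped before loading the checkpoint. The sketch below is a hypothetical illustration, not the repo's actual loading code; it assumes the usual DETR naming convention where head weights contain `class_embed`, and represents tensors as plain lists so it runs without PyTorch.

```python
# Hypothetical sketch: drop classification-head weights from a DETR-style
# state dict before fine-tuning with a different num_classes. The key name
# "class_embed" follows the common DETR convention; check your checkpoint.

def strip_class_head(state_dict):
    """Return a copy of state_dict without classification-head entries."""
    return {k: v for k, v in state_dict.items() if "class_embed" not in k}

# Toy checkpoint with placeholder values standing in for tensors:
checkpoint = {
    "transformer.encoder.layer0.weight": [0.1, 0.2],
    "class_embed.weight": [0.3],  # shaped for the pretraining class count
    "class_embed.bias": [0.0],
    "bbox_embed.weight": [0.5],
}
filtered = strip_class_head(checkpoint)
# In real code, load with strict=False so the new head stays randomly
# initialized: model.load_state_dict(filtered, strict=False)
```

If the mismatched head weights are loaded anyway (or the load fails silently), the model can predict "no object" for every query, which shows up exactly as all-zero evaluation metrics.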

amirbar commented 1 year ago

Apologies for the delay in response. What is the size of your dataset? For example, ImageNet is 1M images, so 50 epochs might be enough to train over it. If your dataset is significantly smaller, you will need to increase the number of epochs proportionally. Also, make sure you don't drop the learning rate early (this is mostly relevant for Deformable-DETR).
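The "scale epochs proportionally" advice can be made concrete: keep the total number of optimizer steps roughly comparable by multiplying both the epoch count and the LR-drop epoch by the ratio of dataset sizes. The numbers below are illustrative assumptions (a COCO-scale reference of ~118k images with the standard 50-epoch / drop-at-40 schedule), not values from the paper, and in practice you would also watch for overfitting rather than apply the factor blindly.

```python
# Rough rule-of-thumb sketch (an assumption, not an official recipe):
# scale the epoch count and the LR-drop epoch by reference_size / my_size
# so that a much smaller dataset still sees a similar number of updates.

def scaled_schedule(ref_images, ref_epochs, ref_lr_drop, my_images):
    factor = ref_images / my_images
    epochs = round(ref_epochs * factor)
    lr_drop = round(ref_lr_drop * factor)  # don't drop the LR early
    return epochs, lr_drop

# E.g. ~118k reference images, 50 epochs, LR drop at epoch 40,
# versus a custom dataset of ~1,000 images:
epochs, lr_drop = scaled_schedule(118_000, 50, 40, 1_000)
```

The point is not the exact numbers but the ratio: with ~1k images, 50 epochs is roughly 1% of the gradient updates the reference schedule assumes, which is consistent with metrics staying at 0.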