amirbar / DETReg

Official implementation of the CVPR 2022 paper "DETReg: Unsupervised Pretraining with Region Priors for Object Detection".
https://amirbar.net/detreg
Apache License 2.0

Results between IN100 and IN1k setting #36

Closed: 4-0-4-notfound closed this issue 2 years ago

4-0-4-notfound commented 2 years ago

In the arXiv v1 version, the COCO fine-tuning result is 45.5 with IN100 pretraining. But in the arXiv v2 version, the COCO fine-tuning result is still 45.5, while the pretraining dataset is IN1k. So, if I understand correctly, the fine-tuning result does not improve with more pretraining data?

amirbar commented 2 years ago

With IN100 the result is only slightly worse than with IN-1k (45.4 AP vs. 45.5 AP). We report both results in the second version (see Table 1 and Table 4).
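
For context, IN100 here denotes a 100-class subset of ImageNet-1k. A minimal sketch of how such a subset might be assembled from a standard ImageNet directory layout (one folder per WordNet ID); the class list below is a hypothetical placeholder, not the list used in the paper:

```python
import os

# Hypothetical WordNet IDs defining an IN100 subset; the actual
# 100-class list used for DETReg pretraining may differ.
IN100_CLASSES = ["n01558993", "n01692333", "n01729322"]  # ... 97 more

def build_in100_subset(in1k_root: str, out_root: str) -> None:
    """Symlink the selected class folders from a standard
    ImageNet-1k layout (one folder per class) into a new root."""
    os.makedirs(out_root, exist_ok=True)
    for wnid in IN100_CLASSES:
        src = os.path.join(in1k_root, wnid)
        dst = os.path.join(out_root, wnid)
        if os.path.isdir(src) and not os.path.exists(dst):
            os.symlink(src, dst)

if __name__ == "__main__":
    build_in100_subset("data/imagenet/train", "data/imagenet100/train")
```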

4-0-4-notfound commented 2 years ago

Thx~ It's actually in Table 6.

How about the VOC and Airbus Ship results? Are those all pretrained on IN100?