liming-ai / AlignDet

Official code for ICCV 2023 Paper: AlignDet: Aligning Pre-training and Fine-tuning in Object Detection.
https://liming-ai.github.io/AlignDet/
Apache License 2.0

How to understand the experiments in Table 2? #21

Closed pILLOW-1 closed 12 months ago

pILLOW-1 commented 12 months ago

Great work! I have a small problem understanding the experiments in Table 2. My understanding is as follows:

  1. For the unaligned pre-training methods, you fine-tune them using only 1%, 5%, ..., 100% of the data chosen from COCO train2017. Is that correct?
  2. For the aligned pre-training methods, you first pre-train them (box-domain) on the whole COCO train2017, then fine-tune them using 1%, 5%, ..., 100% of the data from COCO train2017. Is that correct?
  3. In addition to the above two questions, I noticed this statement in Section 4.2: "We provide 5 different data folds for each low-data setting, and the final performance is the average of all results". How should I understand the number '5' and 'low-data' here? Is a single result in Table 2 the average of 5 results? I am confused about the logic here.

Thanks in advance for answering my questions!

liming-ai commented 12 months ago

Hi @pILLOW-1,

Thanks for your questions.

  1. Yes. The 1%, 5%, ..., and 100% denote that we fine-tune on only that fraction of the data (with accurate annotations) to validate effectiveness under low-data settings.
  2. Yes. The aligned and unaligned methods use exactly the same settings.
  3. The 'low data' here means the 1%, 5%, ..., 50% subsets of the whole COCO data. To avoid random error, for each of these subsets we run 5 experiments, each on a different randomly sampled fold, and report the averaged result. So every number in Table 2 (except the 100% data column) is the average of 5 runs.
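
For reference, the fold-sampling-and-averaging protocol described above could be sketched as follows. This is only an illustration of the idea, not the released AlignDet code; the function names, the use of `random.sample`, and the fixed seeds are my own assumptions:

```python
import random

def make_folds(image_ids, fraction, num_folds=5, base_seed=0):
    """Sample `num_folds` different random subsets of COCO image ids,
    each containing `fraction` of the full training set (e.g. 0.01 for 1%)."""
    n = max(1, int(len(image_ids) * fraction))
    folds = []
    for k in range(num_folds):
        rng = random.Random(base_seed + k)  # a different seed per fold
        folds.append(rng.sample(image_ids, n))
    return folds

def average_metric(metric_per_fold):
    """Average a metric (e.g. mAP) over the per-fold fine-tuning results."""
    return sum(metric_per_fold) / len(metric_per_fold)
```

Each fold would be used for one independent fine-tuning run, and the reported low-data number is `average_metric` over the 5 runs; the 100% column needs no folds because it uses the whole train set.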

Hope these explanations help.

pILLOW-1 commented 12 months ago

Got it. Thanks!