clovaai / wsolevaluation

Evaluating Weakly Supervised Object Localization Methods Right (CVPR 2020)

confusion regarding additional datasets #29

Closed: umairjavaid closed this issue 4 years ago

umairjavaid commented 4 years ago

I am trying to understand the paper; please clear up one confusion. Are the newly added datasets used for training once the optimal hyperparameters have been selected? For example: for each hyperparameter setting, a model is first trained on the CUB training set and then validated on the CUBV2 dataset, and the hyperparameters of the model with the highest localization accuracy are selected. Is a new model with the selected hyperparameters then trained on the CUB training set together with the added CUBV2 data, and finally tested on the CUB test set? Am I correct, or are you doing something different?

SanghyukChun commented 4 years ago

No. We only use held-out sets for finding the hyperparameters. They are not used after the hyperparameter search. This is the correct procedure proposed in our paper:

  1. Train a number of models with the train-weaksup split (e.g., original CUB train set).
  2. Find the best model from (1) with the train-fullsup split (e.g., the additional CUBV2 set).
  3. The best model chosen from (2) is tested with the test split (e.g., original CUB test set).

You can find the details in Section 5.2. "Comparison of WSOL methods".

> (..) we have randomly searched the optimal hyperparameters over the train-fullsup with 30 trials (..) The checkpoints that achieve the best localization performance on train-fullsup are used for evaluation.
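For concreteness, here is a minimal, runnable Python sketch of that protocol. Every name in it (`sample_hyperparameters`, `train_model`, `evaluate_localization`, and the search space) is a hypothetical stand-in for illustration, not the repository's actual API; the 30-trial random-search budget is the one quoted above.

```python
"""Minimal sketch of the three-step protocol above. All functions are
hypothetical stubs, not the repository's actual API."""
import random

NUM_TRIALS = 30  # random-search budget quoted from the paper


def sample_hyperparameters():
    # Hypothetical search space; the real one is method-specific.
    return {"lr": 10 ** random.uniform(-5, -2),
            "weight_decay": 10 ** random.uniform(-6, -3)}


def train_model(split, hparams):
    # Stand-in for training a WSOL model on the train-weaksup split.
    return {"hparams": hparams, "trained_on": split}


def evaluate_localization(model, split):
    # Stand-in for a localization metric (e.g., MaxBoxAcc or PxAP).
    return random.random()


def search_and_evaluate(train_weaksup, train_fullsup, test_split):
    best_score, best_model = float("-inf"), None
    for _ in range(NUM_TRIALS):
        # Step 1: train on train-weaksup (e.g., the original CUB train set).
        model = train_model(train_weaksup, sample_hyperparameters())
        # Step 2: select on train-fullsup (e.g., CUBV2). It is used only
        # for choosing the best checkpoint, never as extra training data.
        score = evaluate_localization(model, train_fullsup)
        if score > best_score:
            best_score, best_model = score, model
    # Step 3: report the selected model once on the test split
    # (e.g., the original CUB test set).
    return evaluate_localization(best_model, test_split)


if __name__ == "__main__":
    print(search_and_evaluate("train-weaksup", "train-fullsup", "test"))
```

The point the sketch makes explicit is that train-fullsup only enters the selection over trials; the winning model is never retrained on it.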

You can also find the details in our code. https://github.com/clovaai/wsolevaluation/blob/38d4aef1651caf49320abd36fde18540abaf7bfe/main.py#L367-L386