Alibaba-MIIL / ImageNet21K

Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
MIT License

Val Result #13

Closed wtt0213 closed 3 years ago

wtt0213 commented 3 years ago

Dear author: when I test the resnet50 model you provide on the ImageNet-21K-P val dataset, the semantic top-1 accuracy is just 69%, while the paper claims 75.6%. I do see the improvement on the downstream task, though. What could be the problem?

mrT23 commented 3 years ago

What matters in the end are the downstream results.

Anyway, on which variant did you test the semantic accuracy, fall-11 or winter-21? The article's semantic accuracy is reported on the fall-11 variant.
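For context, here is a minimal sketch of what a hierarchy-aware ("semantic") top-1 metric can look like, under a simplified rule (exact match, or same parent node in a class tree). The toy tree and the rule itself are illustrative assumptions; the paper's actual metric is defined over its own semantic tree and may differ:

```python
# Hedged sketch: a simplified "semantic" top-1 accuracy, where a prediction
# counts as correct if it matches the target class or shares its parent
# node in a class hierarchy. Illustrative only, not the paper's exact metric.

def semantic_top1(preds, targets, parent):
    """parent: dict mapping class id -> parent node id in the semantic tree."""
    hits = 0
    for p, t in zip(preds, targets):
        if p == t or parent.get(p) == parent.get(t):
            hits += 1
    return hits / len(targets)

# toy hierarchy: classes 0 and 1 under node "dog"; class 2 under "cat"
parent = {0: "dog", 1: "dog", 2: "cat"}
# 0 vs 1: same parent (hit); 1 vs 1: exact (hit); 2 vs 0: different parent (miss)
acc = semantic_top1([0, 1, 2], [1, 1, 0], parent)
```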

wtt0213 commented 3 years ago

winter-21, and I also train the model on it. Does the dataset variant matter? By training on the winter-21 dataset I got a model whose semantic top-1 accuracy is 72%, higher than the 69% of the repo-provided model, but with my own model the result on the downstream task (CIFAR-100) is worse than yours.

mrT23 commented 3 years ago

Read the appendix: winter-21 has different classes, so don't evaluate a model trained on fall-11 against the winter-21 validation set.
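A quick sanity check for this kind of mismatch is to compare the two label sets before evaluating. A sketch with tiny stand-in WordNet ids (real lists would come from the two variants' label files):

```python
# Hedged sketch: before evaluating a fall-11 checkpoint on winter-21 data,
# check how much the two label sets actually overlap. The ids below are
# illustrative stand-ins, not the real fall-11 / winter-21 class lists.

fall11_classes = {"n02084071", "n02121620", "n01503061"}
winter21_classes = {"n02084071", "n02121620", "n02512053"}

shared = fall11_classes & winter21_classes
only_fall = fall11_classes - winter21_classes
print(f"shared={len(shared)}, fall11-only={len(only_fall)}")
# When the index spaces diverge, logits trained against one class list
# are meaningless against the other.
```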

Also, the trainings in the article use KD (knowledge distillation), and yours probably doesn't; that's why your downstream results are worse. I am planning to add KD training code in the future.
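The KD loss referred to here is presumably the standard temperature-softened distillation objective. A framework-agnostic sketch of that common formulation (this is not the repo's unreleased training code):

```python
import math

# Hedged sketch of a standard knowledge-distillation (KD) loss: the student
# is trained to match the teacher's temperature-softened distribution.

def softmax(logits, T=1.0):
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    # the T**2 factor keeps gradient magnitudes comparable to the hard-label loss
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

loss = kd_loss([2.0, 0.5, 0.1], [2.2, 0.4, 0.2])
```

In practice this term is usually mixed with the ordinary cross-entropy on the ground-truth labels.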

wtt0213 commented 3 years ago

thanks~ Can you provide the file ./resources/winter21_imagenet21k_miil_tree.pth?

mrT23 commented 3 years ago

https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ImageNet_21K_P/resources/winter21_imagenet21k_miil_tree.pth

wtt0213 commented 3 years ago

thanks!!!!!! Have you run the semantic multi-label experiment on the winter dataset? What is the semantic top-1 accuracy with the R50 model?

mrT23 commented 3 years ago

I didn't test resnet50 on the winter-21 split. For the model I did test (TResNet-M), semantic accuracy was 1-2% higher on winter-21 than on fall-11.

My advice for you:

  • take resnet50 from fall-11 pretraining
  • initialize your winter-21 training from it and finetune for 5-10 epochs; this would be your baseline. Try to reproduce it, or come close to it, from regular 1k initialization

wtt0213 commented 3 years ago

> i didn't test resnet50 on winter-21 split. for the model i tested (TResNet-M), semantic accuracy was 1-2% higher on winter21, compared to fall11.
>
> my advice for you:
>
>   • take resnet50 from fall11 pretraining
>   • initialize your winter21 training from it, and finetune for 5-10 epochs; this would be your baseline. try to reproduce, or come close to it, from regular 1k initialization

Thank you very much, I will try it~ Since I don't have the fall-11 version of the dataset, maybe I will train from imagenet1k on winter-21 instead. Or can you provide the processed fall-11 dataset? I want to make sure my code is OK.
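The initialization step suggested above can be sketched as filtering the fall-11 checkpoint's state dict to drop the classifier head (whose shape depends on the class count) before loading the backbone. Key names here are illustrative; a real torchvision-style ResNet-50 checkpoint typically uses "fc.weight"/"fc.bias":

```python
# Hedged sketch: start winter-21 training from a fall-11 checkpoint, but
# drop the classifier head, whose shape depends on the number of classes.
# Tensor values are stubbed out as strings to keep the sketch stdlib-only.

def strip_head(state_dict, head_prefixes=("fc.",)):
    """Keep backbone weights, drop any tensor belonging to the classifier."""
    return {k: v for k, v in state_dict.items()
            if not k.startswith(head_prefixes)}

fall11_ckpt = {"conv1.weight": "...", "layer1.0.conv1.weight": "...",
               "fc.weight": "...", "fc.bias": "..."}
backbone = strip_head(fall11_ckpt)
# in torch, the filtered dict would then be loaded with
# model.load_state_dict(backbone, strict=False)
```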

mrT23 commented 3 years ago

To comply with the ImageNet-21K usage license, I can't share a direct link to the processed dataset, only the processing script. Sorry.

hongge831 commented 2 years ago

@wtt0213 Hi Wang, I just saw your dialog here. I wonder how you tested the fall-trained resnet-50 on winter and got a 69% result? I used the pre-trained model and got almost no accuracy. Could you share your resnet-50 model trained on the winter dataset? Thanks so much!