rahulvigneswaran / Class-Balanced-Distillation-for-Long-Tailed-Visual-Recognition.pytorch

Unofficial PyTorch implementation of the "Class-Balanced Distillation for Long-Tailed Visual Recognition" paper.

Cannot get the result of ImageNet_LT dataset as you published #3

Closed zuglerQ closed 2 years ago

zuglerQ commented 2 years ago

Hi, thanks for sharing the code. Following your instructions, I ran ImageNet_LT with ResNet-50, but the result was even lower than the base teacher model (I did not change the code). The parameters are seed = 1, alpha = 0.4, and beta = 100, the same as the paper's setting. The teacher models are normal teachers 10 and 20 and augmentation teachers 20 and 30. The validation result is shown below:

```
Phase: val
Evaluation_accuracy_micro_top1: 0.458  Averaged F-measure: 0.438
Many_shot_accuracy_top1: 0.578  Median_shot_accuracy_top1: 0.427  Low_shot_accuracy_top1: 0.225
Best validation accuracy is 0.46725 at epoch 85
```

I know that 55.6 is the reported result on the test set, but the validation result should normally be higher than the test result. Could you give any clue about this performance gap?
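For context, the many/median/low-shot accuracies in the log above are typically computed by bucketing test samples according to how many training images their class has (the >100 / 20-100 / <20 thresholds follow the common OLTR-style long-tailed evaluation protocol). A minimal illustrative sketch, not this repo's actual evaluation code:

```python
import numpy as np

def shot_accuracies(preds, labels, train_counts,
                    many_thresh=100, low_thresh=20):
    """Top-1 accuracy split by training-set class frequency.

    Thresholds follow the common long-tailed protocol: many-shot classes
    have >100 training images, low-shot <20, median-shot in between.
    `train_counts` maps class id -> number of training images.
    """
    preds = np.asarray(preds)
    labels = np.asarray(labels)
    correct = preds == labels
    # Training-set frequency of each test sample's ground-truth class.
    counts = np.asarray([train_counts[c] for c in labels])

    def acc(mask):
        return correct[mask].mean() if mask.any() else float("nan")

    return {
        "many":    acc(counts > many_thresh),
        "median":  acc((counts >= low_thresh) & (counts <= many_thresh)),
        "low":     acc(counts < low_thresh),
        "overall": correct.mean(),
    }
```

With this grouping, a student that under-performs its teachers mostly on the low-shot bucket points at the distillation/re-balancing stage rather than the backbone.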

Looking forward to your reply.

rahulvigneswaran commented 2 years ago

Hi @zuglerQ ,

Note that this is an unofficial implementation of the paper (I am not one of the authors), so I have not matched the paper's accuracies. Try running inference on the test set to find the actual test accuracy, which may not be 55.6, and let me know what you get.

Also, do you mean that the validation accuracy of your student model is lower than the validation accuracy of the base teacher models when using 4 teachers?

ahmetius commented 2 years ago

Hi,

We will release the official implementation in the next 1-2 months.

Cheers, Ahmet

zuglerQ commented 2 years ago

Hi, thank you all. @rahulvigneswaran Thanks for your reply; it was my mistake to regard this as the official implementation. I will evaluate on the test set, and once I have some results, I will let you know.

Thanks a lot for your time and effort.

rahulvigneswaran commented 2 years ago

@ahmetius Thanks a lot for the response.

@zuglerQ I am closing this issue for now. Feel free to reopen it if you have any other doubts.

ShirleyHe2020 commented 2 years ago

Hi, I tried to download the dataset from https://liuziwei7.github.io/projects/LongTail.html and changed data_root in main.py, but when I tried to run the .sh script I got this error:

```
FileNotFoundError: [Errno 2] No such file or directory: '/home/qian/anaconda3/envs/Class-Balanced-Distillation-for-Long-Tailed-Visual-Recognition.pytorch-main/libs/data/ImageNet_LT_open/train/n01744401/n01744401_3784.JPEG'
```

May I know where you downloaded the dataset from?
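Errors like this usually mean the images on disk don't match the paths listed in the long-tailed split files. A quick way to find every missing file before training is to walk the split list (assuming the OLTR-style `<relative/path.JPEG> <label>` line format; the exact split filename under your data root may differ):

```python
import os

def check_split_files(data_root, split_file):
    """Return the image paths listed in a split file that are missing on disk.

    Assumes each line of `split_file` is '<relative/path.JPEG> <label>',
    as in the OLTR-style ImageNet_LT split files.
    """
    missing = []
    with open(split_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rel_path = line.split()[0]
            full_path = os.path.join(data_root, rel_path)
            if not os.path.exists(full_path):
                missing.append(full_path)
    return missing
```

Running it against your `data_root` and the train split will tell you whether a single file is broken or the whole archive was extracted into the wrong directory layout.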

rahulvigneswaran commented 2 years ago

@ShirleyHe2020 Follow #4

ahmetius commented 2 years ago

Hello all,

We just released the official implementation here:

https://github.com/google-research/google-research/tree/master/class_balanced_distillation