bellymonster / Weighted-Soft-Label-Distillation


 The pretrained teacher and hyper-parameters on CIFAR-100 #2

Closed. VelsLiu closed this issue 3 years ago.

VelsLiu commented 3 years ago

Hi, thanks for the interesting work. I am trying to reproduce the results on CIFAR-100 but have failed, and I have some questions about the CIFAR-100 implementation. I would appreciate any suggestions. Specifically: is the training loss implementation on CIFAR-100 the same as on ImageNet, except that $\alpha$ is set to 2.25 and $T$ is set to 4? Are the pretrained, fixed teachers used in the experiments the same as those in CRD? Thank you in advance!

woshichase commented 3 years ago

Thanks for your attention. To keep consistency with the ImageNet experiments, the CIFAR-100 experiments are also run on the Overhaul repo (https://github.com/clovaai/overhaul-distillation). As described in our paper, the training settings are the same as CRD's. The loss implementation is the same as on ImageNet: set $\alpha$ to 2.25 and $T$ to 4, as described in Sec. 5. The pretrained teachers were re-trained on Overhaul using the same training settings as CRD. Note that the CIFAR-100 results are averaged over 5 runs.
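For concreteness, here is a minimal sketch of that loss shape in PyTorch: plain CE plus temperature-scaled KL with a global weight $\alpha$. The per-sample weighting from the paper is deliberately omitted, and the function name is only illustrative.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, alpha=2.25, T=4.0):
    # Standard cross-entropy against the hard labels.
    ce = F.cross_entropy(student_logits, targets)
    # KL between the temperature-softened student and teacher distributions;
    # the T**2 factor keeps gradient magnitudes comparable across temperatures.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)
    return ce + alpha * kl
```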

VelsLiu commented 3 years ago

Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!

summertaiyuan commented 3 years ago

> Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!

Have you reproduced the results?

VelsLiu commented 3 years ago

> Have you reproduced the results?

No, I have not. How about you? I did not find much performance difference with the original KD.

summertaiyuan commented 3 years ago

> No, I have not. How about you? I did not find much performance difference with the original KD.

Same here, no difference from the original KD. It feels like bullshit.

This kind of paper is heavily packaged. The essence is just to attenuate the teacher's KD term when the teacher is not very accurate. That idea is too simple; it's unlikely to hold up either experimentally or theoretically, so it's not worth our time to study.

VelsLiu commented 3 years ago

> Same here, no difference from the original KD. It feels like bullshit.
>
> This kind of paper is heavily packaged. The essence is just to attenuate the teacher's KD term when the teacher is not very accurate. [...]

Yeah, the main idea of the method is the weight. I was just curious how a CE+KL loss with an adaptive weight could achieve such good performance. The author said they retrained the teacher, so the results can probably only be reproduced with their pretrained teachers. Let's just move on.
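For what it's worth, the adaptive-weight shape being discussed can be sketched as below. The weight used here (the teacher's probability on the ground-truth class) is purely a hypothetical stand-in to illustrate attenuating the KD term where the teacher is inaccurate; it is not the paper's actual weighting function.

```python
import torch
import torch.nn.functional as F

def weighted_kd_loss(student_logits, teacher_logits, targets, alpha=2.25, T=4.0):
    ce = F.cross_entropy(student_logits, targets)
    # Per-sample KL between the softened student and teacher distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1) * (T ** 2)
    # Hypothetical per-sample weight: the teacher's confidence on the true
    # class, so the KD term fades out where the teacher is inaccurate.
    with torch.no_grad():
        weight = F.softmax(teacher_logits, dim=1).gather(
            1, targets.unsqueeze(1)
        ).squeeze(1)
    return ce + alpha * (weight * kl).mean()
```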

woshichase commented 3 years ago

@summertaiyuan @VelsLiu 1. We have already responded about how to reproduce the results on CIFAR-100. It is more convincing to validate the idea on a large-scale dataset such as ImageNet, so to keep consistency with the ImageNet repo, we also ran CIFAR-100 on the Overhaul repo and retrained all the models (including the teachers) using exactly the same settings as CRD. We are currently on a tight project schedule, but you can refer to the attached file, which contains the training logs downloaded from our training cluster: log_cifar.zip

2. I totally disagree with the point that 'this idea is too simple; it's not likely to work either experimentally or theoretically'. A method's effectiveness should not be tied to its complexity. Focal Loss [1] designs a concise, uncomplicated loss that effectively focuses on hard samples and prevents the easy samples from overwhelming training. The idea of our work came up two years ago during one of our projects. Simple though it may be, one can see its effectiveness by running our released code on ImageNet, which is a more convincing dataset for validation.

[1] Lin T.-Y., Goyal P., Girshick R., et al. Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017: 2980-2988.
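For reference, the focal loss cited in [1] is itself only a small modification of cross-entropy. A minimal sketch, without the class-balancing $\alpha_t$ term:

```python
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Log-probability of the ground-truth class for each sample.
    log_pt = F.log_softmax(logits, dim=1).gather(
        1, targets.unsqueeze(1)
    ).squeeze(1)
    pt = log_pt.exp()
    # Down-weight easy samples (pt near 1) by the factor (1 - pt)**gamma.
    return (-((1.0 - pt) ** gamma) * log_pt).mean()
```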

summertaiyuan commented 3 years ago

> 1. We have already responded about how to reproduce the results on CIFAR-100. [...] You can refer to the attached training logs downloaded from our training cluster.
>
> 2. I totally disagree with the point that 'this idea is too simple; it's not likely to work either experimentally or theoretically'. [...]

I sincerely apologize to you; I reproduced your results tonight.

I withdraw the apology.