HobbitLong / RepDistiller

[ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods
BSD 2-Clause "Simplified" License

Cannot achieve the reported accuracy in paper #3

Closed xuguodong03 closed 4 years ago

xuguodong03 commented 4 years ago

Thanks for your great work!

But I cannot achieve the reported accuracy from your paper. In the case where the teacher and student have similar architectures, my accuracy is ~1% lower than your results, and in the case where they have different architectures, the performance of KD and CRD is even worse than the model trained from scratch. The only change I've made is to wrap the model with nn.DataParallel and run on 8 GPUs; I enlarged the batch_size so that each GPU's batch size matches your original single-GPU setting. I ran all the experiments with the hyperparameters (the various loss weights) from this repo and only changed the architectures of the teacher and student. I wonder whether DataParallel hurts accuracy, or whether the hyperparameters have to be tuned carefully for each architecture.
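For context, the multi-GPU setup described above would look roughly like this sketch (the model and batch sizes are illustrative placeholders, not the repo's actual training code):

```python
import torch
import torch.nn as nn

# Illustrative model; the repo's actual teacher/student networks differ.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

n_gpus = torch.cuda.device_count() if torch.cuda.is_available() else 1
base_batch_size = 64                   # the repo's single-GPU default
batch_size = base_batch_size * n_gpus  # keep the per-GPU batch at 64

if n_gpus > 1:
    model = nn.DataParallel(model)     # splits each batch across the GPUs
if torch.cuda.is_available():
    model = model.cuda()
```

Note that scaling the global batch size this way changes the effective optimization dynamics unless the learning rate is adjusted accordingly, which is exactly the issue discussed below.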

Looking forward to your reply. :)

HobbitLong commented 4 years ago

Can you please first follow all the default settings on a single GPU to check the performance?

I did not run on 8 GPUs, so the default learning rate of 0.05 is probably most suitable for a batch size of 64. You may need to scale the learning rate linearly with the batch size; see this paper.
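The linear scaling rule mentioned above can be sketched as follows (the function name and the 8-GPU numbers are illustrative, based on the setup described in this thread):

```python
def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    """Linear scaling rule: learning rate grows in proportion to batch size."""
    return base_lr * batch / base_batch

# Repo default: lr 0.05 at batch size 64.
# With 8 GPUs at a per-GPU batch of 64 (global batch 512):
lr = scaled_lr(0.05, 64, 512)  # 0.05 * 8 = 0.4
```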

Even if you scale up the learning rate, the default setting might still be slightly better. I have compared two settings: (1) batch size 64, learning rate 0.05; (2) batch size 128, learning rate 0.1. In general, training from scratch with (1) gives better vanilla models, so I stuck with (1). That's also why, if you compare my vanilla models with those in other papers, you can see our baselines are stronger.

HobbitLong commented 4 years ago

@xuguodong03

I also found something I did not describe in the repo. It is described in the paper, though.

When training the following 3 models with a batch size of 64, you need a learning rate of 0.01 to achieve better accuracy (better than with 0.05): 'MobileNetV2', 'ShuffleV1', 'ShuffleV2'. I just added two lines here to hard-code it. Please pull again, and sorry for missing it :(
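The hard-coded adjustment presumably amounts to something like this sketch (the function and variable names are illustrative, not the repo's exact diff):

```python
# Lightweight student architectures train better from a smaller initial LR.
LOW_LR_MODELS = {'MobileNetV2', 'ShuffleV1', 'ShuffleV2'}

def initial_lr(model_name: str, default_lr: float = 0.05) -> float:
    """Return 0.01 for the three lightweight models, else the repo default."""
    return 0.01 if model_name in LOW_LR_MODELS else default_lr
```

The key point is that this override must apply both when training the vanilla model from scratch and when distilling, or the comparison between the two becomes unfair, as the next comment explains.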

I guess part of the reason for the underperformance when the student and teacher have different architectures is that different learning rates were used. For example, when using the above 3 models as students and distilling from a teacher model, the learning rate is correctly set to 0.01 (which was hard-coded), but it was incorrect (0.05) for the vanilla model (I forgot to hard-code it; now fixed). And if you divide the learning rate by the number of GPUs, e.g., 8, things get reversed: the second, wrong one actually becomes better (0.00625), and the first one (originally better on a single GPU) becomes worse (0.00125).