yuanyuanli85 / Fast_Human_Pose_Estimation_Pytorch

Pytorch Code for CVPR2019 paper "Fast Human Pose Estimation" https://arxiv.org/abs/1811.05419
Apache License 2.0

Training other student models (s4b2/s4b1) drops in accuracy after just a few epochs #24

Open zengdiqing1994 opened 3 years ago

zengdiqing1994 commented 3 years ago

Hi, I am reproducing your work and thinking about changing the target student model, e.g. to stack=4, block=2 or stack=4, block=1, and I manually set num_features and inplanes:

python example/mpii_kd.py -a hg --stacks 4 --blocks 2 --features 32 --inplanes 8 --checkpoint checkpoint/hg_s2b1_f64in8_diqizeng_mobile_fpd --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/mpii_hg_s8_b1/model_best.pth.tar

This s4b2 KD training drops in val acc at the 8th epoch, from 56% to 17%,

and with: python example/mpii_kd.py -a hg --stacks 4 --blocks 1 --features 128 --inplanes 32 --checkpoint checkpoint/hg_s2b1_f64in8_diqizeng_mobile_fpd --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/mpii_hg_s8_b1/model_best.pth.tar

the val acc drops at the 4th epoch, from 42% to 1.6%.

But with: python example/mpii_kd.py -a hg --stacks 2 --blocks 1 --features 64 --inplanes 8 --checkpoint checkpoint/hg_s2b1_f64in8_diqizeng_mobile_fpd --mobile True --teacher_stack 8 --teacher_checkpoint checkpoint/mpii_hg_s8_b1/model_best.pth.tar training looks normal.

Does the model structure affect this a lot? Do num_features and inplanes need to match a suitable stack/block configuration?

yuanyuanli85 commented 3 years ago

Such an accuracy drop is kind of weird; I have not run into this before.

  1. Have you checked the training loss and accuracy? If things go wrong, the training loss becomes very large.
  2. It may be related to the BN issue in PyTorch. Have you followed the README step to "Disable cudnn for batchnorm layer to solve bug in pytorch0.4.0"? (A patch sketch follows below.)
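
One way to apply that workaround without hand-editing the installed torch/nn/functional.py is a small monkey patch run once before training starts; this is only a sketch of the idea, not the README's exact recipe:

```python
# Force BatchNorm off the cuDNN path (workaround for the PyTorch 0.4.0 bug)
# by wrapping F.batch_norm; cuDNN stays enabled for every other op.
import torch
import torch.nn.functional as F

_orig_batch_norm = F.batch_norm

def _batch_norm_no_cudnn(*args, **kwargs):
    prev = torch.backends.cudnn.enabled
    torch.backends.cudnn.enabled = False
    try:
        return _orig_batch_norm(*args, **kwargs)
    finally:
        torch.backends.cudnn.enabled = prev

F.batch_norm = _batch_norm_no_cudnn
```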
zengdiqing1994 commented 3 years ago

Thanks for your reply!

  1. Actually, for s4b1 the validation loss becomes very large while the training loss still looks normal; the validation loss keeps growing over the next few epochs, and the validation accuracy has already dropped to almost zero. See below:

Epoch     LR        Train Loss  Val Loss      Train Acc  Val Acc
1.000000  0.000250  0.002626    0.009073      0.161942   0.291490
2.000000  0.000250  0.002116    0.009197      0.300657   0.397661
3.000000  0.000250  0.001956    0.004193      0.399708   0.420880
4.000000  0.000250  0.001839    0.067800      0.472296   0.016799
5.000000  0.000250  0.001757    9.869279      0.520801   0.000000
6.000000  0.000250  0.001726    678.551853    0.542042   0.007716
7.000000  0.000250  0.001753    3865.202707   0.544255   0.000042
8.000000  0.000250  0.001758    37764.363059  0.538761   0.000000
9.000000  0.000250  0.001917    933.443237    0.410885   0.000068

  2. I forgot to do the step you mentioned; I will try disabling cudnn for the BN layer, thanks!
zengdiqing1994 commented 3 years ago

One more question: what do num_features and inplanes really influence? They look like the channel widths of the first several layers, but do they affect accuracy or only the model size? Thanks!
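
For context, my rough understanding of where these two flags go, written as a standalone sketch in the style of the standard pytorch-pose HourglassNet stem (illustrative only; the real code in models/hourglass.py may differ):

```python
# inplanes sets the stem width, features (num_feats) sets the width the
# hourglass stacks run at; both scale the parameter count and capacity.
import torch
import torch.nn as nn

inplanes, num_feats = 64, 128        # defaults for the full-size model
x = torch.randn(1, 3, 256, 256)      # MPII-style input crop

stem = nn.Sequential(
    nn.Conv2d(3, inplanes, kernel_size=7, stride=2, padding=3),  # 3 -> inplanes
    nn.BatchNorm2d(inplanes),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2, stride=2),
    nn.Conv2d(inplanes, num_feats * 2, kernel_size=1),            # widen toward hourglass width
)
print(stem(x).shape)  # (1, num_feats * 2, 64, 64) feeds the hourglass stacks
```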

yuanyuanli85 commented 3 years ago

I don't think the network structure has such a big impact. The training acc looks good while the validation acc goes to zero, which points to the BN layer, since it behaves differently in train and eval mode. I suspect it has something to do with the BN issue I mentioned.
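
For what it's worth, a tiny standalone illustration of that train/eval difference (a toy example, not code from this repo):

```python
# Train mode normalizes with batch statistics; eval mode uses the stored
# running_mean/running_var, so corrupted running stats only blow up eval.
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(16)
x = torch.randn(8, 16, 64, 64)

bn.train()
y_train = bn(x)                 # batch statistics -> well behaved

bn.running_var.fill_(1e-8)      # simulate corrupted running statistics
bn.eval()
y_eval = bn(x)                  # running statistics -> outputs explode

print(y_train.abs().mean().item(), y_eval.abs().mean().item())
```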

zengdiqing1994 commented 3 years ago

Thanks for your suggestions! It was indeed the BN layer eval bug in pytorch0.4.0. After I disabled cudnn for the batchnorm layer, the val loss and val acc look normal.

By the way, if I use the s8b1 teacher to train an s4b2 student, and then use that s4b2 as the teacher to train s1b2, do you think s1b2 would reach better accuracy than training s1b2 directly with the s8b1 teacher? That is, step-wise KD. Thanks!

yuanyuanli85 commented 3 years ago

Your idea looks interesting, but I am not sure whether it would perform better. KD is a kind of regularization. Two things you could try:

  1. Ensemble (s8b1, s4b2, ...) to teach s1b2 (see the loss sketch below).
  2. Iterative KD: s8b1 -> s4b2 -> s1b2.
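
A minimal sketch of how the ensemble option could plug into the paper's MSE-style distillation loss (the function name, the alpha weighting, and the heatmap averaging are illustrative assumptions, not the exact code in example/mpii_kd.py):

```python
# Illustrative ensemble-teacher KD loss: average the teachers' heatmaps and
# mix the distillation term with the ground-truth term, FPD style.
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_out, teacher_outs, target, alpha=0.5):
    """student_out, target: (N, K, H, W) heatmaps; teacher_outs: list of teacher heatmaps."""
    teacher_avg = torch.stack(teacher_outs, dim=0).mean(dim=0)  # ensemble teacher
    kd_loss = F.mse_loss(student_out, teacher_avg)              # distillation term
    gt_loss = F.mse_loss(student_out, target)                   # supervised term
    return alpha * kd_loss + (1.0 - alpha) * gt_loss
```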

Good luck!