xingyizhou / pytorch-pose-hg-3d

PyTorch implementation for 3D human pose estimation
GNU General Public License v3.0

Acc drops significantly during the last epoch of stage1 #16

Closed FANG-Xiaolin closed 5 years ago

FANG-Xiaolin commented 6 years ago

Hi Xingyi, after training the 2D hourglass component for 50+ epochs, the accuracy is approximately 83%, but after the 60th epoch the accuracy suddenly dropped to 43%.

Here's the log. [screenshot of training log]

xingyizhou commented 6 years ago

Hi, as far as I know, it should be caused by a PyTorch internal bug in BN. You can comment out model.eval() in testing to see if the validation acc gets better (but it still won't match the desired performance). The bug is not reliably reproducible, and re-training the network once more (better on another machine) may give different results. Or you can downgrade your PyTorch version to 0.1.12 or below, a version where I haven't met or heard about this bug (but still not guaranteed). Please let me know if the above solutions help. Thanks!
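
(For reference, a minimal sketch of the "comment out model.eval()" diagnostic described above; the function and variable names below are hypothetical, not this repo's actual test loop.)

```python
import torch

def validate(model, val_loader, criterion, device="cuda"):
    # model.eval()  # intentionally commented out as a diagnostic: BatchNorm then
    #               # keeps using per-batch statistics instead of its running stats
    total_loss, n = 0.0, 0
    with torch.no_grad():
        for inputs, targets in val_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs)
            total_loss += criterion(preds, targets).item() * inputs.size(0)
            n += inputs.size(0)
    return total_loss / n
```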

FANG-Xiaolin commented 6 years ago

I tried another 3 times. The train acc is approximately 0.87 during the last epoch (the 60th), but the validation acc changes every time and is always lower than 0.50. The validation acc is around 0.80 at the 55th epoch, so it seems there is a sudden drop during the last epoch; I also notice that the training loss gets slightly higher during the last epoch.

xingyizhou commented 6 years ago

Hi, thanks for reporting the problem. However, I don't have other solutions yet and will keep looking into it. It might not be a bug in this code, since an isolated implementation of HourglassNet (I am not sure whether the bug comes from the network architecture) also has this problem (https://github.com/bearpaw/pytorch-pose/issues/33). People there suggest using learning rate 1e-4; you can give it a try to see whether the bug still exists.
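
(For illustration only: the suggestion above amounts to constructing the optimizer with a lower learning rate, roughly as in the sketch below. The optimizer type and how it is built in this repo may differ.)

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for the hourglass network in this sketch
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)  # lowered learning rate
```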

FANG-Xiaolin commented 6 years ago

Hi, thanks for your advice. Yes, it works with LR 1e-4; the val acc is 0.80+ this way.

xingyizhou commented 6 years ago

Hi, I have investigated this problem (on another project; I cannot reproduce the bug on this one). It seems to be caused by very large intermediate features (e.g. > 10000) before batch normalization. When train() mode is on, the feature is normalized by its own batch statistics, so training is fine. But when eval() mode is on, a slight difference between the intermediate feature and the BN mean/std accumulated during training results in large offsets in the output. I don't know the root cause of the problem, but the explanation looks mathematically reasonable. However, downgrading the PyTorch version to 0.1.12 eliminates the problem. Please notify me if you have any other observations on this bug. Thanks!
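
(Not code from this repo — just a minimal sketch of the effect described above: a BatchNorm layer fed very large activations behaves very differently in train() vs eval() mode once the input drifts slightly from the stored running statistics.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(1)

# "Train" on very large activations (~10000) so running_mean/running_var settle there.
bn.train()
for _ in range(100):
    bn(10000 + 50 * torch.randn(32, 1))

# In eval() mode, a small *relative* drift of the input (10000 -> 10200) is normalized
# against the stored running stats, producing a large offset in the output.
bn.eval()
out = bn(10200 + 50 * torch.randn(32, 1))
print(out.mean().item())  # roughly 4 standard deviations away from 0
```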

FANG-Xiaolin commented 6 years ago

Hi, Yes I think it is reasonable. Sure I will notify you if I observe something new. Thanks for your reply!

ssnl commented 6 years ago

IIRC, your repo sets the batch size to 1. If that is the case, it's not really a PyTorch bug; running stats with batch size = 1 are unstable in themselves.

xingyizhou commented 6 years ago

Thanks for the suggestion! The training batch size is 6 and the testing batch size is 1. When testing, eval() mode is on, so the batch size does not affect the computation.

ssnl commented 6 years ago

I see. 6 is still too small though. People usually use >128 with BN.

xingyizhou commented 6 years ago

Hi all, as pointed out by @leoxiaobin, turning off cuDNN for the BN layers resolves the issue. This can be done either by setting torch.backends.cudnn.enabled = False in main.py, which disables cuDNN for all layers and slows down training by about 1.5x, or by re-building PyTorch from source and disabling cuDNN in the BN layers (https://github.com/pytorch/pytorch/blob/e8536c08a16b533fe0a9d645dd4255513f9f4fdd/aten/src/ATen/native/Normalization.cpp#L46).
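
(For reference, a minimal sketch of the first, global workaround; where exactly the flag is set in main.py is an assumption.)

```python
import torch

# Disable cuDNN globally so BatchNorm falls back to PyTorch's native CUDA kernels.
# Note: this affects all layers (convolutions included) and slows training (~1.5x here).
torch.backends.cudnn.enabled = False
```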

FANG-Xiaolin commented 6 years ago

Got it. Thanks.

xingyizhou commented 6 years ago

Oh, I still want this issue to stay open while waiting for better solutions...

FANG-Xiaolin commented 6 years ago

Sure! My bad.

wangg12 commented 5 years ago

@SsnL @xingyizhou Does this bug still exist with pytorch >= 1.0?

ujsyehao commented 4 years ago

@wangg12 I am doing experiments to observe if the bug exists in pytorch >= 1.0.

qiangruoyu commented 4 years ago

@wangg12 I am doing experiments to observe if the bug exists in pytorch >= 1.0.

Did you encounter this error with a PyTorch version >= 1.0?

ygean commented 4 years ago

@ujsyehao Hi, what were the results of your experiments?

sisrfeng commented 4 years ago

Hi all, as pointed out by @leoxiaobin, turning off cuDNN for the BN layers resolves the issue. This can be done either by setting torch.backends.cudnn.enabled = False in main.py, which disables cuDNN for all layers and slows down training by about 1.5x, or by re-building PyTorch from source and disabling cuDNN in the BN layers (https://github.com/pytorch/pytorch/blob/e8536c08a16b533fe0a9d645dd4255513f9f4fdd/aten/src/ATen/native/Normalization.cpp#L46).

Regarding torch.backends.cudnn.enabled = False in main.py: should it be "torch.backends.cudnn.benchmark = False" instead?

If I have already followed this step, I don't need to modify main.py, right? The step: "For other pytorch versions, you can manually open torch/nn/functional.py, find the line with torch.batch_norm, and replace torch.backends.cudnn.enabled with False."
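
(For clarity, a hedged sketch of what that README step amounts to; the exact code in torch/nn/functional.py varies across PyTorch versions, so this is illustrative rather than the actual library source.)

```python
import torch

# In the PyTorch versions I've looked at, torch/nn/functional.py's batch_norm()
# ends with a call like the one below; the manual patch replaces the final
# torch.backends.cudnn.enabled argument with False so BN never goes through cuDNN.
def batch_norm_no_cudnn(input, running_mean, running_var, weight=None, bias=None,
                        training=False, momentum=0.1, eps=1e-5):
    return torch.batch_norm(
        input, weight, bias, running_mean, running_var,
        training, momentum, eps, False)  # was: torch.backends.cudnn.enabled
```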