layumi / Person_reID_baseline_pytorch

PyTorch ReID: a tiny, friendly, strong PyTorch implementation of a person re-ID / vehicle re-ID baseline. Tutorial: https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/tutorial

The performance of warm-up #153


wangyujie413 commented 5 years ago

Hi, I found that the warm-up strategy made the result worse.

Train without warm-up:

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 64 --stride 1

Result: Rank@1:0.887767 Rank@5:0.955463 Rank@10:0.971793 mAP:0.711009

Train with warm-up:

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --warm_epoch 5 --stride 1 --lr 0.02 --batchsize 64

Result: Rank@1:0.847090 Rank@5:0.942696 Rank@10:0.963183 mAP:0.647294

Is there anything wrong with how I ran it?
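(For context: below is a minimal sketch of what a linear learning-rate warm-up typically looks like in PyTorch. It illustrates the general technique behind --warm_epoch, not necessarily this repo's exact implementation; base_lr, warm_epochs, and total_epochs are placeholder values.)

from torch import nn, optim

base_lr = 0.05        # placeholder; corresponds to --lr in train.py
warm_epochs = 5       # placeholder; corresponds to --warm_epoch
total_epochs = 60

model = nn.Linear(10, 2)  # stand-in for the ReID backbone
optimizer = optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)

def warmup_factor(epoch):
    # Ramp the LR linearly from base_lr / warm_epochs up to base_lr
    # over the first warm_epochs epochs, then hold it at base_lr.
    if epoch < warm_epochs:
        return (epoch + 1) / warm_epochs
    return 1.0

scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_factor)

for epoch in range(total_epochs):
    # ... one training epoch over the ReID batches would run here ...
    scheduler.step()

The idea is that the first few epochs use a reduced learning rate so the randomly initialized classifier head does not destabilize the pretrained backbone.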

layumi commented 5 years ago

Did you set different learning rates for the two experiments? You may use the following command.

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --warm_epoch 5 --stride 1 --batchsize 64

wangyujie413 commented 5 years ago

I set the learning rate to 0.02 according to the following command in your tutorial:

python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name warm5_s1_b8_lr2_p0.5

And even when I set the same learning rate for both experiments, the result with warm-up is still not better than the result without it.

Train with warm-up:

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --warm_epoch 5 --stride 1 --batchsize 64

Result: Rank@1:0.873515 Rank@5:0.948040 Rank@10:0.968230 mAP:0.691394

layumi commented 5 years ago

PCB needs lr=0.02 and batchsize=64 due to its multiple part losses.

Yes, that is possible. However, I suggest you try more learning rates (such as 0.15). A large mini-batch may need a large learning rate (see https://arxiv.org/abs/1706.02677).
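(A quick illustration of that linear scaling rule; the helper below is hypothetical, but the arithmetic matches the numbers in this thread: if lr=0.02 suits batchsize 8, as in the tutorial command above, then batchsize 64 suggests a learning rate near the 0.15 mentioned here.)

def scaled_lr(base_lr, base_batch, batch):
    # Linear scaling rule (Goyal et al., 2017): keep lr / batch_size constant.
    return base_lr * batch / base_batch

print(scaled_lr(0.02, 8, 64))  # 0.16, close to the suggested 0.15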