huanghoujing / AlignedReID-Re-Production-Pytorch

Reproduce AlignedReID: Surpassing Human-Level Performance in Person Re-Identification, using Pytorch.

About the performance gain #43

Open 92ypli opened 6 years ago

92ypli commented 6 years ago

Hi, a quick question: after adding the local branch for training, is the gain really only about one point compared with training the original ResNet-50 with triplet loss alone?

huanghoujing commented 6 years ago

That is what my experiments showed. If you get better results, feel free to let me know.

ZHHJemotion commented 6 years ago

@huanghoujing Hi houjing! When I trained the model with the local branch (local loss and local_dist_own_hard_sample=True) on three datasets, I did not see the ~1 point improvement; the results were even worse. I am not sure why. Could you please share the parameters you used? Thanks a lot!

huanghoujing commented 6 years ago

@ZHHJemotion Do you mean GL + TWGD 87.05% vs GL + LL + TWGALD 88.18% in the Train on Market1501 sheet of AlignedReID-Scores.xlsx? If you refer to this, then it is simply:

python script/experiment/train.py \
-d '(0,)' \
-r 1 \
--dataset market1501 \
--ids_per_batch 32 \
--ims_per_id 4 \
--normalize_feature false \
-gm 0.3 \
-glw 1 \
-llw 0 \
-idlw 0 \
--base_lr 2e-4 \
--lr_decay_type exp \
--exp_decay_at_epoch 151 \
--total_epochs 300

vs.

python script/experiment/train.py \
-d '(0,)' \
-r 1 \
--dataset market1501 \
--ids_per_batch 32 \
--ims_per_id 4 \
--normalize_feature false \
--local_dist_own_hard_sample true \
-gm 0.3 \
-glw 1 \
-llw 1 \
-idlw 0 \
--base_lr 2e-4 \
--lr_decay_type exp \
--exp_decay_at_epoch 151 \
--total_epochs 300
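The only flags that differ between the two runs are -llw (0 vs. 1) and --local_dist_own_hard_sample; both share the triplet margin -gm 0.3. As a rough illustrative sketch (not the repo's exact code), the hinge that the margin flag controls behaves like:

```python
def triplet_hinge(d_ap, d_an, margin=0.3):
    # Batch-hard triplet term: penalize when the hardest-positive distance
    # d_ap is not at least `margin` smaller than the hardest-negative
    # distance d_an. `margin` corresponds to the -gm flag above.
    return max(0.0, margin + d_ap - d_an)
```

With -llw 1, a second hinge of the same form is computed on local distances and added to the objective; with -llw 0 the local branch contributes nothing.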

ZHHJemotion commented 6 years ago

@huanghoujing Yes, that is it! And one last question: with mutual learning, does using local distance still improve results by ~1 point over not using it? Does the "~1 point improvement" also hold for CUHK03 and Duke? In my experiments on CUHK03 and Duke I did not get a ~1 point improvement; adding local distance performed about the same as not adding it. Thanks!

huanghoujing commented 6 years ago

@ZHHJemotion In my reported scores, when the mutual loss is used, training with and without local distance makes little difference.

ghost commented 6 years ago

For better performance, I found another implementation:

Alignedreid++: Dynamically Matching Local Information for Person Re-Identification. Code

Ken5YX commented 6 years ago

Hi houjing! Why does ldm_loss only take effect when local_dist_own_hard_sample is true? The paper says the network is trained with both the global and local losses, but only global features are used at the inference stage.

vincentman commented 5 years ago

GL + LL + TWGALD:

python script/experiment/train.py \
-d '(0,)' \
-r 1 \
--dataset market1501 \
--ids_per_batch 32 \
--ims_per_id 4 \
--normalize_feature false \
--local_dist_own_hard_sample true \
-gm 0.3 \
-glw 1 \
-llw 1 \
-idlw 0 \
--base_lr 2e-4 \
--lr_decay_type exp \
--exp_decay_at_epoch 151 \
--total_epochs 300

Hi @huanghoujing, if I want to train with both the global and local distance losses, how should I set glw and llw? In the script above, what does "glw=1 and llw=1" mean? Why not make glw + llw = 1, for example glw=0.5 and llw=0.5?
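As a rough sketch of how such weight flags typically combine the terms (illustrative only; the names mirror the CLI flags, not the repo's exact code):

```python
def total_loss(g_loss, l_loss, id_loss, glw=1.0, llw=1.0, idlw=0.0):
    # The weights are independent scale factors, not a convex combination,
    # so glw + llw need not sum to 1. glw=llw=1 weights the global and
    # local triplet terms equally while keeping each at its natural scale;
    # glw=0.5, llw=0.5 would just halve both gradients, which the learning
    # rate could absorb anyway.
    return glw * g_loss + llw * l_loss + idlw * id_loss
```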