huanghoujing / EANet

EANet: Enhancing Alignment for Cross-Domain Person Re-identification

for triplet loss #35

Open ssbilakeri opened 3 years ago

ssbilakeri commented 3 years ago

Are Python 2.7 and PyTorch 1.0.0 not supported for triplet loss?

huanghoujing commented 3 years ago

Hi, Python 2.7 and PyTorch 1.0.0 are absolutely OK for running this code.

By the way, do you run the program in Linux? I ran it in Ubuntu.

ssbilakeri commented 3 years ago

Hello sir, I also ran the code on Ubuntu with Python 2.7 and PyTorch 1.0.0.

The code works fine with ID and PS loss, but when triplet loss is added it gives an error. Why is that?

ssbilakeri commented 3 years ago

I'm getting the error below when triplet loss is included. Please help me resolve this issue.

File "/home/padmashree/anaconda3/envs/myenv/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
File "/home/padmashree/anaconda3/envs/myenv/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
File "/home/padmashree/project_dir/EANet2/package/optim/eanet_trainer.py", line 135, in <module>
    trainer.train_phases()
File "/home/padmashree/project_dir/EANet2/package/optim/eanet_trainer.py", line 126, in train_phases
    self.train()
File "package/optim/reid_trainer.py", line 338, in train
    self.trainer.train_one_epoch(trial_run_steps=3 if cfg.trial_run else None)
File "package/optim/trainer.py", line 36, in train_one_epoch
    self.train_one_step(batch)
File "package/optim/trainer.py", line 24, in train_one_step
    pred = self.train_forward(batch)
File "/home/padmashree/project_dir/EANet2/package/optim/eanet_trainer.py", line 102, in train_forward
    loss += self.loss_funcs[loss_cfg.name](reid_batch, pred, step=self.trainer.current_step)['loss']
File "package/loss/triplet_loss.py", line 124, in __call__
    res3 = self.calculate(torch.stack(pred['feat_list']), batch['label'], hard_type=hard_type)
File "package/loss/triplet_loss.py", line 107, in calculate
    dist_mat = compute_dist(feat, feat, dist_type=cfg.dist_type)
File "package/eval/torch_distance.py", line 49, in compute_dist
    dist = euclidean_dist(array1, array2)
File "package/eval/torch_distance.py", line 25, in euclidean_dist
    xx = torch.pow(x, 2).sum(1, keepdim=True).expand(m, n)
RuntimeError: expand(torch.cuda.FloatTensor{[9, 1, 256]}, size=[9, 9]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
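For what it's worth, the RuntimeError says euclidean_dist received a 3-D tensor of shape [9, 1, 256] where it expects a 2-D [N, D] matrix. The sketch below only illustrates the shape mismatch in numpy (euclidean_dist is reimplemented here for illustration; the repo's fix was a part-index correction, not this exact reshape):

```python
import numpy as np

def euclidean_dist(x, y):
    """Pairwise Euclidean distance; assumes 2-D inputs of shape [m, d] and [n, d]."""
    xx = np.sum(x ** 2, axis=1, keepdims=True)       # [m, 1]
    yy = np.sum(y ** 2, axis=1, keepdims=True).T     # [1, n]
    sq = np.clip(xx + yy - 2.0 * x @ y.T, 0, None)   # broadcasts to [m, n]
    return np.sqrt(sq)

# Per the error, the stacked feature tensor is 3-D ([9, 1, 256]) while the
# distance function assumes 2-D; flattening the extra axis restores [N, D].
feat = np.random.rand(9, 1, 256)
feat_2d = feat.reshape(feat.shape[0], -1)    # [9, 256]
dist_mat = euclidean_dist(feat_2d, feat_2d)  # [9, 9]
```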

huanghoujing commented 3 years ago

Hi, thank you for your feedback.

To run triplet loss, we have to

  1. Use PK sampling for batch construction
  2. Increase training epochs
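For context, PK sampling builds each batch from P identities with K images each, so every batch is guaranteed to contain positive and negative pairs for the triplet loss. A toy sketch of such a sampler (the helper name and interface are hypothetical; the repo implements this in its own sampler class):

```python
import random
from collections import defaultdict

def pk_batch(labels, P=4, K=4):
    """Toy PK sampler: pick P identities, then K image indices per identity,
    so each batch contains both positive and negative pairs."""
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    pids = random.sample(list(by_id), P)  # P distinct identities
    batch = []
    for pid in pids:
        # choices samples with replacement, so identities with < K images still work
        batch.extend(random.choices(by_id[pid], k=K))
    return batch
```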

I have provided a setting file, paper_configs/PAP_S_PS_Triplet_Loss_Market1501.txt, and a script, script/exp/train_PAP_S_PS_Triplet_Loss_Market1501.sh, to train with both PS loss and triplet loss on Market1501.

Besides, I also fixed a mistake in the part index when feeding features to the triplet loss. You can find the details in this commit: https://github.com/huanghoujing/EANet/commit/e46d49528428fbf0c54b93e557e009012c9a34b4

Now, you can run the script by

bash script/exp/train_PAP_S_PS_Triplet_Loss_Market1501.sh

The result I obtained is

M -> M      [mAP:  86.0%], [cmc1:  95.0%], [cmc5:  98.0%], [cmc10:  98.8%]
M -> C      [mAP:  11.0%], [cmc1:  12.1%], [cmc5:  24.1%], [cmc10:  31.6%]
M -> D      [mAP:  29.2%], [cmc1:  46.7%], [cmc5:  63.2%], [cmc10:  69.5%]

I hope it helps.

ssbilakeri commented 3 years ago

Thank you so much for your informative response.

ssbilakeri commented 3 years ago

Hi sir, in your EANet paper did you train the model with triplet loss included? Kindly reply.

huanghoujing commented 3 years ago

Hi ssbilakeri, I did not use triplet loss in the paper.

ssbilakeri commented 3 years ago

Hello sir, I'm trying to reproduce your paper results with re-ranking, but unfortunately I'm not able to. Since I need your paper results with re-ranking applied to compare against my work, could you please run it for me? It would be a great help. Thank you.

huanghoujing commented 3 years ago

Hi, ssbilakeri, for which Table of the paper do you need the re-ranking score?

ssbilakeri commented 3 years ago

I need it for PAP_S_PS (where ID loss and segmentation loss are used). Kindly help me in this regard. I will be looking forward to your response. Thank you.

ssbilakeri commented 3 years ago

Hi sir, when I run your code with re-ranking I get the results below. Kindly suggest what the problem could be.

I have attached the code with this mail. Please help me. Thank you.

Loaded pickle file /home/padmashree/project_dir/dataset/market1501/im_path_to_kpt.pkl
Extract Feature: 100%| 106/106 [00:13<00:00, 7.75 batches/s]
Extract Feature: 100%| 498/498 [01:04<00:00, 7.75 batches/s]
=> Eval Statistics:
  dic.keys(): ['g_feat', 'q_feat', 'q_visible', 'q_label', 'q_cam', 'g_visible', 'g_label', 'g_cam']
  dic['q_feat'].shape: (3368, 2304)  dic['q_label'].shape: (3368,)  dic['q_cam'].shape: (3368,)
  dic['g_feat'].shape: (15913, 2304) dic['g_label'].shape: (15913,) dic['g_cam'].shape: (15913,)
M -> M [mAP: 1.5%], [cmc1: 7.3%], [cmc5: 14.5%], [cmc10: 19.3%]

Loaded pickle file /home/padmashree/project_dir/dataset/cuhk03_np_detected_jpg/im_path_to_kpt.pkl
Extract Feature: 100%| 44/44 [00:05<00:00, 7.79 batches/s]
Extract Feature: 100%| 167/167 [00:20<00:00, 8.10 batches/s]
=> Eval Statistics:
  dic.keys(): ['g_feat', 'q_feat', 'q_visible', 'q_label', 'q_cam', 'g_visible', 'g_label', 'g_cam']
  dic['q_feat'].shape: (1400, 2304)  dic['q_label'].shape: (1400,)  dic['q_cam'].shape: (1400,)
  dic['g_feat'].shape: (5332, 2304)  dic['g_label'].shape: (5332,)  dic['g_cam'].shape: (5332,)
M -> C [mAP: 0.2%], [cmc1: 0.1%], [cmc5: 0.5%], [cmc10: 1.7%]

Loaded pickle file /home/padmashree/project_dir/dataset/duke/im_path_to_kpt.pkl
Extract Feature: 100%| 70/70 [00:08<00:00, 7.90 batches/s]
Extract Feature: 100%| 552/552 [01:07<00:00, 8.15 batches/s]
=> Eval Statistics:
  dic.keys(): ['g_feat', 'q_feat', 'q_visible', 'q_label', 'q_cam', 'g_visible', 'g_label', 'g_cam']
  dic['q_feat'].shape: (2228, 2304)  dic['q_label'].shape: (2228,)  dic['q_cam'].shape: (2228,)
  dic['g_feat'].shape: (17661, 2304) dic['g_label'].shape: (17661,) dic['g_cam'].shape: (17661,)
M -> D [mAP: 0.3%], [cmc1: 1.2%], [cmc5: 2.8%], [cmc10: 4.3%]

huanghoujing commented 3 years ago

It seems that the trained model weights are not loaded.

ssbilakeri commented 3 years ago

Could you please check with your code? I hope you have the trained weights.

huanghoujing commented 3 years ago

Hi, ssbilakeri. I have tested the re-ranking results for the PAP_S_PS models (the script to run this is script/exp/test_PAP_S_PS_reranking.sh).

The original scores, as well as the re-ranking scores, are as follows.

                 mAP   Rank-1  Rank-5  Rank-10
M -> M           85.6  94.6    98.2    99.0
ReRank M -> M    93.5  95.7    97.5    98.3
M -> C           12.8  14.2    28.1    35.4
ReRank M -> C    19.4  17.6    28.1    35.9
M -> D           31.7  51.4    67.2    72.5
ReRank M -> D    47.6  57.6    67.9    73.4
C -> M           33.3  59.4    73.7    78.7
ReRank C -> M    47.3  64.0    72.0    76.1
C -> C           66.7  72.5    86.1    91.3
ReRank C -> C    80.8  80.1    86.9    92.2
C -> D           22.0  39.3    54.4    60.3
ReRank C -> D    36.1  47.7    57.5    61.8
D -> M           32.8  61.7    77.2    83.0
ReRank D -> M    48.0  65.6    74.1    78.8
D -> C            9.6  11.4    22.7    28.9
ReRank D -> C    15.4  14.4    22.1    28.7
D -> D           74.6  87.5    93.4    95.3
ReRank D -> D    85.5  89.7    93.6    95.2

I have updated the code so that it can test with re-ranking now. Please refer to this commit: https://github.com/huanghoujing/EANet/commit/a38f12477e3edd625699f5a1beae92181e2c6b62. You can run it yourself by setting cfg.eval.rerank to True in package/config/default.py.
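For readers unfamiliar with re-ranking: the usual idea (following k-reciprocal re-ranking) is to blend the original distance with a Jaccard distance computed over k-reciprocal neighbor sets, which sharply improves mAP. A deliberately simplified numpy sketch of that idea, not the repo's actual implementation:

```python
import numpy as np

def rerank(dist, k=5, lam=0.3):
    """Simplified k-reciprocal re-ranking over an all-pairs distance matrix.
    Final distance = lam * original + (1 - lam) * Jaccard distance between
    k-reciprocal neighbor sets. Toy sketch only."""
    n = dist.shape[0]
    knn = np.argsort(dist, axis=1)[:, :k]  # k nearest neighbors (includes self)
    knn_sets = [set(row) for row in knn]
    # i and j are k-reciprocal neighbors if each appears in the other's k-NN list
    recip = [set(j for j in knn[i] if i in knn_sets[j]) for i in range(n)]
    jac = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            union = len(recip[i] | recip[j])
            if union:
                jac[i, j] = 1.0 - len(recip[i] & recip[j]) / union
    return lam * dist + (1 - lam) * jac
```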

ssbilakeri commented 3 years ago

Thank you very much for your valuable response. Thanks a lot.

ssbilakeri commented 3 years ago

Hello sir, when I train your model with only segmentation loss it gives very low accuracy. Have you run it with only segmentation loss? If so, what accuracy did you get?

For me the result looks like this: Epoch 60 M->M: 6.8 ( 1.4), M->C: 0.1 ( 0.2), M->D: 5.7 ( 1.2)

Kindly respond.

huanghoujing commented 3 years ago

Hi, ssbilakeri. That's normal, because segmentation alone does not learn anything about person re-identification. You have to train with at least one kind of re-identification loss, i.e. ID loss or triplet loss.
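For reference, the triplet loss discussed in this thread is typically the batch-hard variant: for each anchor, take the farthest same-identity sample and the closest different-identity sample. A toy numpy sketch on a precomputed distance matrix (the margin value here is only illustrative):

```python
import numpy as np

def batch_hard_triplet(dist, labels, margin=0.3):
    """Batch-hard triplet loss on a precomputed [N, N] distance matrix:
    for each anchor, hardest positive = farthest same-ID sample,
    hardest negative = closest different-ID sample. Toy sketch."""
    labels = np.asarray(labels)
    pos = labels[:, None] == labels[None, :]   # same-identity mask (incl. self)
    hardest_pos = np.where(pos, dist, -np.inf).max(axis=1)
    hardest_neg = np.where(~pos, dist, np.inf).min(axis=1)
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()
```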

ssbilakeri commented 3 years ago

I didn't know that. Thank you for the information.

ssbilakeri commented 3 years ago

Hi sir, how did you partition the feature map using keypoint delimitation? Which part of the code does that? Help me understand. Thank you.

huanghoujing commented 3 years ago

Hi, package/data/kpt_to_pap_mask.py is the code that partitions the body into regions using keypoints.
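As a rough illustration of what such a partition does (the part groupings, coordinate normalization, and function name below are assumptions for the sketch, not the repo's exact scheme): keypoints with confidence above a threshold are grouped into body parts, and each part's vertical extent selects rows of the feature map.

```python
import numpy as np

# Hypothetical COCO-style grouping: head = keypoints 0-4, torso = 5-10, legs = 11-16
PARTS = ((0, 5), (5, 11), (11, 17))

def kpt_to_part_masks(kpts, feat_h, parts=PARTS):
    """Toy sketch: build a boolean row mask over a feature map of height feat_h
    for each body part. kpts is [17, 3] = (x, normalized y in [0, 1], confidence);
    a part spans the rows between its visible keypoints' min and max y."""
    masks = np.zeros((len(parts), feat_h), dtype=bool)
    ys, conf = kpts[:, 1], kpts[:, 2]
    for p, (lo, hi) in enumerate(parts):
        vis = conf[lo:hi] > 0          # keep only detected keypoints
        if not vis.any():
            continue                   # part invisible -> all-False mask
        y0 = int(ys[lo:hi][vis].min() * (feat_h - 1))
        y1 = int(ys[lo:hi][vis].max() * (feat_h - 1)) + 1
        masks[p, y0:y1] = True
    return masks
```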