Closed: frigidsun closed this issue 3 years ago
I am very sorry: the most critical step in the code was written incorrectly. As shown in Fig. 2 of the paper, the final features should be obtained by concatenating the features of the two branches. The previous (wrong) version was

v3_1 = v1_1 * v2_1

The correct version is

v3_1 = torch.cat([v1_1, v2_1], 1)

I have updated bfe.py; please replace the previous version with the latest bfe.py.
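For context, the difference between the two fusion ops can be sketched in plain Python. This is illustrative only, not the repo's actual bfe.py code; it emulates the feature vectors with flat lists instead of torch tensors, and the helper names are invented:

```python
# Illustrative sketch only -- not the repo's bfe.py.
# Emulates the two branch features v1_1 / v2_1 as flat Python lists.

def fuse_cat(v1, v2):
    # torch.cat([v1, v2], 1): concatenation along the feature dimension,
    # so the fused feature is twice as wide as each branch.
    return v1 + v2  # list concatenation stands in for torch.cat

def fuse_mul(v1, v2):
    # the old (wrong) version, v1 * v2: element-wise product,
    # which leaves the feature dimension unchanged.
    return [a * b for a, b in zip(v1, v2)]

v1_1 = [1.0, 2.0, 3.0]
v2_1 = [4.0, 5.0, 6.0]
print(len(fuse_cat(v1_1, v2_1)))  # 6: downstream layers must expect 2x channels
print(len(fuse_mul(v1_1, v2_1)))  # 3
```

This also explains why the fix is not a drop-in one-liner: concatenation doubles the fused feature width, so any classifier or reduction layer after it must be sized accordingly.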
Because many changes were made to torchreid/models/bfe.py and torchreid/engine/softmax.py, and I forgot to revert some of them, a few of those changes may be problematic. If there are other problems, I will fix them as soon as possible.
Thanks for your reply. I pulled your updated source code, confirmed the change in bfe.py, and ran the code again, but I got results similar to before: mAP/Rank-1 = 38.0%/51.9% on Occluded-Duke and 77.6%/90.4% on Market-1501.
To verify the correctness of the code, I re-ran the experiment on Occluded-Duke; the results are in train.log-2021-06-24-19-16-29.txt. As the log shows, performance reaches roughly the level reported in the paper around the 60th epoch. I have uploaded configs/bfe.yaml, torchreid/models/bfe.py, and torchreid/engine/softmax.py from the code I ran locally this time to GitHub, so there should be no problem now.
Hi, I followed configs/bfe.yaml to run your code, but I got results very different from those reported in your paper. The mAP/Rank-1 I get on Market-1501 is 77.8%/90.3%; on Occluded-Duke it is 38.2%/52.6%. I tried both CUDA 10 and CUDA 9, and there was no difference in the results. My log is below; could you please check whether any hyperparameter setting is wrong?
python scripts/main.py --config-file configs/bfe.yaml --gpu-devices "8,9" test.eval_freq "10" train.batch_size "64" data.sources "['occludedduke']" data.targets "['occludedduke']"
Currently using GPU 8,9
Show configuration
adam:
  beta1: 0.9
  beta2: 0.999
cuhk03:
  classic_split: False
  labeled_images: False
  use_metric_cuhk03: False
data:
  combineall: False
  height: 384
  norm_mean: [0.485, 0.456, 0.406]
  norm_std: [0.229, 0.224, 0.225]
  root: reid-data
  save_dir: log/bre
  sources: ['occludedduke']
  split_id: 0
  targets: ['occludedduke']
  transforms: ['random_flip']
  type: image
  width: 128
  workers: 16
loss:
  name: softmax
  softmax:
    label_smooth: True
  triplet:
    margin: 0.3
    weight_s: 1.0
    weight_t: 1.0
    weight_x: 1.0
market1501:
  use_500k_distractors: False
model:
  load_weights:
  name: bfe
  pretrained: True
  resume:
rmsprop:
  alpha: 0.99
sampler:
  num_instances: 4
  train_sampler: RandomSampler
sgd:
  dampening: 0.0
  momentum: 0.9
  nesterov: False
test:
  batch_size: 128
  dist_metric: euclidean
  eval_freq: 10
  evaluate: False
  normalize_feature: False
  ranks: [1, 3, 5, 10]
  rerank: False
  start_eval: 0
  visactmap: False
  visrank: False
  visrank_topk: 10
train:
  base_lr_mult: 0.1
  batch_size: 64
  fixbase_epoch: 5
  gamma: 0.1
  lr: 0.0003
  lr_scheduler: multi_step
  max_epoch: 90
  new_layers: ['classifier']
  open_layers: ['res_part1', 'res_part2', 'classifier1', 'classifier2', 'reduction1', 'reduction2', 'batch_drop', 'batch_crop', 'batch_erase', 'att1', 'att_module2']
  optim: adam
  print_freq: 200
  seed: 1
  staged_lr: False
  start_epoch: 0
  stepsize: [20, 40]
  weight_decay: 0.0005
use_gpu: True
video:
  pooling_method: avg
  sample_method: evenly
  seq_len: 15
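The train section above uses a multi_step scheduler (lr: 0.0003, stepsize: [20, 40], gamma: 0.1), which matches the Lr values printed later in the log: 0.000300, then 0.000030, then 0.000003. A minimal sketch of that decay rule (a hypothetical helper for illustration, not torchreid code; torchreid's epoch counter is 0-based, so the drop shows up at the displayed epoch 21 and 41):

```python
import math

def multi_step_lr(base_lr, gamma, milestones, epoch):
    # Multiply base_lr by gamma once for every milestone the epoch has reached.
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

for epoch in (0, 20, 40):
    print(epoch, multi_step_lr(0.0003, 0.1, [20, 40], epoch))
# epochs 0-19 -> 3e-4, epochs 20-39 -> 3e-5, epochs 40+ -> 3e-6
```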
Collecting env info ...
System info
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.16.20191017-gf6dac38
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0-9: GeForce RTX 2080 Ti
Nvidia driver version: 430.50
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
Pillow (8.2.0)

Versions of relevant libraries:
[pip3] numpy==1.21.0
[pip3] torch==1.4.0
[pip3] torchreid==1.0.6
[pip3] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py37h27cfd23_1
[conda] mkl_fft 1.3.0 py37h42c9631_2
[conda] mkl_random 1.2.1 py37ha9443f7_2
[conda] pytorch 1.4.0 py3.7_cuda10.0.130_cudnn7.6.3_0 pytorch
[conda] torchreid 1.0.6 dev_0
[conda] torchvision 0.5.0 py37_cu100 pytorch
Building train transforms ...
normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
=> Loading train (source) dataset
=> Loaded Occluded_Duke
subset   | # ids | # images | # cameras
---------+-------+----------+----------
train    |   702 |    15618 |         8
query    |   519 |     2210 |         8
gallery  |  1110 |    17661 |         8
=> Loading test (target) dataset
=> Loaded Occluded_Duke
subset   | # ids | # images | # cameras
---------+-------+----------+----------
train    |   702 |    15618 |         8
query    |   519 |     2210 |         8
gallery  |  1110 |    17661 |         8
**** Summary ****
train : ['occludedduke']
train datasets : 1
train ids : 702
train images : 15618
train cameras : 8
test : ['occludedduke']
Building model: bfe
Building softmax-engine for image-reid
=> Start training
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.2554 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 21.4% | Rank-1 30.9% | Rank-3 40.7% | Rank-5 44.5% | Rank-10 51.3%
Checkpoint saved to "log/bre/model.pth.tar-10"
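The "Computing distance matrix with metric=euclidean" step pairs every query feature against every gallery feature. A pure-Python sketch of that computation (illustrative only; the real code operates on the 2210-by-512 and 17661-by-512 torch matrices, and the function name here is invented):

```python
import math

def euclidean_distmat(qf, gf):
    # dist[i][j] = L2 distance between query feature i and gallery feature j.
    return [[math.sqrt(sum((a - b) ** 2 for a, b in zip(q, g))) for g in gf]
            for q in qf]

qf = [[0.0, 0.0], [1.0, 1.0]]               # 2 query features, dim 2
gf = [[3.0, 4.0], [1.0, 1.0], [0.0, 0.0]]   # 3 gallery features, dim 2
d = euclidean_distmat(qf, gf)
print(d[0][0])  # 5.0  (3-4-5 triangle)
print(d[1][1])  # 0.0  (identical features)
```

Ranking the gallery by each row of this matrix is what the CMC and mAP numbers are then computed from.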
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.2136 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 21.4% | Rank-1 30.9% | Rank-3 40.7% | Rank-5 44.5% | Rank-10 51.3%
Checkpoint saved to "log/bre/model.pth.tar-10"
Epoch: [11/90][200/244] Loss 1.2469 (1.2296) Loss_x1 1.2284 (1.2404) Loss_x2 1.2960 (1.2619) Loss_x3 1.2054 (1.1734) Loss_mask 0.0022 (0.0025) Loss_max 0.0022 (0.0025) Loss_weight 0.3382 0.3456 0.3162 Lr 0.000300 eta 3:20:01
Epoch: [12/90][200/244] Loss 1.2498 (1.2242) Loss_x1 1.2543 (1.2312) Loss_x2 1.2811 (1.2579) Loss_x3 1.1895 (1.1722) Loss_mask 0.0067 (0.0020) Loss_max 0.0067 (0.0020) Loss_weight 0.3367 0.3459 0.3174 Lr 0.000300 eta 2:48:29
Epoch: [13/90][200/244] Loss 1.2389 (1.2133) Loss_x1 1.2131 (1.2235) Loss_x2 1.2969 (1.2429) Loss_x3 1.1949 (1.1643) Loss_mask 0.0019 (0.0015) Loss_max 0.0019 (0.0015) Loss_weight 0.3376 0.3442 0.3182 Lr 0.000300 eta 2:50:43
Epoch: [14/90][200/244] Loss 1.2891 (1.2044) Loss_x1 1.2461 (1.2099) Loss_x2 1.3697 (1.2358) Loss_x3 1.2366 (1.1595) Loss_mask 0.0012 (0.0012) Loss_max 0.0012 (0.0012) Loss_weight 0.3359 0.3447 0.3194 Lr 0.000300 eta 2:44:27
Epoch: [15/90][200/244] Loss 1.2265 (1.1905) Loss_x1 1.2194 (1.1939) Loss_x2 1.2731 (1.2223) Loss_x3 1.1794 (1.1484) Loss_mask 0.0010 (0.0010) Loss_max 0.0010 (0.0010) Loss_weight 0.3351 0.3448 0.3202 Lr 0.000300 eta 2:52:52
Epoch: [16/90][200/244] Loss 1.2180 (1.1935) Loss_x1 1.2035 (1.2035) Loss_x2 1.2618 (1.2178) Loss_x3 1.1824 (1.1539) Loss_mask 0.0010 (0.0007) Loss_max 0.0010 (0.0007) Loss_weight 0.3372 0.3420 0.3208 Lr 0.000300 eta 2:51:21
Epoch: [17/90][200/244] Loss 1.1899 (1.1878) Loss_x1 1.2122 (1.1966) Loss_x2 1.2083 (1.2119) Loss_x3 1.1449 (1.1500) Loss_mask 0.0005 (0.0006) Loss_max 0.0005 (0.0006) Loss_weight 0.3367 0.3419 0.3213 Lr 0.000300 eta 2:47:29
Epoch: [18/90][200/244] Loss 1.1757 (1.1766) Loss_x1 1.1963 (1.1810) Loss_x2 1.1777 (1.2047) Loss_x3 1.1511 (1.1397) Loss_mask 0.0003 (0.0005) Loss_max 0.0003 (0.0005) Loss_weight 0.3352 0.3432 0.3216 Lr 0.000300 eta 2:46:08
Epoch: [19/90][200/244] Loss 1.2165 (1.1694) Loss_x1 1.2233 (1.1755) Loss_x2 1.2335 (1.1934) Loss_x3 1.1905 (1.1355) Loss_mask 0.0004 (0.0004) Loss_max 0.0004 (0.0004) Loss_weight 0.3357 0.3418 0.3225 Lr 0.000300 eta 2:33:19
Epoch: [20/90][200/244] Loss 1.2215 (1.1623) Loss_x1 1.2502 (1.1693) Loss_x2 1.2460 (1.1850) Loss_x3 1.1627 (1.1295) Loss_mask 0.0003 (0.0003) Loss_max 0.0003 (0.0003) Loss_weight 0.3359 0.3413 0.3228 Lr 0.000300 eta 2:33:04
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0893 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 24.0% | Rank-1 34.5% | Rank-3 45.4% | Rank-5 50.8% | Rank-10 56.7%
Checkpoint saved to "log/bre/model.pth.tar-20"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0951 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 24.0% | Rank-1 34.5% | Rank-3 45.4% | Rank-5 50.8% | Rank-10 56.7%
Checkpoint saved to "log/bre/model.pth.tar-20"
Epoch: [21/90][200/244] Loss 1.0952 (1.1118) Loss_x1 1.0876 (1.1090) Loss_x2 1.1028 (1.1303) Loss_x3 1.0944 (1.0943) Loss_mask 0.0003 (0.0003) Loss_max 0.0003 (0.0003) Loss_weight 0.3326 0.3397 0.3277 Lr 0.000030 eta 2:44:16
Epoch: [22/90][200/244] Loss 1.0920 (1.0795) Loss_x1 1.0961 (1.0773) Loss_x2 1.1055 (1.0980) Loss_x3 1.0730 (1.0616) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3327 0.3397 0.3275 Lr 0.000030 eta 2:41:48
Epoch: [23/90][200/244] Loss 1.0591 (1.0696) Loss_x1 1.0641 (1.0689) Loss_x2 1.0673 (1.0871) Loss_x3 1.0450 (1.0514) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3332 0.3393 0.3274 Lr 0.000030 eta 2:35:01
Epoch: [24/90][200/244] Loss 1.0642 (1.0636) Loss_x1 1.0682 (1.0628) Loss_x2 1.0703 (1.0815) Loss_x3 1.0537 (1.0450) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3332 0.3395 0.3273 Lr 0.000030 eta 2:34:49
Epoch: [25/90][200/244] Loss 1.0537 (1.0599) Loss_x1 1.0544 (1.0603) Loss_x2 1.0647 (1.0759) Loss_x3 1.0413 (1.0422) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3388 0.3276 Lr 0.000030 eta 2:30:33
Epoch: [26/90][200/244] Loss 1.0515 (1.0572) Loss_x1 1.0565 (1.0576) Loss_x2 1.0688 (1.0738) Loss_x3 1.0275 (1.0389) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3390 0.3274 Lr 0.000030 eta 2:28:00
Epoch: [27/90][200/244] Loss 1.0771 (1.0562) Loss_x1 1.0696 (1.0569) Loss_x2 1.1089 (1.0728) Loss_x3 1.0498 (1.0375) Loss_mask 0.0004 (0.0002) Loss_max 0.0004 (0.0002) Loss_weight 0.3337 0.3390 0.3273 Lr 0.000030 eta 2:26:26
Epoch: [28/90][200/244] Loss 1.0497 (1.0545) Loss_x1 1.0602 (1.0562) Loss_x2 1.0525 (1.0692) Loss_x3 1.0353 (1.0369) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3340 0.3384 0.3276 Lr 0.000030 eta 2:10:37
Epoch: [29/90][200/244] Loss 1.0461 (1.0544) Loss_x1 1.0490 (1.0553) Loss_x2 1.0616 (1.0703) Loss_x3 1.0260 (1.0362) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3338 0.3388 0.3274 Lr 0.000030 eta 2:12:36
Epoch: [30/90][200/244] Loss 1.0515 (1.0536) Loss_x1 1.0546 (1.0549) Loss_x2 1.0604 (1.0691) Loss_x3 1.0389 (1.0355) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3339 0.3386 0.3275 Lr 0.000030 eta 2:12:24
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.1306 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 36.8% | Rank-1 48.4% | Rank-3 60.2% | Rank-5 65.1% | Rank-10 70.5%
Checkpoint saved to "log/bre/model.pth.tar-30"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0915 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 36.8% | Rank-1 48.4% | Rank-3 60.2% | Rank-5 65.1% | Rank-10 70.5%
Checkpoint saved to "log/bre/model.pth.tar-30"
Epoch: [31/90][200/244] Loss 1.0707 (1.0539) Loss_x1 1.0704 (1.0550) Loss_x2 1.0903 (1.0689) Loss_x3 1.0498 (1.0366) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3338 0.3385 0.3277 Lr 0.000030 eta 2:16:56
Epoch: [32/90][200/244] Loss 1.0707 (1.0529) Loss_x1 1.0805 (1.0542) Loss_x2 1.0761 (1.0671) Loss_x3 1.0546 (1.0360) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3339 0.3382 0.3279 Lr 0.000030 eta 2:19:18
Epoch: [33/90][200/244] Loss 1.0493 (1.0535) Loss_x1 1.0507 (1.0548) Loss_x2 1.0683 (1.0671) Loss_x3 1.0274 (1.0376) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3338 0.3380 0.3282 Lr 0.000030 eta 2:21:10
Epoch: [34/90][200/244] Loss 1.0622 (1.0544) Loss_x1 1.0613 (1.0552) Loss_x2 1.0829 (1.0678) Loss_x3 1.0410 (1.0390) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3379 0.3283 Lr 0.000030 eta 2:21:17
Epoch: [35/90][200/244] Loss 1.0441 (1.0516) Loss_x1 1.0410 (1.0525) Loss_x2 1.0630 (1.0648) Loss_x3 1.0268 (1.0363) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3337 0.3379 0.3284 Lr 0.000030 eta 2:17:18
Epoch: [36/90][200/244] Loss 1.0510 (1.0501) Loss_x1 1.0540 (1.0516) Loss_x2 1.0575 (1.0628) Loss_x3 1.0403 (1.0349) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3339 0.3377 0.3284 Lr 0.000030 eta 2:16:21
Epoch: [37/90][200/244] Loss 1.0592 (1.0521) Loss_x1 1.0561 (1.0524) Loss_x2 1.0834 (1.0651) Loss_x3 1.0360 (1.0376) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3335 0.3378 0.3286 Lr 0.000030 eta 2:13:39
Epoch: [38/90][200/244] Loss 1.0473 (1.0504) Loss_x1 1.0549 (1.0512) Loss_x2 1.0545 (1.0621) Loss_x3 1.0320 (1.0369) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3337 0.3374 0.3289 Lr 0.000030 eta 1:54:03
Epoch: [39/90][200/244] Loss 1.0472 (1.0495) Loss_x1 1.0475 (1.0499) Loss_x2 1.0571 (1.0615) Loss_x3 1.0361 (1.0360) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3375 0.3290 Lr 0.000030 eta 1:52:51
Epoch: [40/90][200/244] Loss 1.0860 (1.0488) Loss_x1 1.0728 (1.0492) Loss_x2 1.1098 (1.0597) Loss_x3 1.0732 (1.0364) Loss_mask 0.0004 (0.0002) Loss_max 0.0004 (0.0002) Loss_weight 0.3336 0.3371 0.3293 Lr 0.000030 eta 1:45:22
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.1081 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 37.2% | Rank-1 50.0% | Rank-3 62.0% | Rank-5 66.6% | Rank-10 71.6%
Checkpoint saved to "log/bre/model.pth.tar-40"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0943 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 37.2% | Rank-1 50.0% | Rank-3 62.0% | Rank-5 66.6% | Rank-10 71.6%
Checkpoint saved to "log/bre/model.pth.tar-40"
Epoch: [41/90][200/244] Loss 1.0464 (1.0418) Loss_x1 1.0374 (1.0409) Loss_x2 1.0551 (1.0527) Loss_x3 1.0460 (1.0311) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3331 0.3371 0.3299 Lr 0.000003 eta 1:53:20
Epoch: [42/90][200/244] Loss 1.0369 (1.0388) Loss_x1 1.0365 (1.0382) Loss_x2 1.0528 (1.0500) Loss_x3 1.0203 (1.0272) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3332 0.3372 0.3296 Lr 0.000003 eta 1:51:03
Epoch: [43/90][200/244] Loss 1.0424 (1.0370) Loss_x1 1.0372 (1.0369) Loss_x2 1.0630 (1.0475) Loss_x3 1.0257 (1.0258) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3334 0.3369 0.3297 Lr 0.000003 eta 1:48:43
Epoch: [44/90][200/244] Loss 1.0309 (1.0368) Loss_x1 1.0358 (1.0362) Loss_x2 1.0415 (1.0480) Loss_x3 1.0147 (1.0254) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3332 0.3372 0.3296 Lr 0.000003 eta 1:45:57
Epoch: [45/90][200/244] Loss 1.0296 (1.0368) Loss_x1 1.0271 (1.0360) Loss_x2 1.0382 (1.0483) Loss_x3 1.0225 (1.0252) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3331 0.3373 0.3296 Lr 0.000003 eta 1:44:43
Epoch: [46/90][200/244] Loss 1.0454 (1.0359) Loss_x1 1.0437 (1.0355) Loss_x2 1.0549 (1.0468) Loss_x3 1.0369 (1.0246) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3333 0.3370 0.3297 Lr 0.000003 eta 1:45:14
Epoch: [47/90][200/244] Loss 1.0251 (1.0349) Loss_x1 1.0315 (1.0354) Loss_x2 1.0330 (1.0453) Loss_x3 1.0102 (1.0231) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3369 0.3295 Lr 0.000003 eta 1:42:09
Epoch: [48/90][200/244] Loss 1.0386 (1.0355) Loss_x1 1.0403 (1.0353) Loss_x2 1.0528 (1.0472) Loss_x3 1.0216 (1.0232) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3333 0.3373 0.3293 Lr 0.000003 eta 1:36:27
Epoch: [49/90][200/244] Loss 1.0325 (1.0348) Loss_x1 1.0391 (1.0351) Loss_x2 1.0384 (1.0456) Loss_x3 1.0195 (1.0228) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3335 0.3370 0.3294 Lr 0.000003 eta 1:29:35
Epoch: [50/90][200/244] Loss 1.0280 (1.0350) Loss_x1 1.0319 (1.0352) Loss_x2 1.0344 (1.0462) Loss_x3 1.0172 (1.0225) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3335 0.3372 0.3293 Lr 0.000003 eta 1:31:57
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0879 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.3% | Rank-1 51.2% | Rank-3 62.3% | Rank-5 66.7% | Rank-10 72.5%
Checkpoint saved to "log/bre/model.pth.tar-50"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0872 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.3% | Rank-1 51.2% | Rank-3 62.3% | Rank-5 66.7% | Rank-10 72.5%
Checkpoint saved to "log/bre/model.pth.tar-50"
Epoch: [51/90][200/244] Loss 1.0258 (1.0341) Loss_x1 1.0285 (1.0349) Loss_x2 1.0350 (1.0446) Loss_x3 1.0133 (1.0221) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3369 0.3294 Lr 0.000003 eta 1:32:19
Epoch: [52/90][200/244] Loss 1.0550 (1.0345) Loss_x1 1.0417 (1.0354) Loss_x2 1.0825 (1.0447) Loss_x3 1.0387 (1.0226) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3368 0.3294 Lr 0.000003 eta 1:28:31
Epoch: [53/90][200/244] Loss 1.0452 (1.0336) Loss_x1 1.0539 (1.0347) Loss_x2 1.0501 (1.0441) Loss_x3 1.0308 (1.0214) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3337 0.3369 0.3293 Lr 0.000003 eta 1:25:54
Epoch: [54/90][200/244] Loss 1.0381 (1.0340) Loss_x1 1.0370 (1.0346) Loss_x2 1.0545 (1.0445) Loss_x3 1.0215 (1.0220) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3369 0.3294 Lr 0.000003 eta 1:22:55
Epoch: [55/90][200/244] Loss 1.0374 (1.0336) Loss_x1 1.0356 (1.0344) Loss_x2 1.0569 (1.0444) Loss_x3 1.0183 (1.0212) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3370 0.3293 Lr 0.000003 eta 1:21:09
Epoch: [56/90][200/244] Loss 1.0368 (1.0338) Loss_x1 1.0395 (1.0345) Loss_x2 1.0459 (1.0446) Loss_x3 1.0244 (1.0214) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3371 0.3293 Lr 0.000003 eta 1:18:46
Epoch: [57/90][200/244] Loss 1.0319 (1.0334) Loss_x1 1.0277 (1.0339) Loss_x2 1.0476 (1.0442) Loss_x3 1.0193 (1.0212) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3370 0.3294 Lr 0.000003 eta 1:17:01
Epoch: [58/90][200/244] Loss 1.0403 (1.0328) Loss_x1 1.0473 (1.0340) Loss_x2 1.0451 (1.0431) Loss_x3 1.0279 (1.0205) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3338 0.3369 0.3293 Lr 0.000003 eta 1:08:47
Epoch: [59/90][200/244] Loss 1.0329 (1.0336) Loss_x1 1.0317 (1.0342) Loss_x2 1.0492 (1.0444) Loss_x3 1.0165 (1.0214) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3336 0.3370 0.3294 Lr 0.000003 eta 1:12:56
Epoch: [60/90][200/244] Loss 1.0377 (1.0330) Loss_x1 1.0369 (1.0342) Loss_x2 1.0529 (1.0432) Loss_x3 1.0219 (1.0208) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3338 0.3368 0.3294 Lr 0.000003 eta 1:10:41
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0881 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.5% | Rank-1 51.3% | Rank-3 63.0% | Rank-5 67.1% | Rank-10 72.5%
Checkpoint saved to "log/bre/model.pth.tar-60"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0934 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.5% | Rank-1 51.3% | Rank-3 63.0% | Rank-5 67.1% | Rank-10 72.5%
Checkpoint saved to "log/bre/model.pth.tar-60"
Epoch: [61/90][200/244] Loss 1.0299 (1.0334) Loss_x1 1.0351 (1.0346) Loss_x2 1.0384 (1.0435) Loss_x3 1.0156 (1.0213) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3338 0.3368 0.3294 Lr 0.000003 eta 1:09:03
Epoch: [62/90][200/244] Loss 1.0347 (1.0331) Loss_x1 1.0329 (1.0341) Loss_x2 1.0487 (1.0435) Loss_x3 1.0214 (1.0210) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3369 0.3294 Lr 0.000003 eta 1:06:13
Epoch: [63/90][200/244] Loss 1.0305 (1.0331) Loss_x1 1.0381 (1.0337) Loss_x2 1.0364 (1.0439) Loss_x3 1.0164 (1.0209) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3370 0.3294 Lr 0.000003 eta 1:04:04
Epoch: [64/90][200/244] Loss 1.0278 (1.0329) Loss_x1 1.0330 (1.0336) Loss_x2 1.0309 (1.0434) Loss_x3 1.0191 (1.0208) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3337 0.3369 0.3294 Lr 0.000003 eta 1:00:30
Epoch: [65/90][200/244] Loss 1.0286 (1.0331) Loss_x1 1.0276 (1.0340) Loss_x2 1.0364 (1.0438) Loss_x3 1.0212 (1.0207) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3370 0.3293 Lr 0.000003 eta 0:58:01
Epoch: [66/90][200/244] Loss 1.0343 (1.0326) Loss_x1 1.0325 (1.0330) Loss_x2 1.0476 (1.0435) Loss_x3 1.0219 (1.0204) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3371 0.3294 Lr 0.000003 eta 0:53:40
Epoch: [67/90][200/244] Loss 1.0249 (1.0314) Loss_x1 1.0213 (1.0329) Loss_x2 1.0361 (1.0409) Loss_x3 1.0164 (1.0197) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3339 0.3366 0.3295 Lr 0.000003 eta 0:49:42
Epoch: [68/90][200/244] Loss 1.0259 (1.0323) Loss_x1 1.0226 (1.0331) Loss_x2 1.0321 (1.0424) Loss_x3 1.0224 (1.0205) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3368 0.3295 Lr 0.000003 eta 0:46:29
Epoch: [69/90][200/244] Loss 1.0243 (1.0325) Loss_x1 1.0238 (1.0330) Loss_x2 1.0358 (1.0435) Loss_x3 1.0122 (1.0200) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3371 0.3293 Lr 0.000003 eta 0:48:40
Epoch: [70/90][200/244] Loss 1.0334 (1.0330) Loss_x1 1.0376 (1.0333) Loss_x2 1.0414 (1.0435) Loss_x3 1.0207 (1.0212) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3335 0.3369 0.3295 Lr 0.000003 eta 0:46:40
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0864 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.3% | Rank-1 51.8% | Rank-3 62.8% | Rank-5 67.0% | Rank-10 72.3%
Checkpoint saved to "log/bre/model.pth.tar-70"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0923 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.3% | Rank-1 51.8% | Rank-3 62.8% | Rank-5 67.0% | Rank-10 72.3%
Checkpoint saved to "log/bre/model.pth.tar-70"
Epoch: [71/90][200/244] Loss 1.0276 (1.0318) Loss_x1 1.0303 (1.0327) Loss_x2 1.0367 (1.0422) Loss_x3 1.0151 (1.0196) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3369 0.3294 Lr 0.000003 eta 0:45:42
Epoch: [72/90][200/244] Loss 1.0275 (1.0318) Loss_x1 1.0270 (1.0325) Loss_x2 1.0346 (1.0419) Loss_x3 1.0200 (1.0200) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3337 0.3368 0.3295 Lr 0.000003 eta 0:42:40
Epoch: [73/90][200/244] Loss 1.0256 (1.0315) Loss_x1 1.0287 (1.0325) Loss_x2 1.0285 (1.0409) Loss_x3 1.0193 (1.0203) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3337 0.3366 0.3297 Lr 0.000003 eta 0:40:19
Epoch: [74/90][200/244] Loss 1.0355 (1.0321) Loss_x1 1.0443 (1.0326) Loss_x2 1.0413 (1.0421) Loss_x3 1.0202 (1.0206) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3368 0.3296 Lr 0.000003 eta 0:37:49
Epoch: [75/90][200/244] Loss 1.0281 (1.0327) Loss_x1 1.0343 (1.0329) Loss_x2 1.0349 (1.0430) Loss_x3 1.0146 (1.0211) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3335 0.3369 0.3296 Lr 0.000003 eta 0:35:46
Epoch: [76/90][200/244] Loss 1.0343 (1.0310) Loss_x1 1.0299 (1.0317) Loss_x2 1.0550 (1.0411) Loss_x3 1.0163 (1.0194) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3336 0.3368 0.3296 Lr 0.000003 eta 0:29:20
Epoch: [77/90][200/244] Loss 1.0384 (1.0310) Loss_x1 1.0457 (1.0315) Loss_x2 1.0434 (1.0418) Loss_x3 1.0256 (1.0188) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3370 0.3294 Lr 0.000003 eta 0:30:23
Epoch: [78/90][200/244] Loss 1.0417 (1.0313) Loss_x1 1.0446 (1.0319) Loss_x2 1.0527 (1.0414) Loss_x3 1.0270 (1.0199) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3368 0.3296 Lr 0.000003 eta 0:27:57
Epoch: [79/90][200/244] Loss 1.0394 (1.0313) Loss_x1 1.0398 (1.0320) Loss_x2 1.0524 (1.0415) Loss_x3 1.0249 (1.0197) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3368 0.3296 Lr 0.000003 eta 0:25:37
Epoch: [80/90][200/244] Loss 1.0442 (1.0307) Loss_x1 1.0496 (1.0315) Loss_x2 1.0448 (1.0403) Loss_x3 1.0378 (1.0194) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3337 0.3366 0.3297 Lr 0.000003 eta 0:23:16
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0899 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 37.9% | Rank-1 51.9% | Rank-3 62.8% | Rank-5 67.0% | Rank-10 72.0%
Checkpoint saved to "log/bre/model.pth.tar-80"
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0988 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 37.9% | Rank-1 51.9% | Rank-3 62.8% | Rank-5 67.0% | Rank-10 72.0%
Checkpoint saved to "log/bre/model.pth.tar-80"
Epoch: [81/90][200/244] Loss 1.0435 (1.0311) Loss_x1 1.0528 (1.0318) Loss_x2 1.0473 (1.0408) Loss_x3 1.0298 (1.0199) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3367 0.3297 Lr 0.000003 eta 0:21:39
Epoch: [82/90][200/244] Loss 1.0302 (1.0309) Loss_x1 1.0343 (1.0312) Loss_x2 1.0331 (1.0411) Loss_x3 1.0228 (1.0194) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3335 0.3369 0.3296 Lr 0.000003 eta 0:19:17
Epoch: [83/90][200/244] Loss 1.0278 (1.0305) Loss_x1 1.0352 (1.0310) Loss_x2 1.0313 (1.0406) Loss_x3 1.0163 (1.0190) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3368 0.3296 Lr 0.000003 eta 0:16:56
Epoch: [84/90][200/244] Loss 1.0371 (1.0306) Loss_x1 1.0393 (1.0310) Loss_x2 1.0466 (1.0405) Loss_x3 1.0244 (1.0195) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3335 0.3367 0.3297 Lr 0.000003 eta 0:13:00
Epoch: [85/90][200/244] Loss 1.0278 (1.0306) Loss_x1 1.0314 (1.0310) Loss_x2 1.0377 (1.0404) Loss_x3 1.0133 (1.0195) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3367 0.3297 Lr 0.000003 eta 0:12:08
Epoch: [86/90][200/244] Loss 1.0406 (1.0307) Loss_x1 1.0317 (1.0306) Loss_x2 1.0524 (1.0409) Loss_x3 1.0367 (1.0197) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3334 0.3369 0.3298 Lr 0.000003 eta 0:09:39
Epoch: [87/90][200/244] Loss 1.0241 (1.0302) Loss_x1 1.0299 (1.0307) Loss_x2 1.0290 (1.0403) Loss_x3 1.0129 (1.0188) Loss_mask 0.0001 (0.0002) Loss_max 0.0001 (0.0002) Loss_weight 0.3336 0.3368 0.3296 Lr 0.000003 eta 0:07:24
Epoch: [88/90][200/244] Loss 1.0229 (1.0298) Loss_x1 1.0230 (1.0304) Loss_x2 1.0322 (1.0398) Loss_x3 1.0127 (1.0185) Loss_mask 0.0003 (0.0002) Loss_max 0.0003 (0.0002) Loss_weight 0.3336 0.3368 0.3297 Lr 0.000003 eta 0:04:59
Epoch: [89/90][200/244] Loss 1.0232 (1.0296) Loss_x1 1.0255 (1.0303) Loss_x2 1.0306 (1.0391) Loss_x3 1.0127 (1.0187) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3336 0.3366 0.3298 Lr 0.000003 eta 0:02:42
Epoch: [90/90][200/244] Loss 1.0347 (1.0300) Loss_x1 1.0304 (1.0303) Loss_x2 1.0411 (1.0400) Loss_x3 1.0316 (1.0190) Loss_mask 0.0002 (0.0002) Loss_max 0.0002 (0.0002) Loss_weight 0.3335 0.3367 0.3298 Lr 0.000003 eta 0:00:24
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.1067 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.2% | Rank-1 52.6% | Rank-3 62.9% | Rank-5 67.1% | Rank-10 72.4%
Checkpoint saved to "log/bre/model.pth.tar-90"
=> Final test
Evaluating occludedduke (source)
Extracting features from query set ... Done, obtained 2210-by-512 matrix
Extracting features from gallery set ... Done, obtained 17661-by-512 matrix
Speed: 0.0922 sec/batch
Computing distance matrix with metric=euclidean ... Computing CMC and mAP ...
Results: mAP 38.2% | Rank-1 52.6% | Rank-3 62.9% | Rank-5 67.1% | Rank-10 72.4%
Checkpoint saved to "log/bre/model.pth.tar-90"
Elapsed 3:46:02
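For reference, the Rank-k numbers reported throughout the log count a query as a hit if any of its k nearest gallery entries shares its identity. A minimal single-shot sketch of that rule (hypothetical IDs and function name, ignoring torchreid's same-camera filtering):

```python
def rank_k_accuracy(distmat, query_ids, gallery_ids, k):
    # A query scores 1 if a same-identity gallery item appears in its top-k.
    hits = 0
    for dists, qid in zip(distmat, query_ids):
        order = sorted(range(len(dists)), key=dists.__getitem__)  # nearest first
        if any(gallery_ids[j] == qid for j in order[:k]):
            hits += 1
    return hits / len(query_ids)

distmat = [[0.1, 0.9, 0.5],   # query 0 is closest to gallery 0
           [0.8, 0.2, 0.4]]   # query 1 is closest to gallery 1
print(rank_k_accuracy(distmat, [7, 8], [7, 9, 8], 1))  # 0.5
print(rank_k_accuracy(distmat, [7, 8], [7, 9, 8], 3))  # 1.0
```

This is why Rank-10 is always at least as high as Rank-1 in the tables above: enlarging k can only add hits.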