svip-lab / HRNet-for-Fashion-Landmark-Estimation.PyTorch

[DeepFashion2 Challenge] Fashion Landmark Estimation with HRNet

Question about testing #5

Open ousinkou opened 3 years ago

ousinkou commented 3 years ago

val_400_gt_pred

I tested the checkpoint using the command from the README:

    python tools/test.py \
        --cfg experiments/deepfashion2/hrnet/w48_384x288_adam_lr1e-3.yaml \
        TEST.MODEL_FILE models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth \
        TEST.USE_GT_BBOX True

I enabled the debug switch in the config file, but the saved image results look very bad. Is there a known problem?

ShenhanQian commented 3 years ago

Could you please check the mAP metric, so that we can know if the problem is with the visualization script or with the model itself?

ousinkou commented 3 years ago

Thanks for the reply, but sorry, I can't find the mAP in the log. My log is below.

2020-12-07 16:51:12,290 Namespace(cfg='experiments/deepfashion2/hrnet/w48_384x288_adam_lr1e-3.yaml', dataDir='', logDir='', modelDir='', opts=['TEST.MODEL_FILE', 'work_dir/save/models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth', 'TEST.USE_GT_BBOX', 'True'], prevModelDir='')
2020-12-07 16:51:12,290 AUTO_RESUME: False
CUDNN: BENCHMARK: True DETERMINISTIC: False ENABLED: True
DATASET: COLOR_RGB: False DATASET: deepfashion2 DATA_FORMAT: jpg FLIP: True HYBRID_JOINTS_TYPE: MINI_DATASET: False NUM_JOINTS_HALF_BODY: 8 PROB_HALF_BODY: 0.3 ROOT: data/deepfashion2/ ROT_FACTOR: 15 SCALE_FACTOR: 0.1 SELECT_CAT: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] SELECT_DATA: False TEST_SET: validation TRAIN_SET: train
DATA_DIR:
DEBUG: DEBUG: True SAVE_BATCH_IMAGES_GT: False SAVE_BATCH_IMAGES_GT_PRED: True SAVE_BATCH_IMAGES_PRED: False SAVE_HEATMAPS_GT: False SAVE_HEATMAPS_PRED: False
GPUS: (1,)
LOG_DIR: log
LOSS: TOPK: 8 USE_DIFFERENT_JOINTS_WEIGHT: False USE_OHKM: False USE_TARGET_WEIGHT: True
MODEL: EXTRA: FINAL_CONV_KERNEL: 1 PRETRAINED_LAYERS: ['conv1', 'bn1', 'conv2', 'bn2', 'layer1', 'transition1', 'stage2', 'transition2', 'stage3', 'transition3', 'stage4'] STAGE2: BLOCK: BASIC FUSE_METHOD: SUM NUM_BLOCKS: [4, 4] NUM_BRANCHES: 2 NUM_CHANNELS: [48, 96] NUM_MODULES: 1 STAGE3: BLOCK: BASIC FUSE_METHOD: SUM NUM_BLOCKS: [4, 4, 4] NUM_BRANCHES: 3 NUM_CHANNELS: [48, 96, 192] NUM_MODULES: 4 STAGE4: BLOCK: BASIC FUSE_METHOD: SUM NUM_BLOCKS: [4, 4, 4, 4] NUM_BRANCHES: 4 NUM_CHANNELS: [48, 96, 192, 384] NUM_MODULES: 3 GCN_INPUT_FEATURE_DIM: 48 GCN_NODE_FEATURE_DIM: 256 GCN_NUM_LAYERS: 3 GCN_NUM_NODES: 294 HEATMAP_SIZE: [72, 96] IMAGE_SIZE: [288, 384] INIT_WEIGHTS: True NAME: pose_hrnet NUM_JOINTS: 294 PRETRAINED: SIGMA: 2 TAG_PER_JOINT: True TARGET_TYPE: gaussian
OUTPUT_DIR: output
PIN_MEMORY: True
PRINT_FREQ: 100
RANK: 0
TAG:
TEST: BATCH_SIZE_PER_GPU: 8 BBOX_THRE: 1.0 COCO_BBOX_FILE: DEEPFASHION2_BBOX_FILE: FLIP_TEST: True IMAGE_THRE: 0.0 IN_VIS_THRE: 0.2 MODEL_FILE: work_dir/save/models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth NMS_THRE: 1.0 OKS_THRE: 0.9 POST_PROCESS: True SHIFT_HEATMAP: True SOFT_NMS: False USE_GT_BBOX: True
TRAIN: BATCH_SIZE_PER_GPU: 8 BEGIN_EPOCH: 0 CHECKPOINT: END_EPOCH: 210 GAMMA1: 0.99 GAMMA2: 0.0 LR: 0.001 LR_FACTOR: 0.1 LR_STEP: [170, 200] MOMENTUM: 0.9 NESTEROV: False OPTIMIZER: adam RESUME: False SHUFFLE: True WD: 0.0
WORKERS: 4
2020-12-07 16:51:12,305 Let's use 1 GPUs!
2020-12-07 16:51:12,713 => loading model from work_dir/save/models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth
2020-12-07 16:51:21,696 => classes: ['background', 'short_sleeved_shirt', 'long_sleeved_shirt', 'short_sleeved_outwear', 'long_sleeved_outwear', 'vest', 'sling', 'shorts', 'trousers', 'skirt', 'short_sleeved_dress', 'long_sleeved_dress', 'vest_dress', 'sling_dress']
2020-12-07 16:51:21,696 => num_images: 32153
2020-12-07 16:51:38,257 => load 52239 samples
2020-12-07 16:51:38,258 => Start testing...
2020-12-07 16:51:39,959 Test: [0/6530] Time 1.699 (1.699) Loss 0.00073 (0.00073) acc 0.762 (0.762)
2020-12-07 16:52:31,337 Test: [100/6530] Time 0.517 (0.526) Loss 0.00072 (0.00071) acc 0.829 (0.830)
2020-12-07 16:53:24,311 Test: [200/6530] Time 0.521 (0.528) Loss 0.00049 (0.00070) acc 0.900 (0.828)
2020-12-07 16:54:17,274 Test: [300/6530] Time 0.529 (0.528) Loss 0.00049 (0.00070) acc 0.857 (0.824)
2020-12-07 16:55:11,253 Test: [400/6530] Time 0.532 (0.531) Loss 0.00105 (0.00070) acc 0.720 (0.828)
2020-12-07 16:56:04,546 Test: [500/6530] Time 0.527 (0.532) Loss 0.00077 (0.00069) acc 0.819 (0.827)
2020-12-07 16:56:58,243 Test: [600/6530] Time 0.538 (0.532) Loss 0.00066 (0.00069) acc 0.814 (0.828)
2020-12-07 16:57:51,840 Test: [700/6530] Time 0.568 (0.533) Loss 0.00043 (0.00069) acc 0.944 (0.829)
2020-12-07 16:58:45,210 Test: [800/6530] Time 0.527 (0.533) Loss 0.00097 (0.00069) acc 0.695 (0.830)
2020-12-07 16:59:38,753 Test: [900/6530] Time 0.528 (0.533) Loss 0.00040 (0.00068) acc 0.942 (0.831)
2020-12-07 17:00:32,049 Test: [1000/6530] Time 0.529 (0.533) Loss 0.00067 (0.00069) acc 0.885 (0.829)
2020-12-07 17:01:25,238 Test: [1100/6530] Time 0.524 (0.533) Loss 0.00051 (0.00069) acc 0.861 (0.828)
2020-12-07 17:02:18,380 Test: [1200/6530] Time 0.538 (0.533) Loss 0.00057 (0.00069) acc 0.891 (0.829)
2020-12-07 17:03:11,405 Test: [1300/6530] Time 0.526 (0.533) Loss 0.00072 (0.00069) acc 0.773 (0.828)
2020-12-07 17:04:04,520 Test: [1400/6530] Time 0.525 (0.533) Loss 0.00042 (0.00069) acc 0.941 (0.828)
2020-12-07 17:04:58,150 Test: [1500/6530] Time 0.531 (0.533) Loss 0.00044 (0.00069) acc 0.973 (0.827)
2020-12-07 17:05:51,959 Test: [1600/6530] Time 0.534 (0.533) Loss 0.00071 (0.00069) acc 0.880 (0.827)
2020-12-07 17:06:45,249 Test: [1700/6530] Time 0.530 (0.533) Loss 0.00066 (0.00069) acc 0.782 (0.827)
2020-12-07 17:07:38,836 Test: [1800/6530] Time 0.530 (0.533) Loss 0.00082 (0.00069) acc 0.743 (0.827)
2020-12-07 17:08:32,303 Test: [1900/6530] Time 0.535 (0.533) Loss 0.00093 (0.00069) acc 0.764 (0.828)
2020-12-07 17:09:26,540 Test: [2000/6530] Time 0.537 (0.534) Loss 0.00024 (0.00068) acc 1.000 (0.829)
2020-12-07 17:10:20,748 Test: [2100/6530] Time 0.536 (0.534) Loss 0.00066 (0.00068) acc 0.906 (0.832)
2020-12-07 17:11:14,560 Test: [2200/6530] Time 0.535 (0.534) Loss 0.00076 (0.00067) acc 0.865 (0.834)
2020-12-07 17:12:08,278 Test: [2300/6530] Time 0.534 (0.535) Loss 0.00055 (0.00067) acc 0.872 (0.836)
2020-12-07 17:13:01,882 Test: [2400/6530] Time 0.528 (0.535) Loss 0.00035 (0.00067) acc 0.994 (0.838)
2020-12-07 17:13:55,797 Test: [2500/6530] Time 0.545 (0.535) Loss 0.00050 (0.00067) acc 0.910 (0.839)
2020-12-07 17:14:49,718 Test: [2600/6530] Time 0.532 (0.535) Loss 0.00035 (0.00066) acc 0.994 (0.840)
2020-12-07 17:15:43,308 Test: [2700/6530] Time 0.531 (0.535) Loss 0.00053 (0.00066) acc 0.781 (0.842)
2020-12-07 17:16:36,493 Test: [2800/6530] Time 0.534 (0.535) Loss 0.00072 (0.00065) acc 0.871 (0.843)
2020-12-07 17:17:30,637 Test: [2900/6530] Time 0.533 (0.535) Loss 0.00074 (0.00065) acc 0.850 (0.845)
2020-12-07 17:18:24,826 Test: [3000/6530] Time 0.530 (0.535) Loss 0.00041 (0.00065) acc 0.882 (0.846)
2020-12-07 17:19:18,696 Test: [3100/6530] Time 0.544 (0.535) Loss 0.00039 (0.00065) acc 0.934 (0.847)
2020-12-07 17:20:12,405 Test: [3200/6530] Time 0.585 (0.536) Loss 0.00082 (0.00065) acc 0.778 (0.847)
2020-12-07 17:21:06,034 Test: [3300/6530] Time 0.536 (0.536) Loss 0.00064 (0.00064) acc 0.882 (0.849)
2020-12-07 17:21:58,961 Test: [3400/6530] Time 0.528 (0.535) Loss 0.00053 (0.00064) acc 0.922 (0.849)
2020-12-07 17:22:53,730 Test: [3500/6530] Time 0.624 (0.536) Loss 0.00049 (0.00064) acc 0.881 (0.851)
2020-12-07 17:23:48,798 Test: [3600/6530] Time 0.530 (0.536) Loss 0.00050 (0.00064) acc 0.931 (0.852)
2020-12-07 17:24:42,763 Test: [3700/6530] Time 0.570 (0.536) Loss 0.00060 (0.00064) acc 0.906 (0.853)
2020-12-07 17:25:37,084 Test: [3800/6530] Time 0.527 (0.536) Loss 0.00081 (0.00064) acc 0.855 (0.854)
2020-12-07 17:26:30,096 Test: [3900/6530] Time 0.527 (0.536) Loss 0.00070 (0.00064) acc 0.864 (0.854)
2020-12-07 17:27:23,219 Test: [4000/6530] Time 0.525 (0.536) Loss 0.00057 (0.00064) acc 0.932 (0.855)
2020-12-07 17:28:18,799 Test: [4100/6530] Time 0.603 (0.537) Loss 0.00036 (0.00064) acc 0.995 (0.856)
2020-12-07 17:29:16,117 Test: [4200/6530] Time 0.558 (0.537) Loss 0.00046 (0.00063) acc 0.894 (0.856)
2020-12-07 17:30:12,638 Test: [4300/6530] Time 0.552 (0.538) Loss 0.00069 (0.00063) acc 0.823 (0.857)
2020-12-07 17:31:09,642 Test: [4400/6530] Time 0.554 (0.539) Loss 0.00054 (0.00063) acc 0.914 (0.858)
2020-12-07 17:32:05,689 Test: [4500/6530] Time 0.547 (0.539) Loss 0.00040 (0.00063) acc 0.907 (0.858)
2020-12-07 17:33:01,255 Test: [4600/6530] Time 0.537 (0.540) Loss 0.00074 (0.00063) acc 0.822 (0.858)
2020-12-07 17:33:54,588 Test: [4700/6530] Time 0.529 (0.540) Loss 0.00065 (0.00063) acc 0.830 (0.859)
2020-12-07 17:34:48,093 Test: [4800/6530] Time 0.533 (0.539) Loss 0.00043 (0.00063) acc 0.945 (0.860)
2020-12-07 17:35:41,775 Test: [4900/6530] Time 0.530 (0.539) Loss 0.00052 (0.00062) acc 0.854 (0.861)
2020-12-07 17:36:36,157 Test: [5000/6530] Time 0.564 (0.539) Loss 0.00052 (0.00062) acc 0.925 (0.861)
2020-12-07 17:37:33,585 Test: [5100/6530] Time 0.554 (0.540) Loss 0.00030 (0.00062) acc 1.000 (0.862)
2020-12-07 17:38:31,235 Test: [5200/6530] Time 0.532 (0.541) Loss 0.00056 (0.00062) acc 0.820 (0.862)
2020-12-07 17:39:24,817 Test: [5300/6530] Time 0.532 (0.541) Loss 0.00051 (0.00062) acc 0.878 (0.862)
2020-12-07 17:40:18,777 Test: [5400/6530] Time 0.543 (0.541) Loss 0.00067 (0.00062) acc 0.912 (0.863)
2020-12-07 17:41:27,945 Test: [5500/6530] Time 0.577 (0.543) Loss 0.00049 (0.00062) acc 0.942 (0.863)
2020-12-07 17:42:45,015 Test: [5600/6530] Time 0.604 (0.548) Loss 0.00073 (0.00062) acc 0.828 (0.864)
2020-12-07 17:43:45,152 Test: [5700/6530] Time 0.636 (0.548) Loss 0.00055 (0.00062) acc 0.927 (0.864)
2020-12-07 17:44:46,309 Test: [5800/6530] Time 0.623 (0.550) Loss 0.00092 (0.00062) acc 0.761 (0.864)
2020-12-07 17:45:46,572 Test: [5900/6530] Time 0.593 (0.550) Loss 0.00043 (0.00062) acc 0.992 (0.864)
2020-12-07 17:46:42,447 Test: [6000/6530] Time 0.552 (0.551) Loss 0.00046 (0.00062) acc 0.950 (0.865)
2020-12-07 17:47:37,999 Test: [6100/6530] Time 0.569 (0.551) Loss 0.00052 (0.00062) acc 0.983 (0.865)
2020-12-07 17:48:33,600 Test: [6200/6530] Time 0.554 (0.551) Loss 0.00047 (0.00062) acc 0.898 (0.865)
2020-12-07 17:49:28,888 Test: [6300/6530] Time 0.544 (0.551) Loss 0.00083 (0.00062) acc 0.803 (0.865)
2020-12-07 17:50:23,733 Test: [6400/6530] Time 0.542 (0.551) Loss 0.00058 (0.00062) acc 0.936 (0.866)
2020-12-07 17:51:17,464 Test: [6500/6530] Time 0.538 (0.551) Loss 0.00054 (0.00062) acc 0.912 (0.866)
2020-12-07 17:52:19,706 => writing results json to output/deepfashion2/pose_hrnet/w48_384x288_adam_lr1e-3/2020-12-07-16-51/results/keypoints_validation_results_0.json

In the keypoints_validation_results_0.json file, there are only scores for each image.

ShenhanQian commented 3 years ago

Since the acc value at the end of each line is quite high, I think the problem is in the visualization script. Let's debug it.

chiraq440 commented 3 years ago

@ousinkou Were you able to resolve the issue? My log looks exactly like yours.

moranxiachong commented 3 years ago

Fixed the vis bugs:

  1. remove the preds transform-back step
  2. vis.py: change these joint[0]s and joint[1]s to local variables
AstitvaSri commented 3 years ago

> Fixed the vis bugs:
>
>   1. remove the preds transform-back step
>   2. vis.py: change these joint[0]s and joint[1]s to local variables

I'm unable to understand point 2. Can you provide your vis.py?

moranxiachong commented 3 years ago

> Fixed the vis bugs:
>
>   1. remove the preds transform-back step
>   2. vis.py: change these joint[0]s and joint[1]s to local variables
>
> I'm unable to understand point 2. Can you provide your vis.py?

All the code like this:

    joint[0] = x * width + padding + joint[0]
    joint[1] = y * height + padding + joint[1]

was modified to:

    joint_0 = x * width + padding + joint[0]
    joint_1 = y * height + padding + joint[1]

and, of course, the related code, e.g. cv2.circle(ndarr, (int(joint_0), int(joint_1)), 2, ...).

Point 2 fixes a bug that occurs only when multiple debug save_image switches are enabled.
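
To make point 2 concrete, here is a minimal sketch of a fixed `save_batch_image_with_joints`-style routine. The function name, signature, and surrounding variables follow the usual HRNet `lib/utils/vis.py` layout and may not match this repository exactly; the essential change is that the grid offsets go into the local variables `joint_0` / `joint_1` instead of being written back into the shared joints tensor.

```python
import math

import cv2
import torchvision


def save_batch_image_with_joints(batch_image, batch_joints, batch_joints_vis,
                                 file_name, nrow=8, padding=2):
    """Sketch of the fix: draw joints on an image grid without mutating batch_joints."""
    grid = torchvision.utils.make_grid(batch_image, nrow, padding, True)
    ndarr = grid.mul(255).clamp(0, 255).byte().permute(1, 2, 0).cpu().numpy().copy()

    nmaps = batch_image.size(0)
    xmaps = min(nrow, nmaps)
    ymaps = int(math.ceil(float(nmaps) / xmaps))
    height = int(batch_image.size(2) + padding)
    width = int(batch_image.size(3) + padding)

    k = 0
    for y in range(ymaps):
        for x in range(xmaps):
            if k >= nmaps:
                break
            joints = batch_joints[k]
            joints_vis = batch_joints_vis[k]
            for joint, joint_vis in zip(joints, joints_vis):
                # The fix: keep the grid offset in local variables instead of
                # writing it back into `joint`, so the joints are not shifted a
                # second time when another debug image is saved from the same tensor.
                joint_0 = x * width + padding + joint[0]
                joint_1 = y * height + padding + joint[1]
                if joint_vis[0]:
                    cv2.circle(ndarr, (int(joint_0), int(joint_1)), 2, [255, 0, 0], 2)
            k += 1
    cv2.imwrite(file_name, ndarr)
```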

AstitvaSri commented 3 years ago

> Fixed the vis bugs:
>
>   1. remove the preds transform-back step
>   2. vis.py: change these joint[0]s and joint[1]s to local variables
>
> I'm unable to understand point 2. Can you provide your vis.py?
>
> All the code like this:
>
>     joint[0] = x * width + padding + joint[0]
>     joint[1] = y * height + padding + joint[1]
>
> was modified to:
>
>     joint_0 = x * width + padding + joint[0]
>     joint_1 = y * height + padding + joint[1]
>
> and, of course, the related code, e.g. cv2.circle(ndarr, (int(joint_0), int(joint_1)), 2, ...).
>
> Point 2 fixes a bug that occurs only when multiple debug save_image switches are enabled.

@moranxiachong Thanks, it worked.

val_400_pred

chiraq440 commented 3 years ago

> Fixed the vis bugs:
>
>   1. remove the preds transform-back step
>   2. vis.py: change these joint[0]s and joint[1]s to local variables
>
> I'm unable to understand point 2. Can you provide your vis.py?
>
> All the code like this:
>
>     joint[0] = x * width + padding + joint[0]
>     joint[1] = y * height + padding + joint[1]
>
> was modified to:
>
>     joint_0 = x * width + padding + joint[0]
>     joint_1 = y * height + padding + joint[1]
>
> and, of course, the related code, e.g. cv2.circle(ndarr, (int(joint_0), int(joint_1)), 2, ...).
>
> Point 2 fixes a bug that occurs only when multiple debug save_image switches are enabled.

@AstitvaSri I'm unable to understand point 1. Can you explain it?

chiraq440 commented 3 years ago

> Fixed the vis bugs:
>
>   1. remove the preds transform-back step
>   2. vis.py: change these joint[0]s and joint[1]s to local variables

@moranxiachong I'm unable to understand point 1.

Gzzgz commented 1 year ago
企业微信截图_e9b14f80-d4bd-401e-bdae-3079aa754fed (WeCom screenshot)

How can I solve this?

Dieselmarble commented 1 year ago

@Gzzgz Were you able to solve this issue? It seems the keypoints are affine-transformed.

Dieselmarble commented 1 year ago

Seems this works. In vis.py:

    scaling_factor_x = ndarr.shape[0] / 96
    scaling_factor_y = ndarr.shape[1] / 72 / nrow

    joint_x = x * width + padding + joint[0] * scaling_factor_x
    joint_y = y * height + padding + joint[1] * scaling_factor_y
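
For context, a sketch of how these two scaling factors could slot into the drawing loop of the vis.py sketch earlier in this thread. The variable names (ndarr, nrow, padding, x, y, width, height, joints, joints_vis) are taken from that sketch, and the divisors 96 and 72 are assumed to come from HEATMAP_SIZE: [72, 96] in the config, i.e. predictions given in heatmap coordinates are rescaled to the saved image grid before the grid offset is added.

```python
# Sketch only: rescale heatmap-space predictions to the debug image grid before drawing.
# ndarr.shape[0] is the grid height and ndarr.shape[1] the grid width; 96 and 72 are
# the heatmap height and width from HEATMAP_SIZE: [72, 96] in the config.
scaling_factor_x = ndarr.shape[0] / 96
scaling_factor_y = ndarr.shape[1] / 72 / nrow

for joint, joint_vis in zip(joints, joints_vis):
    joint_x = x * width + padding + joint[0] * scaling_factor_x
    joint_y = y * height + padding + joint[1] * scaling_factor_y
    if joint_vis[0]:
        cv2.circle(ndarr, (int(joint_x), int(joint_y)), 2, [255, 0, 0], 2)
```

Note that dividing ndarr.shape[0] by 96 only accounts for a single row of images in the grid, so the factors may need adjusting if the batch spans multiple rows.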