facebookresearch / InterHand2.6M

Official PyTorch implementation of "InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image", ECCV 2020

the prediction is not good #10

Closed lakehui closed 3 years ago

lakehui commented 3 years ago

Hi, I ran the testing script on the InterHand2.6M test set, but the predicted hand keypoints are very bad. The testing command is as follows: `python3.5 test.py --gpu 0 --test_epoch 20 --test_set test --annot_subset all`. Is there anything I might have done wrong?

lakehui commented 3 years ago

out_0_right out_20_interacting

mks0601 commented 3 years ago

out_14_interacting

This is my prediction, and I exactly followed the README instructions. Could you show me your bash commands and results as well?

lakehui commented 3 years ago

The bash command: `python3.5 test.py --gpu 0 --test_epoch 20 --test_set test --annot_subset all`. The model was downloaded from the [InterHand2.6M v0.0] link, and I chose the InterHand2.6M_all model to test.

The changed code is as follows:

1. Set `vis = True` in dataset.py.
2. Changed `self.img_path` & `self.annot_path` to my own paths.
3. Added `self.datalist = self.datalist[::10000]` to subsample the test set, because testing on the whole test set takes too much time (illustrated just below this list).
4. Set `test_batch_size = 8` in config.py. The rest of the code is unchanged.
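For reference, change 3 just uses Python's extended slice step to keep every 10000th entry of the datalist. A minimal, self-contained illustration (the real self.datalist holds annotation dicts, not integers):

```python
# Stand-in for self.datalist; the counts are the single-hand + interacting-hand
# annotation numbers reported in the test log later in this thread.
datalist = list(range(197992 + 154905))

# Same pattern as the edit: self.datalist = self.datalist[::10000]
datalist = datalist[::10000]

print(len(datalist))  # 36 samples left, so with test_batch_size = 8 only ~5 test batches run
```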

I think these changes shouldn't affect the prediction, but... some results are as follows: out_26_3d out_26_interacting

mks0601 commented 3 years ago

Hmm, this is pretty weird.. Could you test again with the models trained on the full IH2.6M? I'll test again using the models trained on IH2.6M v0.0.

lakehui commented 3 years ago

I get a similar result with the models trained on the full IH2.6M.

out_26_interacting

mks0601 commented 3 years ago

The result looks like the code did not load a model and just used randomly initialized weights. Could you give me the whole bash message and double check that you put the pre-trained model in the right place (output/model_dump)?
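To rule out a loading problem on your side, you could also open the snapshot directly with plain PyTorch; a quick sanity check along these lines (not part of the repo scripts, and the exact keys depend on how the snapshot was saved):

```python
import os
import torch

# Check that the pre-trained snapshot is where test.py expects it and that it deserializes.
ckpt_path = os.path.join('..', 'output', 'model_dump', 'snapshot_20.pth.tar')
print('exists:', os.path.exists(ckpt_path))

ckpt = torch.load(ckpt_path, map_location='cpu')
print('top-level keys:', list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))
```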

lakehui commented 3 years ago

```
>>> Using GPU: 0
09-16 15:01:57 Creating test dataset...
Load annotation from /media/data_3t/data/hand_related/interhand2.6M/InterHand2.6M/annotations/all
loading annotations into memory...
Done (t=8.09s)
creating index...
index created!
Get bbox and root depth from ../data/InterHand2.6M/rootnet_output/rootnet_interhand2.6m_output_all_test.json
Number of annotations in single hand sequences: 197992
Number of annotations in interacting hand sequences: 154905
09-16 15:02:31 Load checkpoint from /home/huhui/github_demo/InterHand2.6M/main/../output/model_dump/snapshot_20.pth.tar
09-16 15:02:31 Creating graph...
100%|██████████| 5/5 [00:02<00:00, 1.79it/s]

Evaluation start...
/home/huhui/.local/lib/python3.5/site-packages/matplotlib/pyplot.py:514: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam figure.max_open_warning).
  max_open_warning, RuntimeWarning)
Handedness accuracy: 0.21875
MRRPE: 170.0613021850586

MPJPE for each joint: r_thumb4: 137.19, r_thumb3: 129.61, r_thumb2: 89.98, r_thumb1: 52.77, r_index4: 181.15, r_index3: 168.56, r_index2: 132.38, r_index1: 102.33, r_middle4: 181.62, r_middle3: 166.76, r_middle2: 145.15, r_middle1: 114.99, r_ring4: 179.53, r_ring3: 173.68, r_ring2: 152.13, r_ring1: 114.38, r_pinky4: 172.50, r_pinky3: 157.58, r_pinky2: 143.62, r_pinky1: 111.21, r_wrist: 0.00, l_thumb4: 126.22, l_thumb3: 96.72, l_thumb2: 94.01, l_thumb1: 64.37, l_index4: 180.41, l_index3: 149.07, l_index2: 133.51, l_index1: 97.97, l_middle4: 187.62, l_middle3: 159.35, l_middle2: 133.43, l_middle1: 103.37, l_ring4: 166.88, l_ring3: 143.26, l_ring2: 126.90, l_ring1: 89.67, l_pinky4: 138.39, l_pinky3: 139.20, l_pinky2: 142.93, l_pinky1: 80.11, l_wrist: 0.00,
MPJPE for all hand sequences: 127.63

MPJPE for each joint: r_thumb4: 136.87, r_thumb3: 120.25, r_thumb2: 74.02, r_thumb1: 43.33, r_index4: 164.27, r_index3: 141.45, r_index2: 117.04, r_index1: 86.77, r_middle4: 183.47, r_middle3: 158.92, r_middle2: 135.18, r_middle1: 101.21, r_ring4: 178.50, r_ring3: 157.69, r_ring2: 139.53, r_ring1: 99.74, r_pinky4: 162.85, r_pinky3: 141.69, r_pinky2: 138.82, r_pinky1: 94.52, r_wrist: 0.00, l_thumb4: 114.92, l_thumb3: 94.34, l_thumb2: 89.27, l_thumb1: 68.11, l_index4: 159.72, l_index3: 133.43, l_index2: 130.42, l_index1: 98.75, l_middle4: 183.97, l_middle3: 156.91, l_middle2: 127.92, l_middle1: 84.19, l_ring4: 146.85, l_ring3: 147.83, l_ring2: 115.27, l_ring1: 89.57, l_pinky4: 134.48, l_pinky3: 123.14, l_pinky2: 116.63, l_pinky1: 87.17, l_wrist: 0.00,
MPJPE for single hand sequences: 118.55

MPJPE for each joint: r_thumb4: 137.44, r_thumb3: 136.81, r_thumb2: 102.26, r_thumb1: 61.35, r_index4: 194.14, r_index3: 189.42, r_index2: 144.17, r_index1: 114.30, r_middle4: 179.93, r_middle3: 172.79, r_middle2: 152.82, r_middle1: 125.59, r_ring4: 180.57, r_ring3: 185.98, r_ring2: 161.82, r_ring1: 125.64, r_pinky4: 181.27, r_pinky3: 169.80, r_pinky2: 147.31, r_pinky1: 124.05, r_wrist: 0.00, l_thumb4: 134.92, l_thumb3: 98.55, l_thumb2: 97.66, l_thumb1: 59.68, l_index4: 194.74, l_index3: 161.09, l_index2: 135.88, l_index1: 97.37, l_middle4: 190.61, l_middle3: 161.19, l_middle2: 137.66, l_middle1: 118.12, l_ring4: 184.90, l_ring3: 140.10, l_ring2: 134.95, l_ring1: 89.75, l_pinky4: 140.80, l_pinky3: 150.32, l_pinky2: 161.15, l_pinky1: 74.69, l_wrist: 0.00,
MPJPE for interacting hand sequences: 134.56
```

mks0601 commented 3 years ago

Could you set `trans_test = 'gt'  # gt, rootnet` in config.py and test again?
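For reference, I mean this line in main/config.py; the two options correspond to the two "Get bbox and root depth from ..." messages you can see in the test logs in this thread:

```python
# main/config.py
trans_test = 'gt'  # 'gt': bbox and root depth from the ground-truth annotation
                   # 'rootnet': bbox and root depth from the RootNet output json
```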

lakehui commented 3 years ago

emm.. it doesn't work. out_26_interacting

mks0601 commented 3 years ago

Could you download the annotation files and the code again? There is no problem on my side :(

lakehui commented 3 years ago

I just downloaded the code today, and I have checked the annotation files; they are also correct. I am trying to train a model now.

kingsman0000 commented 3 years ago

```
>>> Using GPU: 0
09-16 15:01:57 Creating test dataset...
Load annotation from /media/data_3t/data/hand_related/interhand2.6M/InterHand2.6M/annotations/all
loading annotations into memory...
Done (t=8.09s)
creating index...
index created!
Get bbox and root depth from ../data/InterHand2.6M/rootnet_output/rootnet_interhand2.6m_output_all_test.json
Number of annotations in single hand sequences: 197992
Number of annotations in interacting hand sequences: 154905
09-16 15:02:31 Load checkpoint from /home/huhui/github_demo/InterHand2.6M/main/../output/model_dump/snapshot_20.pth.tar
09-16 15:02:31 Creating graph...
100%|██████████| 5/5 [00:02<00:00, 1.79it/s]

Evaluation start...
. . .
```

1. How can you get the number under "Creating graph..." to be only 5? I exactly followed the README instructions, and my bash message is below (mine is 5759). It takes me a lot of time to run test.py every time.

`(Hand2.6M) kingsman@kingsman-lab:~/InterHand2.6M-master_new/main$ python test.py --gpu 0 --test_epoch 20 --test_set val --annot_subset machine_annot`

Using GPU: 0
09-16 20:05:27 Creating val dataset...
Load annotation from ../data/InterHand2.6M/annotations/machine_annot
loading annotations into memory...
Done (t=3.78s)
creating index...
index created!
Get bbox and root depth from groundtruth annotation
Number of annotations in single hand sequences: 113370
Number of annotations in interacting hand sequences: 70917
09-16 20:05:53 Load checkpoint from /home/kingsman/InterHand2.6M-master_new/main/../output/model_dump/snapshot_20.pth.tar
09-16 20:05:53 Creating graph...
100%|███████████████████████████████████████| 5759/5759 [14:19<00:00, 6.70it/s]

Evaluation start...

2. After setting vis=True, I only see the result for a single hand. What should I do to visualize both hands? Following is my output: Screenshot from 2020-09-16 20-35-51

mks0601 commented 3 years ago

Here (where self.datalist is built in dataset.py), you can select the test hand sequences. Currently, datalist consists of single hand and interacting hand images.

If you want to test only on interacting hand images, set `self.datalist = self.datalist_ih`.
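A rough sketch of that part of data/InterHand2.6M/dataset.py (the `datalist_sh` name is from memory, so check the actual code around where `self.datalist` is built):

```python
# End of Dataset.__init__ (sketch, not verbatim): the test list is the concatenation
# of single-hand and interacting-hand samples by default.
self.datalist = self.datalist_sh + self.datalist_ih

# To visualize/evaluate interacting-hand images only, replace the line above with:
# self.datalist = self.datalist_ih
```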

lakehui commented 3 years ago

emm.. I have solved my bug. I finally found that it was a torchvision problem; I probably modified it a long time ago.
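For anyone else hitting the same symptoms, the quickest check is to print the installed versions and compare them with the README requirements:

```python
import torch
import torchvision

# A locally modified or mismatched torchvision can give garbage predictions
# even though the checkpoint loads without any error message.
print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())
```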

mks0601 commented 3 years ago

good for you!