facebookresearch / InterHand2.6M

Official PyTorch implementation of "InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image", ECCV 2020

Inputs required to train new dataset #74

Open · anjugopinath opened this issue 2 years ago

anjugopinath commented 2 years ago

Hi,

I want to train InterHand on a new dataset. What are the inputs required?

Thank You, Anju

mks0601 commented 2 years ago

It takes a single image.
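
To make that concrete, here is a minimal sketch of what a single-image input could look like at inference time, assuming the hand region is cropped with a bounding box and resized to the network input resolution. The 256x256 size, the preprocessing, and the model construction are assumptions here, not confirmed details from this thread:

```python
import torchvision.transforms as T
from PIL import Image

# Hypothetical hand bounding box (x, y, w, h); the model operates on a
# crop of the hand region rather than on the full frame.
bbox = (100, 80, 220, 220)

img = Image.open('hand.jpg').convert('RGB')  # example image path
x, y, w, h = bbox
crop = img.crop((x, y, x + w, y + h))

# Assumed preprocessing: resize to the network input resolution and
# convert to a (1, 3, H, W) tensor. The 256x256 size is an assumption.
transform = T.Compose([T.Resize((256, 256)), T.ToTensor()])
inp = transform(crop).unsqueeze(0)

# `model` stands in for the network built by the repo's model code;
# construction and checkpoint loading are omitted:
# outputs = model(inp)
```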

anjugopinath commented 2 years ago

Could you also answer the following questions, please?

  1. Are the bounding box coordinates required for the images?
  2. For the ground truth data, should I also annotate all the images in the dataset?

[screenshot]

Thank You, Anju

mks0601 commented 2 years ago
  1. yes
  2. no

Please check the dataset files. All the information is available on the dataset homepage.
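
For reference, a minimal sketch of what one image record and one annotation record in the COCO-style data.json file look like. The field names are meant to match the released annotations, but every value below is made up:

```python
# Every value below is made up; only the field names are meant to match
# the released COCO-style annotation file.
image_record = {
    'id': 0,
    'file_name': 'Capture0/ROM01_No_Interaction_2_Hand/cam400002/image12345.jpg',
    'width': 512,
    'height': 334,
    'capture': 0,
    'camera': '400002',
    'frame_idx': 12345,
}
annotation_record = {
    'id': 0,
    'image_id': 0,
    'bbox': [48.0, 32.0, 240.0, 240.0],  # (x, y, width, height) in pixels
    'hand_type': 'right',                # 'right', 'left', or 'interacting'
    'hand_type_valid': 1,
    'joint_valid': [1] * 42,             # per-joint validity, 21 joints per hand
}
```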

anjugopinath commented 2 years ago

Thank You. I will check that.

anjugopinath commented 2 years ago

Hi,

Could you please answer the questions below?

[screenshot: dataset loading code]

Questions 1 and 2 are based on the image above.

  1. Images and annotations are loaded for all modes (train, val, and test). Do the annotations contain the bounding box coordinates? (See the sketch after this list for how I read the loading code.)

  2. Also, why is the rootnet path different for the validation dataset?
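
For reference, here is a minimal sketch of the per-mode loading I am asking about. The paths and file names follow the repo's layout as I understand it, but they are illustrative rather than copied verbatim:

```python
import json
import os.path as osp
from pycocotools.coco import COCO

mode = 'train'  # 'train', 'val', or 'test'
annot_path = '../data/InterHand2.6M/annotations'  # illustrative path

# Images plus COCO-style annotations; this is the file where I would
# expect the bounding box coordinates to live (question 1).
db = COCO(osp.join(annot_path, mode, 'InterHand2.6M_' + mode + '_data.json'))
with open(osp.join(annot_path, mode, 'InterHand2.6M_' + mode + '_camera.json')) as f:
    cameras = json.load(f)
with open(osp.join(annot_path, mode, 'InterHand2.6M_' + mode + '_joint_3d.json')) as f:
    joints = json.load(f)

# The path I am asking about in question 2: the precomputed RootNet
# output file differs between the val and test splits.
if mode == 'val':
    rootnet_output_path = '../data/InterHand2.6M/rootnet_output/rootnet_interhand2.6m_output_val.json'
else:
    rootnet_output_path = '../data/InterHand2.6M/rootnet_output/rootnet_interhand2.6m_output_test.json'
```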

[screenshot: annotations folder]

Question 3 is based on the image above.

  3. The image above shows the annotations folder. There are four types of .json files: camera.json, data.json, joint_3d.json, and NeuralAnnot.json. Do I have to generate any of them when training on a new dataset, or can I use the same files? (I sketch my understanding of two of these files after this item.)
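
For what it is worth, here is how I currently understand the structure of camera.json and joint_3d.json, written as a sketch. The keys and nesting reflect my reading of the released files, so the details may be slightly off:

```python
import json

# Placeholder file names; keys and nesting are my reading of the files.
with open('InterHand2.6M_train_camera.json') as f:
    cameras = json.load(f)
# Indexed as cameras[capture_id][param][camera_id], e.g.:
#   cameras['0']['campos']['400002']   -> [x, y, z] camera position
#   cameras['0']['camrot']['400002']   -> 3x3 rotation matrix
#   cameras['0']['focal']['400002']    -> [fx, fy] focal lengths
#   cameras['0']['princpt']['400002']  -> [cx, cy] principal point

with open('InterHand2.6M_train_joint_3d.json') as f:
    joints = json.load(f)
# Indexed as joints[capture_id][frame_idx]['world_coord'], a 42 x 3 list
# of 3D joint coordinates in world space (21 joints per hand).
```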

[screenshot: image folder structure under Capture0]

Question 4 is based on the image above.

  4. The folders in the image above are from the path InterHand2.6M/data/InterHand2.6M/images/train/Capture0/. The images I have are not organized in this manner. Do I have to split them into folders based on the pose of the hand?

And the last question:

  5. Which file should contain the bounding box coordinates? (See the sketch below.)
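
To make this last question concrete, here is how I would expect the bounding box to be read if it lives in the COCO-style data.json file. This is a sketch under that assumption; the file path is a placeholder:

```python
from pycocotools.coco import COCO

# Assumption: the bounding boxes live in the COCO-style data.json
# annotations. The file path below is a placeholder.
db = COCO('InterHand2.6M_train_data.json')
ann = db.loadAnns(db.getAnnIds())[0]  # first annotation, as an example
x, y, w, h = ann['bbox']              # (x, y, width, height) in pixels
print('bbox:', x, y, w, h)
```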

Thank You, Anju