facebookresearch / InterHand2.6M

Official PyTorch implementation of "InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image", ECCV 2020
Other

Question on annotations #76

Open anjugopinath opened 3 years ago

anjugopinath commented 3 years ago

I want to train InterNet on a different image dataset, but camera parameter information is not available for it. Is it still possible to proceed?

mks0601 commented 3 years ago

Which annotation information do you have?

anjugopinath commented 3 years ago

I have the bounding box information. Today, the group maintaining the dataset also shared a link to a camera calibration database with the intrinsic parameters [https://argus.web.unc.edu/camera-calibration-database/]. It covers the set of GoPro cameras that was used to record the videos/images.

anjugopinath commented 3 years ago

From the above link, is it possible to obtain the following parameters required by camera.json?

- [x, y, z] (camera position)
- 3x3 list (camera rotation matrix)
- [focal_x, focal_y] (focal length along the x and y axes)
- [princpt_x, princpt_y] (principal point along the x and y axes)

Also, how can one obtain the 3D joint coordinates in the world coordinate system required by joint3d.json? I only have RGB images.
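(For context, the camera.json parameters listed above are the extrinsics and intrinsics needed to project world-space 3D joints into the image. A minimal sketch of that projection; the function names `world2cam`/`cam2pixel` follow the convention used in this repo's utilities, but this is an illustrative reimplementation, not the repo's code:)

```python
import numpy as np

def world2cam(points_world, camrot, campos):
    """World -> camera coordinates: X_cam = R @ (X_world - campos).

    points_world: (N, 3) array, camrot: 3x3 rotation, campos: (3,) position.
    """
    return (camrot @ (points_world - campos).T).T

def cam2pixel(points_cam, focal, princpt):
    """Camera -> pixel coordinates via perspective projection.

    Returns (N, 3): pixel x, pixel y, and depth (z in camera space).
    """
    x = points_cam[:, 0] / points_cam[:, 2] * focal[0] + princpt[0]
    y = points_cam[:, 1] / points_cam[:, 2] * focal[1] + princpt[1]
    return np.stack([x, y, points_cam[:, 2]], axis=1)
```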

mks0601 commented 3 years ago

You cannot obtain true 3D data from a single RGB image. Please see the theory of multi-view geometry.
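(To illustrate the multi-view point: with two or more calibrated views and corresponding 2D detections, 3D coordinates can be recovered by triangulation. A minimal linear-triangulation (DLT) sketch, assuming each 3x4 projection matrix P = K[R|t] is known:)

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel observations.
    Solves A @ X = 0 for the homogeneous 3D point via SVD.
    """
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]           # null-space vector = homogeneous solution
    return X[:3] / X[3]  # dehomogenize
```

With a single view this system is underdetermined, which is why one RGB image alone cannot yield metric 3D ground truth.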

anjugopinath commented 3 years ago

Meaning, to train InterNet, can I only use images for which ground-truth annotations already exist? I would like to train InterNet on EPIC-KITCHENS [https://epic-kitchens.github.io/2021].

mks0601 commented 3 years ago

Yes, you need 3D annotations. You can obtain 3D pseudo-annotations from 2D annotations using SMPLify-X.
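(For the record, SMPLify-X and similar fitting methods work by minimizing the 2D reprojection error of a parametric model's joints against detected 2D keypoints. A toy PyTorch sketch of just the joint-reprojection term; the real method also optimizes SMPL-X/MANO model parameters and adds pose/shape priors:)

```python
import torch

def reprojection_loss(joints3d, joints2d_gt, focal, princpt):
    """Mean squared 2D reprojection error of 3D joints (camera coords)."""
    x = joints3d[:, 0] / joints3d[:, 2] * focal[0] + princpt[0]
    y = joints3d[:, 1] / joints3d[:, 2] * focal[1] + princpt[1]
    proj = torch.stack([x, y], dim=1)
    return ((proj - joints2d_gt) ** 2).sum(dim=1).mean()

def fit_joints(joints2d_gt, init3d, focal, princpt, steps=200, lr=1.0):
    """Toy fitting loop: adjust 3D joints to match 2D observations."""
    est = init3d.clone().requires_grad_(True)
    opt = torch.optim.Adam([est], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = reprojection_loss(est, joints2d_gt, focal, princpt)
        loss.backward()
        opt.step()
    return est.detach()
```

Note that this monocular fit is only a pseudo-annotation: depth remains ambiguous from one view, which is why the model priors in SMPLify-X matter.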