BaowenZ / Two-Hand-Shape-Pose

Interacting Two-Hand 3D Pose and Shape Reconstruction from Single Color Image (ICCV 2021)

train and test split for interhand dataset #1

Open shreyashampali opened 3 years ago

shreyashampali commented 3 years ago

Hi, thanks for the nice work and for making the code public. Could you please share the details of the train and test splits used in the paper for the InterHand dataset (141,497 and 125,689 frames)? Are these frames from the V0 or the V1 version of the dataset, and do they belong to the H, M, or H+M settings? I couldn't find this information in the supplementary material either. Thanks in advance!

regards, Shreyas
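
In case it helps to pin down which subset matches those frame counts, below is a minimal counting sketch. The annotation path and the `hand_type` field layout are assumptions about the COCO-style InterHand2.6M annotation files, not something confirmed in this thread or in the paper.

```python
import json
from collections import Counter

# Hypothetical path to a COCO-style InterHand2.6M annotation file;
# adjust to the subset (H / M / H+M, V0 / V1) you want to check.
ANNOT_PATH = "annotations/InterHand2.6M_train_data.json"

with open(ANNOT_PATH) as f:
    data = json.load(f)

# Count frames per hand type ("right" / "left" / "interacting" is the
# assumed labeling) to compare against the 141,497 / 125,689 figures.
counts = Counter(ann.get("hand_type", "unknown") for ann in data["annotations"])

for hand_type, n in sorted(counts.items()):
    print(f"{hand_type:12s}: {n}")
print(f"{'total':12s}: {sum(counts.values())}")
```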

MengHao666 commented 3 years ago

I have similar doubts about that. @shreyashampali @BaowenZ

nllpncpllpn commented 3 years ago

@BaowenZ I have the same concern. It seems that the dataset split does not follow the original InterHand2.6M splits. Besides, I am also concerned about the evaluation metric. According to Sec. 3.5 of the paper: "To achieve scale-invariant shape estimation, we normalize the distance from the middle finger MCP joint to the wrist joint to 1. During the testing stage, we used the ground truth bone lengths of the two hands to recover their scales." Does this imply that the authors use additional ground truth during evaluation? InterNet does not do this, so I think it is unfair to compare the method directly against InterNet while using additional ground truth at test time.
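
To make the concern above concrete, here is a minimal sketch of the scale handling as I read Sec. 3.5: normalize the wrist-to-middle-MCP bone to length 1, then rescale the prediction with the ground-truth bone length at test time. The joint indices and array layout are my assumptions for illustration, not the authors' code.

```python
import numpy as np

# Assumed 21-joint layout per hand; these indices are illustrative only.
WRIST, MIDDLE_MCP = 0, 9


def normalize_scale(joints):
    """Scale the hand so the wrist -> middle-finger-MCP bone has length 1."""
    bone_len = np.linalg.norm(joints[MIDDLE_MCP] - joints[WRIST])
    return joints / bone_len, bone_len


def recover_scale(pred_joints_normalized, gt_joints):
    """Rescale a unit-bone prediction using the ground-truth bone length.

    This is the step being questioned: the ground-truth wrist -> MCP
    length of the test sample is used to undo the normalization before
    joint errors are computed.
    """
    gt_bone_len = np.linalg.norm(gt_joints[MIDDLE_MCP] - gt_joints[WRIST])
    return pred_joints_normalized * gt_bone_len


if __name__ == "__main__":
    # Toy example with random joints, purely to show the two steps.
    gt = np.random.randn(21, 3)
    pred_norm, _ = normalize_scale(gt + 0.01 * np.random.randn(21, 3))
    pred = recover_scale(pred_norm, gt)
    mpjpe = np.linalg.norm(pred - gt, axis=1).mean()
    print(f"Joint error after GT-scale recovery: {mpjpe:.4f}")
```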