mks0601 / V2V-PoseNet_RELEASE

Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
https://arxiv.org/abs/1711.07399
MIT License

Evaluation: cross-validation strategy #56

Closed: canuck6ix closed this issue 4 years ago

canuck6ix commented 4 years ago

Hi @mks0601, I have a question regarding cross-validation. According to your paper, the 'leave-one-subject-out' strategy was used for MSRA cross-validation. I was wondering which strategy you used for the NYU and Hands17 datasets. Thank you!
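For readers unfamiliar with the term, leave-one-subject-out cross-validation trains on all subjects but one and tests on the held-out subject, cycling through every subject. A minimal sketch in Python (the MSRA hand dataset has 9 subjects; the `P0`..`P8` naming here is illustrative, not from this repo):

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation, as used for
# MSRA in the paper. Subject names P0..P8 are illustrative placeholders.
subjects = [f"P{i}" for i in range(9)]

def loso_splits(subjects):
    """Yield (train_subjects, test_subject) pairs, one fold per subject."""
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield train, held_out

# Each fold trains on 8 subjects and evaluates on the remaining one;
# the final metric is averaged over all 9 folds.
folds = list(loso_splits(subjects))
```

This yields 9 folds, so every subject is used as the test set exactly once.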

mks0601 commented 4 years ago

Hi, there are pre-defined data splits (training/testing sets) for those datasets. I used the pre-defined splits.

canuck6ix commented 4 years ago

Thanks. I couldn't find the data split for Hands17. Can you point me to it please?

mks0601 commented 4 years ago

Read here: http://icvl.ee.ic.ac.uk/hands17/challenge/ You should send an email to the organizers.

canuck6ix commented 4 years ago

Will do, thanks. I thought it might have been published before!

canuck6ix commented 4 years ago

> Hi, there are pre-defined data splits (training/testing sets) for those datasets. I used the pre-defined splits.

Sorry to bug you again, Moon. I just heard back from the challenge organizers that the selection of validation data was left up to the participants. Can you please tell me whether you used a validation set at all? If so, which images from the training data (957K) did you use for validation? Thanks.

Please note I'm not asking about evaluation, which is done on the challenge servers (seen/unseen/occluded).

mks0601 commented 4 years ago

I didn't make an additional validation set from the training set. You can tune hyperparameters using the test set of this dataset or of other datasets.
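For anyone who does want a validation set as discussed above, a common approach (not used in this repo, per the answer) is to hold out a random fraction of the training frames. A hedged sketch, with all names and the 5% fraction chosen purely for illustration:

```python
# Illustrative only: carve a random validation subset out of the training
# frames. The 957K figure comes from the Hands17 training set mentioned
# in the thread; the 5% fraction and function name are hypothetical.
import random

def split_train_val(num_frames, val_fraction=0.05, seed=0):
    """Return (train_indices, val_indices) from a seeded random shuffle."""
    indices = list(range(num_frames))
    random.Random(seed).shuffle(indices)  # fixed seed -> reproducible split
    n_val = int(num_frames * val_fraction)
    return indices[n_val:], indices[:n_val]

train_idx, val_idx = split_train_val(957_000)
```

The fixed seed keeps the split reproducible across runs, which matters if you compare hyperparameter settings against the same validation frames.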