mks0601 / V2V-PoseNet_RELEASE

Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
https://arxiv.org/abs/1711.07399
MIT License

Pre-computed centers #15

Closed · rsluo closed this 6 years ago

rsluo commented 6 years ago

I want to run V2V for hand pose estimation on some RGBD data that I have, but I don't have any ground truth labels. What exactly is the format of the center_trainset and center_testset text files, and how did you get that information? Thanks!

mks0601 commented 6 years ago

It's mentioned in the README.md.

The pre-computed centers are obtained by training the hand center estimation network from DeepPrior++. Each line contains the 3D world coordinates of the hand center for one frame. For the ICVL, NYU, and MSRA datasets, if the depth map does not exist or does not contain a hand, that frame is considered invalid. For the ITOP dataset, if the 'valid' variable of a frame is false, that frame is considered invalid. All test images are considered valid.
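For reference, a minimal sketch of how such a file could be parsed, assuming each line holds three whitespace-separated numbers (x y z) and that invalid frames carry a non-numeric placeholder; the file name and the invalid-frame marker are assumptions to check against the actual released files:

```python
# Minimal sketch (not the repository's loader): parse a pre-computed center
# file, assuming each line holds the hand center's 3D world coordinates as
# three whitespace-separated numbers, and that invalid frames carry a
# non-numeric placeholder. The file name below is illustrative.
def load_centers(path):
    centers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            try:
                x, y, z = (float(v) for v in parts[:3])
                centers.append((x, y, z))
            except ValueError:
                centers.append(None)  # invalid frame (missing depth / no hand)
    return centers

centers = load_centers('center_train.txt')  # hypothetical file name
valid = sum(c is not None for c in centers)
print(f'{valid} valid of {len(centers)} frames')
```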

rsluo commented 6 years ago

Thanks! I guess I missed that earlier.

One more question - do you have code for visualizing the results?

mks0601 commented 6 years ago

Sorry for the late reply. I used the plot and plot3 functions in MATLAB. The visualization code will be uploaded soon.
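For anyone without MATLAB, a rough equivalent using Python and matplotlib might look like the sketch below; the joint coordinates and bone pairs are illustrative placeholders, not the actual V2V-PoseNet joint ordering:

```python
# Rough Python/matplotlib stand-in for MATLAB's plot3: scatter predicted 3D
# joints and connect them along hypothetical skeleton edges.
import numpy as np
import matplotlib.pyplot as plt

joints = np.random.rand(21, 3) * 100   # placeholder (J, 3) predictions
bones = [(0, 1), (1, 2), (2, 3)]       # hypothetical skeleton edges

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2], c='r')
for a, b in bones:
    xs, ys, zs = zip(joints[a], joints[b])  # endpoints of one bone
    ax.plot(xs, ys, zs, c='b')
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()
```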

rsluo commented 6 years ago

Ok, thanks!

mks0601 commented 6 years ago

I have just uploaded the visualization code. You can find the code and instructions in the README.md.

rsluo commented 6 years ago

Thank you!