zhangboshen / A2J

Code for paper "A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image". ICCV2019
MIT License

Enquiry about drawing human 3D pose on our own depth image #66

Open Kevinous opened 7 months ago

Kevinous commented 7 months ago

Hi, thanks for your significant work. I am trying to use the itop_side model to predict human joint keypoints on our own data (e.g., 320×240 depth images from a Kinect), but I am confused about how to transform the output into readable pixel coordinates and depth values.

zhangboshen commented 7 months ago

The pixel2world and world2pixel conversions for the main datasets can be found here: https://github.com/mks0601/V2V-PoseNet_RELEASE/blob/master/vis/nyu/pixel2world.py
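For reference, a minimal sketch of that kind of conversion, using the standard pinhole-camera model. The intrinsics below (`FX`, `FY`, `CX`, `CY`) are assumptions matching a 320×240 ITOP-style depth map (1/285.71 ≈ 0.0035, the scale constant the linked script uses); substitute your own Kinect's calibrated values for real data.

```python
# Hypothetical pixel<->world conversion for a 320x240 depth camera.
# FX, FY, CX, CY are assumed intrinsics, not values from the A2J repo;
# replace them with your sensor's calibration.
FX, FY = 285.71, 285.71   # focal lengths in pixels
CX, CY = 160.0, 120.0     # principal point (image center)

def pixel2world(u, v, z):
    """Pixel coords (u, v) plus depth z -> camera-space (x, y, z).
    z keeps whatever unit it arrives in (mm or m)."""
    x = (u - CX) * z / FX
    y = (CY - v) * z / FY   # image v grows downward, so flip the sign
    return x, y, z

def world2pixel(x, y, z):
    """Inverse mapping: camera-space (x, y, z) -> pixel coords (u, v, z)."""
    u = x * FX / z + CX
    v = CY - y * FY / z
    return u, v, z
```

A quick sanity check is to round-trip a point: `world2pixel(*pixel2world(u, v, z))` should return the original `(u, v, z)` up to floating-point error.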