xingyizhou / DeepModel

Code repository for Model-based Deep Hand Pose Estimation
GNU General Public License v3.0

Converting uvd to xyz coordinates #12

Open hellojialee opened 7 years ago

hellojialee commented 7 years ago

Hi~ Thank you for your great open-source code. I have a few questions about the following code:

xstart = int(math.floor((u * d / fx - cube_size / 2.) / d * fx))
xend = int(math.floor((u * d / fx + cube_size / 2.) / d * fx))
ystart = int(math.floor((v * d / fy - cube_size / 2.) / d * fy))
yend = int(math.floor((v * d / fy + cube_size / 2.) / d * fy))
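
(For readers following along, here is a hedged sketch of how bounds like these might be used to crop the depth map. The function name crop_hand, the default cube_size, and the clamping to the image borders are illustrative assumptions, not necessarily how this repository does it.)

import math
import numpy as np

def crop_hand(depth_img, u, v, d, fx=588.03, fy=587.07, cube_size=300.0):
    # Crop a window around pixel (u, v) whose metric width is cube_size (mm)
    # at depth d (mm), using the same projection arithmetic as the snippet above.
    xstart = int(math.floor((u * d / fx - cube_size / 2.) / d * fx))
    xend = int(math.floor((u * d / fx + cube_size / 2.) / d * fx))
    ystart = int(math.floor((v * d / fy - cube_size / 2.) / d * fy))
    yend = int(math.floor((v * d / fy + cube_size / 2.) / d * fy))
    # Clamp to the image bounds before slicing (e.g. a 480x640 depth map).
    h, w = depth_img.shape
    ys, ye = max(ystart, 0), min(yend, h)
    xs, xe = max(xstart, 0), min(xend, w)
    return depth_img[ys:ye, xs:xe]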

In your code, fx = 588.03 and fy = 587.07.

And I understand that fu and fv are determined by the fact that the original depth image is 640×480 pixels.

Are fx and fy determined by the camera? Where and how can I get them for the NYU hand pose dataset? I'm a beginner and haven't figured out the transformation between xyz and uvd coordinates. Could you please give me some help? Thank you!
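
(For context, a minimal sketch of the standard pinhole-camera relationship behind this conversion, assuming the intrinsics quoted above and a principal point at the image center (320, 240); the exact convention, including the orientation of the v axis, is defined by the NYU dataset itself.)

import numpy as np

# Intrinsics quoted in this thread; the principal point (u0, v0) = (320, 240)
# is an assumption (the center of a 640x480 frame).
FX, FY = 588.03, 587.07
U0, V0 = 320.0, 240.0

def uvd_to_xyz(u, v, d):
    # Back-project pixel (u, v) with depth d (mm) to camera-space xyz (mm).
    x = (u - U0) * d / FX
    y = (v - V0) * d / FY
    return np.array([x, y, d])

def xyz_to_uvd(x, y, z):
    # Project a camera-space point (mm) back to pixel coordinates plus depth.
    u = x * FX / z + U0
    v = y * FY / z + V0
    return np.array([u, v, z])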

hellojialee commented 7 years ago

Another question: are the joint xyz coordinates in the code that generates the h5py data also normalized to [-1, 1], just as the depth is?

xingyizhou commented 7 years ago

Hi USTClj, yes, fx and fy are determined by the camera. They can be obtained from the convert_xyz_to_uvd.m file in the official NYU hand dataset. And yes, x, y, and z are all normalized to [-1, 1] as the training target.
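
(A hedged sketch of one common way such a normalization is done, assuming the joints are centered on the crop center and scaled by half the cube size; the repository's actual preprocessing may differ in its details, and the names normalize_joints / denormalize_joints are illustrative only.)

import numpy as np

def normalize_joints(joints_xyz, center_xyz, cube_size):
    # Map camera-space joint coordinates (mm) into roughly [-1, 1] per axis,
    # relative to the crop center and half the cube edge length.
    return (np.asarray(joints_xyz) - np.asarray(center_xyz)) / (cube_size / 2.0)

def denormalize_joints(joints_norm, center_xyz, cube_size):
    # Invert the mapping to recover metric coordinates.
    return np.asarray(joints_norm) * (cube_size / 2.0) + np.asarray(center_xyz)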