The demo.lua code outputs the 3D joint coordinates in the 19x64x64 volume. The z dimension is divided into 19 quantized bins, with the hip joint centered at the middle (bin=10). The xy dimensions are aligned with the input image. Therefore, one has to reconstruct this output to obtain camera coordinates in meters. This is done (e.g., here) prior to SMPL fitting because SMPL is in world coordinates.
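In case it helps to see the dequantization written out, here is a minimal sketch of that step. The constants and the column ordering are assumptions for illustration only; the linked reconstruction code remains the reference.

```python
import numpy as np

def voxel_to_crop_coords(x_vox, y_vox, z_bin, z_bin_size_m,
                         crop_size=256, grid_xy=64, hip_bin=10):
    """Map voxel-grid joint coordinates to pixel coordinates in the network's
    input crop plus a metric depth relative to the hip joint.

    Assumptions (not taken from the repo): the 64x64 xy grid maps linearly to
    a 256x256 input crop, and each of the 19 depth bins spans a fixed metric
    slice z_bin_size_m. The caller must check which column of the saved
    joints3D array corresponds to which axis (in the demo printout below, the
    first column stays within the 19-bin range, so it is likely the depth bin).
    """
    scale = crop_size / float(grid_xy)
    x_pix = np.asarray(x_vox, dtype=np.float64) * scale
    y_pix = np.asarray(y_vox, dtype=np.float64) * scale
    z_rel = (np.asarray(z_bin, dtype=np.float64) - hip_bin) * z_bin_size_m
    return np.stack([x_pix, y_pix, z_rel], axis=1)  # (n_joints, 3)
```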
FYI, I am not using the training scripts, as I am using your pre-trained models. To be clear, could I simply take the 19x64x64 coordinates and use simple geometry to transform them into the SMPL world coordinates, or would I be missing something? I have a feeling that some of the joints are out of order with respect to the joints in the joints3d.mat file, but I may be wrong. My reason for this is that when I run the fitting script, my output .obj files look very contorted, with arms and legs sometimes going inside the body.
Thank you for your help so far.
This part is independent of training. You could apply the transformation I linked to the 19x64x64 coordinates, but note that the exact transformation depends on the camera intrinsics. Without this transformation, the fitting would produce garbage.
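To make the dependence on the intrinsics concrete, a generic pinhole back-projection looks roughly like the sketch below. This is an illustration of the geometry, not the exact code in eval.lua; f is the focal length, (cx, cy) the principal point, and z the absolute depth of each joint in meters.

```python
import numpy as np

def backproject_pinhole(xy_pix, z, f, cx, cy):
    """Back-project pixel coordinates with known depth to camera coordinates
    (meters) under a pinhole model: X = (u - cx) * z / f, Y = (v - cy) * z / f.
    Generic illustration; the transformation linked above is the one actually
    used before SMPL fitting."""
    u, v = xy_pix[:, 0], xy_pix[:, 1]
    X = (u - cx) * z / f
    Y = (v - cy) * z / f
    return np.stack([X, Y, z], axis=1)  # (n_joints, 3) camera coordinates
```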
Thank you! How would I get the camera intrinsics for UP images? In particular, I would like them for the joecole image.
Also, referring to the transformation you linked, you wouldn't happen to have a more standalone version of this code? I am trying to run it outside of the eval.lua script, and it is very difficult to figure out which parts are required for the transform and which aren't (but are used for training).
Thanks again!
Focal length is given in the UP dataset. The intrinsic computation is also in the code: https://github.com/gulvarol/bodynet/blob/master/training/donkey.lua#L172
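For reference, assembling a 3x3 intrinsics matrix from that focal length could look like the following sketch. It assumes square pixels and a principal point at the image centre, which is only an illustrative convention; the linked line in donkey.lua is the authoritative computation.

```python
import numpy as np

def intrinsics_from_focal(f, img_width, img_height):
    """Build a 3x3 intrinsics matrix from the UP focal length, assuming square
    pixels and a principal point at the image centre (illustrative assumption;
    see donkey.lua#L172 for the computation used in the repo)."""
    cx, cy = img_width / 2.0, img_height / 2.0
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])
```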
Hi @gulvarol, after running the demo.lua script, we get an output .mat file. When this file is loaded, there is a joints3D attribute containing the locations of the 3D joints:
from input.png.mat:

```
joints3D =
   9  38  57
  10  38  44
  10  35  32
  10  30  31
   5  31  42
  11  28  58
  10  32  29
  11  32  21
  13  32  11
  13  33   8
  16  45  32
  14  43  22
  12  39  14
  12  26  15
  10  22  23
  15  21  32
```
However, when running fit_up.py, it seems that a different .mat file is used, and its values are completely different:
from joints3d_84.mat:

```
pred =
  -0.26532   0.57882  -0.22500
   0.01899   0.31334  -0.31500
  -0.06901   0.06892  -0.04500
   0.00009   0.08879   0.04500
  -0.26674   0.38580   0.18000
  -0.47273   0.54157  -0.04500
   0.00000   0.00000   0.00000
   0.06884  -0.21664  -0.04500
   0.06825  -0.44151  -0.22500
   0.08770  -0.49012  -0.27000
  -0.29494  -0.12768  -0.18000
  -0.20697  -0.24640   0.00000
  -0.08872  -0.37351  -0.13500
   0.24515  -0.35353  -0.18000
   0.40227  -0.17678  -0.18000
   0.33197  -0.08809  -0.36000
```
Note that I am running the demo.lua script on the default input image (the picture of Joe Cole playing soccer/football), and the joints3d_84.mat file refers to joints for the same Joe Cole image.
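In case it helps to reproduce the comparison, here is a minimal sketch of loading the two files side by side (assuming scipy is available and that the attribute names match the printouts above):

```python
import scipy.io as sio

demo_out = sio.loadmat('input.png.mat')    # output of demo.lua
fit_in = sio.loadmat('joints3d_84.mat')    # file consumed by fit_up.py

joints_vox = demo_out['joints3D']  # voxel-grid coordinates (19x64x64 space)
joints_m = fit_in['pred']          # metric coordinates used for SMPL fitting

print(joints_vox.shape, joints_m.shape)
```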