nkolot / GraphCMR

Repository for the paper "Convolutional Mesh Regression for Single-Image Human Shape Reconstruction"
BSD 3-Clause "New" or "Revised" License

How to get 3D joints from demo.py and visualize them #36

Open maddyonline opened 4 years ago

maddyonline commented 4 years ago

I am interested in obtaining the 3D joints from the inferred SMPL model and visualizing them, similar to what is described in the README of this project: https://github.com/gulvarol/smplpytorch.

I changed https://github.com/nkolot/GraphCMR/blob/4e57dca4e9da305df99383ea6312e2b3de78c321/demo.py#L118 to pred_vertices, pred_vertices_smpl, pred_camera, smpl_pose, smpl_shape = model(...) to get smpl_pose (of shape torch.Size([1, 24, 3, 3])). Then I flattened it with smpl_pose.cpu().data.numpy()[:, :, :, -1].flatten('C').reshape(1, -1) and used the resulting (1, 72) pose parameters as the pose_params input of the smplpytorch demo.

The resulting visualization doesn't look correct to me. Is this the right approach? Perhaps there is an easier way to do what I am doing.

nkolot commented 4 years ago

So the output of the network is 24 3x3 rotation matrices; taking just the last column of each matrix won't give you valid pose parameters. You need to convert the rotation matrices to axis-angle first (you can find documentation for that online).
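
For example, a minimal sketch of that conversion, assuming scipy >= 1.4 (which provides Rotation.from_matrix) and the smpl_pose tensor from the modified demo.py above:

    from scipy.spatial.transform import Rotation

    # smpl_pose: (1, 24, 3, 3) rotation matrices predicted by the network
    rotmats = smpl_pose.detach().cpu().numpy().reshape(-1, 3, 3)  # (24, 3, 3)
    axis_angle = Rotation.from_matrix(rotmats).as_rotvec()        # (24, 3)
    pose_params = axis_angle.reshape(1, 72)  # axis-angle pose vector for smplpytorch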

Alternatively you can use SMPL from this repo and set pose2rot=False when passing in the parameters to the SMPL model here.

maddyonline commented 4 years ago

Thanks for your reply. I tried what you recommended but ran into a problem. Specifically, I ran the following:

    import smplx
    import torch

    model = smplx.create(model_path, model_type='smpl')
    output = model(betas=torch.Tensor(a), body_pose=torch.Tensor(b).reshape(1, 24, 9), pose2rot=False)

Here, a and b are smpl_shape and smpl_pose, respectively. I reshaped tensor b based on this comment. The original shapes are as follows:

    >>> a.shape
    (1, 10)
    >>> b.shape
    (1, 24, 3, 3)

I get the following error (which is probably a shape mismatch issue). Any recommendations for a minimal working example with smplx installed? Really appreciate your time!


    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/smplx/body_models.py", line 364, in forward
    full_pose = torch.cat([global_orient, body_pose], dim=1)
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 2 and 3 at /pytorch/aten/src/TH/generic/THTensor.cpp:603
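
The mismatch comes from how the pose is passed: with pose2rot=False, smplx expects the global orientation and the body pose as separate rotation-matrix tensors, so the default axis-angle global_orient (2 dims) cannot be concatenated with a 3-dim body_pose. A minimal sketch of a corrected call, assuming a and b are the (1, 10) and (1, 24, 3, 3) arrays above and model_path points to the SMPL model files:

    import smplx
    import torch

    rotmats = torch.Tensor(b)            # (1, 24, 3, 3) rotation matrices
    global_orient = rotmats[:, :1]       # (1, 1, 3, 3) root joint rotation
    body_pose = rotmats[:, 1:]           # (1, 23, 3, 3) remaining 23 joints

    model = smplx.create(model_path, model_type='smpl')
    output = model(betas=torch.Tensor(a),
                   global_orient=global_orient,
                   body_pose=body_pose,
                   pose2rot=False)

With both tensors passed as rotation matrices, the torch.cat inside smplx concatenates (1, 1, 3, 3) and (1, 23, 3, 3) along dim=1 and the forward pass runs.
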
anuj018 commented 1 year ago

Hey, are there any updates on this? Did you figure out a way to extract the 3D position of each of the joints?
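
For anyone landing here later: once a call like the sketch above succeeds, the 3D joint positions are exposed directly on the smplx model output. A minimal sketch (the 24-joint slice assumes model_type='smpl', where smplx appends extra vertex-picked joints after the 24 SMPL skeleton joints):

    # output is the return value of the smplx forward call above
    joints = output.joints.detach().cpu().numpy()      # (1, J, 3) 3D joint locations
    smpl_joints = joints[0, :24]                       # the 24 SMPL skeleton joints
    vertices = output.vertices.detach().cpu().numpy()  # (1, 6890, 3) mesh vertices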