maddyonline opened this issue 4 years ago
Thanks for your reply. I tried as you recommended but ran into a problem. Specifically, I tried running the following:

```python
import torch
import smplx

model = smplx.create(model_path, model_type='smpl')
output = model(betas=torch.Tensor(a), body_pose=torch.Tensor(b).reshape(1, 24, 9), pose2rot=False)
```
Here `a` and `b` are `smpl_shape` and `smpl_pose` respectively. I reshaped tensor `b` based on this comment. The original shapes are as follows:
```python
>>> a.shape
(1, 10)
>>> b.shape
(1, 24, 3, 3)
```
I get the following error (which is probably a shape mismatch issue). Any recommendations for a simple working example with `smplx` installed? Really appreciate your time!
```
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/smplx/body_models.py", line 364, in forward
    full_pose = torch.cat([global_orient, body_pose], dim=1)
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 2 and 3 at /pytorch/aten/src/TH/generic/THTensor.cpp:603
```
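For what it's worth, the traceback suggests the default `global_orient` (a 2-D `(1, 3)` tensor) is being concatenated with the 3-D `body_pose` that was passed in. Below is a minimal sketch of a call that keeps the dimensions consistent, assuming the first of the 24 rotation matrices in `b` is the global (root) rotation and the remaining 23 are the body joints; if the ordering is different, the split would need to change.

```python
import torch
import smplx

# Sketch only: model_path, a (shape (1, 10)) and b (shape (1, 24, 3, 3))
# are the same variables as in the snippet above.
model = smplx.create(model_path, model_type='smpl')

rotmats = torch.Tensor(b)            # (1, 24, 3, 3) rotation matrices
global_orient = rotmats[:, :1]       # (1, 1, 3, 3)  assumed root rotation
body_pose = rotmats[:, 1:]           # (1, 23, 3, 3) remaining body-joint rotations

output = model(betas=torch.Tensor(a),
               global_orient=global_orient,
               body_pose=body_pose,
               pose2rot=False)        # inputs are rotation matrices, not axis-angle

vertices = output.vertices.detach().cpu().numpy()  # posed SMPL mesh, (1, 6890, 3)
joints = output.joints.detach().cpu().numpy()      # 3-D joint positions, (1, J, 3)
```

Here `output.vertices` and `output.joints` give the posed mesh and the 3-D joint locations; the exact number of joints returned depends on the smplx configuration.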
Hey, are there any updates on this? Did you figure out a way to extract the 3D position of each of the joints?
I am interested in obtaining the joints from the inferred SMPL and visualizing them similar to what is described in the README of this project: https://github.com/gulvarol/smplpytorch.
I changed https://github.com/nkolot/GraphCMR/blob/4e57dca4e9da305df99383ea6312e2b3de78c321/demo.py#L118 to

```python
pred_vertices, pred_vertices_smpl, pred_camera, smpl_pose, smpl_shape = model(...)
```

to get `smpl_pose` (of shape `torch.Size([1, 24, 3, 3])`). Then I just flattened it by doing `smpl_pose.cpu().data.numpy()[:, :, :, -1].flatten('C').reshape(1, -1)` and used the resulting `(1, 72)` pose params as input for the `pose_params` variable of the smplpytorch demo. The resulting visualization doesn't look correct to me. Is this the right approach? Perhaps there is an easier way to do what I am doing.
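Note that `smpl_pose.cpu().data.numpy()[:, :, :, -1]` takes the last column of each 3x3 rotation matrix, which is not the same as converting the matrices to axis-angle form; the smplpytorch demo's `pose_params` appear to expect 24 axis-angle vectors (72 values). Below is a sketch of that conversion with SciPy, assuming SciPy >= 1.4 (where `Rotation.from_matrix` is available) and that the matrices are valid rotations.

```python
import numpy as np
import torch
from scipy.spatial.transform import Rotation

# Sketch only: smpl_pose is the (1, 24, 3, 3) rotation-matrix tensor from the GraphCMR demo.
rotmats = smpl_pose.detach().cpu().numpy().reshape(-1, 3, 3)   # (24, 3, 3)

# Convert each rotation matrix to its axis-angle (rotation vector) form,
# then stack into the flat (1, 72) layout used by the smplpytorch demo
# (24 joints x 3 axis-angle values).
axis_angle = Rotation.from_matrix(rotmats).as_rotvec()          # (24, 3)
pose_params = torch.from_numpy(axis_angle.reshape(1, 72)).float()
```

Even with a correct conversion, the result can still look wrong if GraphCMR and smplpytorch order the 24 joints differently, so that is worth double-checking too.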