Closed Tiandishihua closed 5 months ago
Hi, thanks for your interest. To reconstruct 3D humans from your custom videos, you should prepare the following: 1) estimate the SMPL and camera parameters with a single-view-based or video-based SMPL estimation method; 2) get the masks of the human performers in the video. Then you can use our method to reconstruct 3D humans.
@skhu101 Hi, thanks for this wonderful work. I'm also interested in running GauHuman on customized datasets. I tried methods like mmhuman3d to generate SMPL parameters. However, the parameter format seems quite different from that of the ZJU-MoCap and MonoCap datasets (e.g., the size of 'poses'). I wonder if you could give some advice on how to modify the dataset loaders or convert the parameter format. Thanks!
Hi, could you show the specific difference between these two formats of SMPL parameters?
@skhu101 Thanks for your reply. Definitely, here are the parameters of the two formats.
For the output of the mmhuman3d model, the parameters are:
- 'body_pose': seems to correspond to the param 'poses' in the SMPL model in the datasets. However, the shape is (1, 23) in the mmhuman3d output compared to (1, 72) in the datasets (for one human).
- 'global_orient': seems to correspond to the param 'Rh', indicating the rotation of the SMPL model in the datasets.
- 'betas': seems to correspond to the param 'shapes' in the SMPL model, with the same shape of (1, 10).
- I could not find a parameter corresponding to 'Th' (the translation) in the SMPL model in the datasets.
I think there are two main differences: the shape of the pose parameters, and the missing translation 'Th'.
I'll try to modify the dataloaders myself and look forward to your further reply.
@Ramseyous0109 Hi, how did you run MMHuman3D to obtain the parameters?
@caizhongang Thanks for your reply. I just ran the demo/estimate_smpl.py.
Hi @Ramseyous0109, the script should have output 'body_pose' in shape (-1, 23, 3), as in this line. Can you try the following snippet:
import numpy as np
content = np.load('output.npz', allow_pickle=True)
body_pose = content['smpl'].item()['body_pose']
print(body_pose.shape)
@caizhongang Sorry for getting the shape wrong. The shape of 'body_pose' is actually (1, 23, 3). Then is there a way to convert it into the shape needed by GauHuman?
@Ramseyous0109, since the target shape is (1, 72), my best guess is to reshape 'body_pose' to (1, 69) and concatenate the global orient (shape (1, 3)): np.concatenate([global_orient, body_pose], axis=-1).
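For reference, a minimal sketch of this reshape-and-concatenate step. The array contents here are placeholder zeros; only the shapes follow the thread ('body_pose' as (1, 23, 3), 'global_orient' as (1, 3)), and whether GauHuman expects the global orient first is an assumption worth double-checking against the dataset loader:

```python
import numpy as np

# Placeholder arrays with the shapes reported in this thread.
body_pose = np.zeros((1, 23, 3))      # 23 per-joint axis-angle vectors
global_orient = np.zeros((1, 3))      # root rotation, axis-angle

# Flatten the per-joint rotations into (1, 69), then prepend the
# global orientation to obtain a (1, 72) 'poses' array.
body_pose_flat = body_pose.reshape(1, -1)                          # (1, 69)
poses = np.concatenate([global_orient, body_pose_flat], axis=-1)   # (1, 72)
print(poses.shape)
```

In practice you would load `body_pose` and `global_orient` from the mmhuman3d output (e.g., `content['smpl'].item()['body_pose']` as in the snippet above) instead of placeholder arrays.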
@caizhongang Thanks so much. I'll try this way.
@Ramseyous0109 Hello! Did you manage to prepare the dataset correctly? If so, could you share your script? Thank you so much :>
@JiatengLiu Hi, I think you can just follow the steps above to match the parameter keys and write a conversion script for your customized dataset.
yes, I have finished it.
How do I make my own video dataset?