Closed · keemoonjang closed this issue 1 year ago
Thanks for your interest.
The Hand4Whole GitHub repository has two branches: one for the whole-body task [code] and one for the body-only task [code].
To run the demo, you should follow the body-only demo code [here].
In detail, you can easily obtain the JSON file for running ClothWild by changing part of the demo code as below: https://github.com/mks0601/Hand4Whole_RELEASE/blob/Pose2Pose/demo/body/demo_body.py#L89-L92
```python
import json  # add this import at the top of the demo script

# save SMPL parameters
smpl_pose = out['smpl_pose'].detach().cpu().numpy()[0]
smpl_shape = out['smpl_shape'].detach().cpu().numpy()[0]
cam_trans = out['cam_trans'].detach().cpu().numpy()[0]
with open('smpl_param.json', 'w') as f:
    json.dump({'smpl_param': {'pose': smpl_pose.reshape(-1).tolist(),
                              'shape': smpl_shape.reshape(-1).tolist(),
                              'trans': cam_trans.reshape(-1).tolist()},
               'cam_param': {'focal': focal, 'princpt': princpt}}, f)
```
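For reference, here is a minimal, self-contained sketch of the JSON layout the snippet above produces. The numeric values are dummy stand-ins for the real network outputs, and the intrinsics are illustrative only:

```python
import json

# Dummy stand-ins for the real network outputs:
# 72 SMPL pose parameters, 10 shape parameters, 3 camera translation values.
smpl_pose = [0.0] * 72
smpl_shape = [0.0] * 10
cam_trans = [0.0] * 3
focal, princpt = [1500.0, 1500.0], [128.0, 128.0]  # example camera intrinsics

param = {
    'smpl_param': {'pose': smpl_pose, 'shape': smpl_shape, 'trans': cam_trans},
    'cam_param': {'focal': focal, 'princpt': princpt},
}

# Round-trip through JSON, as the demo code writes and ClothWild later reads.
restored = json.loads(json.dumps(param))
print(len(restored['smpl_param']['pose']))  # 72
```

The round-trip confirms the file contains exactly the two top-level keys (`smpl_param`, `cam_param`) that the ClothWild sample input uses.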
For the second question, you should download the pre-trained weights for 'body-only' [here]: https://github.com/mks0601/Hand4Whole_RELEASE/tree/Pose2Pose#quick-demo
Thank you.
Thank you for answering! The problems were solved by following your instructions.
Building on this, I came up with some follow-up questions:
Thank you once again for sharing this amazing project :)
My answer to your question is as follows:
The current code can only cover the body-only model (SMPL), because the SMPLicit module of our framework is designed on top of the SMPL model.
As you mentioned, our framework cannot reconstruct several cloth types, because our model is upper-bounded by the clothing generative model; this is one of the limitations of our work.
ClothWild estimates gender, and based on the estimated gender, it reconstructs a clothed human through a gender-specific model.
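To illustrate the gender-specific dispatch described above, here is a hypothetical sketch (the model filenames and the fallback to a neutral model are assumptions for illustration, not ClothWild's actual code):

```python
# Hypothetical mapping from an estimated gender label to a
# gender-specific body model file; filenames are illustrative only.
GENDER_MODELS = {
    'male': 'SMPL_MALE.pkl',
    'female': 'SMPL_FEMALE.pkl',
    'neutral': 'SMPL_NEUTRAL.pkl',
}

def select_body_model(estimated_gender: str) -> str:
    """Return the model file for the estimated gender, defaulting to neutral."""
    return GENDER_MODELS.get(estimated_gender.lower(), GENDER_MODELS['neutral'])

print(select_body_model('Female'))  # SMPL_FEMALE.pkl
```

A dictionary dispatch like this keeps the gender-specific choice in one place, so adding or swapping a model file does not touch the reconstruction code.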
I hope these answers will be helpful to you.
@hygenie1228 Thank you for your kind answers! I was able to obtain state-of-the-art 3D human reconstructions from single in-the-wild images. Looking forward to seeing the next steps (if any), with the full body and more cloth types being accurately reconstructed :)
Hello, thank you for sharing this intriguing work!
I have tried to follow the quick demo as described. While going through "Prepare the SMPL parameter as pose2pose_result.json. You can get the SMPL parameter by running the off-the-shelf method [code]." and trying to use the .json results as ClothWild input, I run into a key error.
My Pose2Pose results contain keys such as `smplx_root_pose`, `smplx_body_pose`, `smplx_shape`, and so forth, while the ClothWild sample input seems to require `pose`, `shape`, `trans` and `focal`, `princpt` under `smpl_param` and `cam_param`, respectively.
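The key mismatch can be checked before feeding a file to ClothWild. Below is a diagnostic sketch (not part of either repository) that classifies a result JSON as the SMPL layout ClothWild expects or the SMPL-X layout produced by the whole-body branch:

```python
# Diagnostic sketch: classify a Pose2Pose result dictionary by its keys.
# 'smpl'  -> ClothWild-ready ('smpl_param'/'cam_param' with pose/shape/trans)
# 'smplx' -> produced by the whole-body (SMPL-X) branch, wrong input for ClothWild
def detect_layout(result: dict) -> str:
    if {'smpl_param', 'cam_param'} <= result.keys():
        if {'pose', 'shape', 'trans'} <= result['smpl_param'].keys():
            return 'smpl'
    if any(key.startswith('smplx') for key in result):
        return 'smplx'
    return 'unknown'

print(detect_layout({'smplx_root_pose': [], 'smplx_body_pose': []}))  # smplx
```

If this reports `smplx`, the file came from the whole-body branch; re-running the body-only (Pose2Pose) demo produces the SMPL-format JSON instead.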
Another thing I find confusing is that there are two versions of the pretrained weights named `snapshot_6.pth.tar`. I assume one is the body-only weights and the other the whole-body weights, but they are saved with the same filename. I want to know how I should prepare the settings and match the parameters to successfully get results for ClothWild.
Thank you!