We use code from our lab for SMPL-X fitting. To the best of my knowledge, an open-source alternative is EasyMocap, but it requires you to transform the fitting results into smpl_params.npz.
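For reference, a quick way to see what the conversion should produce is to inspect an existing smpl_params.npz from the released demo data. The key names in the comment below are taken from the conversion snippet later in this thread, not from any official spec, so treat them as an assumption:

```python
import numpy as np

# Hypothetical path; point this at a smpl_params.npz shipped with the demo data.
params = np.load('smpl_params.npz')
for key in params.files:
    # Expected keys (per the conversion below): betas, global_orient, transl,
    # body_pose, jaw_pose, expression, left_hand_pose, right_hand_pose
    print(key, params[key].shape)
```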
How is the calibration_full.json file obtained? Is it exported by the capture device? @lizhe00
That is the calibration file; it has to be produced by a calibration algorithm.
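For anyone starting from scratch, below is a minimal sketch of per-camera intrinsic calibration with OpenCV's checkerboard pipeline. The board size, square size, and the image folder layout are assumptions, and the JSON schema of calibration_full.json is not documented here; extrinsics between cameras still require a joint calibration step on top of this:

```python
import glob

import cv2
import numpy as np

# Checkerboard with 9x6 inner corners and 25 mm squares (assumed values).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

obj_points, img_points = [], []
for path in glob.glob('calib_images/cam00/*.jpg'):  # hypothetical layout
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics K and distortion coefficients for this one camera.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('K =', K)
```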
Understood. Thank you for your kind response.
Just one more question: did you use multi-view fitting or single-view fitting for the SMPL-X parameters? Or did you fit on monocular video, as video-based avatar methods do? I would really appreciate your response.
Sincerely,
We use multi-view fitting, because single-view fitting suffers from depth ambiguity and occlusion.
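To illustrate why multiple views resolve the depth ambiguity: with two or more calibrated cameras, a 2D keypoint detected in each view determines a unique 3D point. A minimal DLT triangulation sketch (projection matrices and keypoints are placeholders, not part of the repo):

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one keypoint seen in several views.

    proj_mats: list of 3x4 camera projection matrices P = K [R|T]
    points_2d: list of (x, y) pixel coordinates, one per view
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    # Least-squares solution is the right singular vector of the stacked system.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean
```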
Have you tried to train and test your model on the DNA-Rendering dataset?
We haven't done this.
Thank you for your kindness :)
"transform the fitting results into smpl_params.npz"

May I ask how to transform the fitting results into smpl_params.npz?
Hi! I also want to know how to do this transformation. Have you solved it? If so, could you share the solution? Thanks.
Here is my method; `content` is the output from EasyMocap:

```python
import numpy as np

# "content" holds the loaded EasyMocap fitting results.
size = content['poses'].shape[0]  # number of frames

smpl_data = {}
smpl_data['betas'] = content['shapes']
smpl_data['global_orient'] = np.zeros((size, 3))
smpl_data['transl'] = np.zeros((size, 3))
smpl_data['body_pose'] = content['poses'][:, :63]
smpl_data['jaw_pose'] = content['poses'][:, 63:66]
smpl_data['expression'] = np.zeros((size, 10))
smpl_data['left_hand_pose'] = np.zeros((size, 45))
smpl_data['right_hand_pose'] = np.zeros((size, 45))

np.save('output/test_pose_refine.npy', smpl_data)
```
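One caveat on the file format: `np.save` writes a pickled dict to a .npy file, which needs `allow_pickle=True` to load, whereas the pipeline's file is named smpl_params.npz. If the loader reads keys directly from an .npz archive (an assumption about the loader), writing a real .npz may be safer:

```python
# Write an actual .npz so np.load('smpl_params.npz') exposes the keys directly.
np.savez('smpl_params.npz', **smpl_data)
```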
Is your rendering result correct? I noticed that you set both 'global_orient' and 'transl' to zero. The EasyMocap output contains 'Rh' and 'Th', while smpl_params.npz does not. May I ask how you handled this? Thanks for your reply.
It looks great! Have you trained with the processed EasyMocap output, or are you just using it to infer new poses? Thanks for your reply.
Just using it to infer.
Thanks for your reply. I'd like to know how to process the output correctly for training; zeroing out 'global_orient' and 'transl' probably won't work there. Has anyone here successfully trained with the processed EasyMocap output?
Do you have training data that you captured yourself? @IceFtv
Not yet. Right now I'm trying to avoid the pre-fitted SMPL-X parameters from any dataset and instead use the results of a public project (EasyMocap). @shengyuting
After zeroing 'global_orient' and 'transl', I generate the results by exporting EasyMocap's rotation and translation matrices, which is why the person in my results can still rotate. @IceFtv
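For training, instead of zeroing them, one option is to fold EasyMocap's 'Rh'/'Th' directly into 'global_orient'/'transl'. A minimal sketch, assuming 'Rh' is a per-frame axis-angle rotation applied about the origin and 'Th' a per-frame translation (EasyMocap's convention), and that `j0` is the pelvis joint position of the shaped rest-pose SMPL-X model, which you must compute from the body model yourself; verify against your EasyMocap version:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# content: loaded EasyMocap output; j0: (3,) pelvis joint of the shaped
# rest-pose SMPL-X model (an assumption -- obtain it from your body model).
Rh = content['Rh']   # (N, 3) axis-angle root rotation, applied about the origin
Th = content['Th']   # (N, 3) root translation
rot = R.from_rotvec(Rh)

smpl_data['global_orient'] = Rh
# EasyMocap rotates the posed mesh about the origin, while global_orient
# rotates about the pelvis joint j0, so transl needs a correction term:
#   R*v + Th == R*(v - j0) + j0 + transl  =>  transl = Th + R*j0 - j0
smpl_data['transl'] = Th + rot.apply(j0) - j0
```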
Thank you for your excellent work and for providing the code.
I want to train on my own dataset. How can I obtain smpl_params.npz for this setting? Should I use PyMAF-X, as mentioned in AvatarRex? If so, how can I tune the parameters to align them with the calibrated camera parameters K[R|T]? I ask because the SMPL parameters from PyMAF-X are estimated under a fixed virtual camera (focal length (5000, 5000) and identity extrinsics) for all images.
I would appreciate it if you could answer.
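On the camera mismatch: PyMAF-X, like most regression-based methods, predicts a weak-perspective camera [s, tx, ty] under that fixed virtual camera, and the usual trick is to convert it to a camera-space root translation and then map it into the calibrated world frame. A hedged sketch of that standard conversion (the function name and arguments are illustrative; it ignores the crop's offset from the full image's principal point, so a per-frame refinement against 2D keypoints is usually still needed):

```python
import numpy as np

def weakcam_to_world_transl(cam, crop_size, R_ext, T_ext, focal=5000.0):
    """Convert a weak-perspective camera [s, tx, ty] to a world-space
    root translation under calibrated extrinsics X_cam = R X_world + T.

    cam: (3,) predicted [scale, tx, ty]; crop_size: input crop in pixels.
    """
    s, tx, ty = cam
    # Standard depth recovery for the fixed virtual camera:
    tz = 2.0 * focal / (crop_size * s)
    t_cam = np.array([tx, ty, tz])  # translation in (virtual) camera space
    # Map into the world frame of the calibrated rig:
    return R_ext.T @ (t_cam - T_ext)
```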