Hi,
For the XHumans dataset, I empirically found that not optimizing the SMPL-X parameters gives better results, so I do not optimize them. I guess that is because the SMPL-X parameters of XHumans come from a multi-view loss, while the optimization on ExAvatar's side is done with a single-view loss. However, we still need joint_offset and face_offset, so we pick a single sequence from the training set to get them, available here.
For the XHumans dataset, you can ignore anything related to the Custom dataset. For rendering, you can use the SMPL-X parameters provided in the dataset. I use cameras.npz as shown here.
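For instance, loading the cameras could look like this (a rough sketch: I'm assuming cameras.npz holds an 'intrinsic' matrix and per-frame 'extrinsic' matrices, and the path is hypothetical; verify both against your copy of the dataset):

```python
import numpy as np

# Hypothetical path to one X-Humans take; adjust to your layout.
cams = np.load('XHumans/00034/train/Take1/render/cameras.npz')

K = cams['intrinsic']   # assumed 3x3 intrinsic matrix
Rt = cams['extrinsic']  # assumed per-frame world-to-camera extrinsics

R, t = Rt[0][:3, :3], Rt[0][:3, 3]  # rotation/translation of the first frame
```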
Wow, thanks for the quick response. What's the exact command to get joint_offset and face_offset from a single sequence? Sorry, I could not find it following your link. Should I run fit.py using config.py/XHumans.py in the XHumans branch?
You can see this: https://github.com/mks0601/ExAvatar_RELEASE/tree/main/fitting#xhumans-videos. Set dataset='XHumans' in config.py and run python fit.py --subject_id 00034
From here, it seems that you don't optimize the SMPL-X params of XHumans. Does that mean we don't need to run DECA, Hand4Whole, and fit.py for the XHumans dataset, and can directly use the pkl files from the original XHumans dataset, like here?
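I.e., load them directly along these lines (a rough sketch; the path is hypothetical and the key names are assumed to follow standard SMPL-X conventions, which may differ in X-Humans):

```python
import pickle
import numpy as np

# Hypothetical path; X-Humans stores per-frame SMPL-X fits as .pkl files.
pkl_path = 'XHumans/00034/train/Take1/SMPLX/frame_00001.pkl'

with open(pkl_path, 'rb') as f:
    smplx_params = pickle.load(f, encoding='latin1')

# Assumed standard SMPL-X keys; verify against the actual files.
for key in ['betas', 'global_orient', 'body_pose', 'transl']:
    if key in smplx_params:
        print(key, np.asarray(smplx_params[key]).shape)
```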
Also, I'd like to know the camera Rt you use for the XHumans dataset. For example, in your provided examples like 00034, to render the SMPL-X mesh, we should use the SMPL-X params in "smplx_optimized/smplx_params/*.json", right? What are the corresponding cameras: the Rt in cameras.npz, or should we use virtual camera parameters like in the Custom dataset? It seems that you don't use the Rt of cameras.npz after using it here.
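To make the question concrete, here is roughly what I have in mind (a sketch using the smplx Python package; the file paths, the JSON key names, and the cameras.npz layout are all my assumptions, not taken from your code):

```python
import json
import numpy as np
import torch
import smplx

# Hypothetical file names; adjust to the actual layout.
params = json.load(open('smplx_optimized/smplx_params/0.json'))
cams = np.load('cameras.npz')

# SMPL-X layer from the smplx package (model files from https://smpl-x.is.tue.mpg.de).
layer = smplx.create('human_model_files', model_type='smplx',
                     gender='neutral', use_pca=False)

def t(x):  # helper: nested list -> (1, -1) float tensor
    return torch.tensor(x, dtype=torch.float32).reshape(1, -1)

# Key names assumed; map them to whatever the JSONs actually contain.
out = layer(global_orient=t(params['root_pose']),
            body_pose=t(params['body_pose']),
            transl=t(params['trans']))
verts = out.vertices[0].detach().numpy()

# Project with the camera in question (world -> camera -> image plane).
Rt = cams['extrinsic'][0][:3]  # assumed world-to-camera [R|t] for frame 0
uv = (verts @ Rt[:, :3].T + Rt[:, 3]) @ cams['intrinsic'].T
uv = uv[:, :2] / uv[:, 2:]
```

Is this the intended rendering path, or should the extrinsic be replaced by a virtual camera as in the Custom pipeline?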