aipixel / GaussianAvatar

[CVPR 2024] The official repo for "GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians"
https://huliangxiao.github.io/GaussianAvatar
MIT License
410 stars · 29 forks

Custom dataset training #6

Closed · Xzy765039540 closed this 8 months ago

Xzy765039540 commented 8 months ago

Very great project!!! I used InstantAvatar's preprocessing method to get data similar to the People Snapshot Dataset; with simple modifications it can be trained in your project. However, my trained characters are very blurry and the hands are not rendered. Could you tell me what causes this, and when the custom-data script will be released? Regards.

[attached images: 18001_gt, 18001_pred]

huliangxiao commented 8 months ago

Hi, you need to set `pose_op_start_iter=10` (10 epochs of initial training) so that pose refinement starts.
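
For reference, roughly what this amounts to, as a minimal PyTorch sketch (the loop, variable names, and tensor shapes are illustrative placeholders, not the repo's actual code):

```python
import torch

# Hypothetical sketch of what pose_op_start_iter controls: per-frame
# SMPL poses stay frozen for the first epochs, then refinement begins.
pose_op_start_iter = 10
num_epochs = 200

# Stand-in for per-frame SMPL poses: 100 frames x 72 axis-angle values.
pose_params = torch.nn.Parameter(torch.zeros(100, 72), requires_grad=False)

for epoch in range(num_epochs):
    # Unfreeze the poses once the initial training phase is done.
    pose_params.requires_grad_(epoch >= pose_op_start_iter)
    # ... render frames, compute losses, step the optimizers ...
```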

Xzy765039540 commented 8 months ago

> Hi, you need to set `pose_op_start_iter=10` (10 epochs of initial training) so that pose refinement starts.

Thanks for the reply. After setting `pose_op_start_iter=10` the results are still very blurry. I compared the processed data and found that only the SMPL estimates differ. May I ask what method you use to estimate SMPL? Kind regards.

[attached images: 08001_gt, 08001_pred]

huliangxiao commented 8 months ago

The same as you: I directly use the preprocessing scripts from InstantAvatar. I will run the code again to find out whether I missed something.

Xzy765039540 commented 8 months ago

> The same as you: I directly use the preprocessing scripts from InstantAvatar. I will run the code again to find out whether I missed something.

By comparing the output of InstantAvatar's preprocessing with your demo data, I found that InstantAvatar uses ROMP to estimate SMPL. ROMP does not predict hand pose (the author zero-fills it), but your demo data does include hand pose. How did you obtain the hand pose?
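
For anyone debugging the same issue, a quick way to check whether an estimator zero-filled the hands (a hypothetical sketch; the file name and array layout are assumptions about ROMP-style output):

```python
import numpy as np

# Hypothetical check: in SMPL's 72-dim axis-angle pose vector, the last
# two joints (22 and 23) are the hands, i.e. entries 66:72. ROMP-style
# output is said to zero-fill them; the file name is a placeholder.
poses = np.load("poses.npy")   # assumed shape: (num_frames, 72)
hand_pose = poses[:, 66:72]    # 2 hand joints x 3 axis-angle values
print("hands zero-filled:", np.allclose(hand_pose, 0.0))
```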

huliangxiao commented 8 months ago

> By comparing the output of InstantAvatar's preprocessing with your demo data, I found that InstantAvatar uses ROMP to estimate SMPL. ROMP does not predict hand pose (the author zero-fills it), but your demo data does include hand pose. How did you obtain the hand pose?

We use ProxyCap to estimate SMPL-X, but it is not open source yet. I recommend using PyMAF-X instead.

Xzy765039540 commented 8 months ago

> We use ProxyCap to estimate SMPL-X, but it is not open source yet. I recommend using PyMAF-X instead.

Thanks, I will try it.

huliangxiao commented 8 months ago

I found out why the results are unclear: you also need to set `lpips_start_iter` to around 10-30 (it controls when the LPIPS loss starts). Sorry about that.
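
Roughly, the effect of such a switch, as a hedged sketch (the loss weight, variable names, and surrounding function are illustrative; only `lpips.LPIPS` is the real API of the `lpips` package):

```python
import torch
import lpips

# Hypothetical sketch of delaying the perceptual loss: the LPIPS term
# is only added once training reaches `lpips_start_iter` epochs.
lpips_start_iter = 20
loss_fn_lpips = lpips.LPIPS(net="vgg")  # real constructor of the lpips package

def total_loss(pred, gt, epoch):
    # pred, gt: (N, 3, H, W) images scaled to [-1, 1], as LPIPS expects.
    loss = torch.nn.functional.l1_loss(pred, gt)
    if epoch >= lpips_start_iter:
        loss = loss + 0.01 * loss_fn_lpips(pred, gt).mean()  # weight is a placeholder
    return loss
```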

Xzy765039540 commented 8 months ago

> I found out why the results are unclear: you also need to set `lpips_start_iter` to around 10-30 (it controls when the LPIPS loss starts). Sorry about that.

Yes, I found the same problem. I also found that the pose estimated by the ReFit project includes hand information. I compared it with the pose from the People Snapshot Dataset and the matrices are exactly the same, so training with the pose obtained from ReFit might give better results.
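
A quick way to verify such a match (hypothetical file names; both arrays are assumed to hold per-frame SMPL pose parameters for the same sequence):

```python
import numpy as np

# Hypothetical comparison of two pose estimates for the same frames.
# File names are placeholders for ReFit output and People Snapshot poses.
refit_poses = np.load("refit_poses.npy")
snapshot_poses = np.load("snapshot_poses.npy")
print("identical:", np.allclose(refit_poses, snapshot_poses, atol=1e-6))
```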

huliangxiao commented 8 months ago

Thanks!