Closed Xzy765039540 closed 8 months ago
Hi, you need to set pose_op_start_iter=10 (10 epochs of initial training) so that pose refinement starts.
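For readers unfamiliar with this kind of warm-up flag, here is a minimal sketch of how such a gate typically works; `pose_op_start_iter`, the function name, and the loop structure are illustrative assumptions, not the project's actual code:

```python
# Sketch: enable pose refinement only after an initial warm-up phase.
# `pose_op_start_iter` and `should_refine_pose` are hypothetical names
# used for illustration.

pose_op_start_iter = 10  # epochs of initial training before pose refinement


def should_refine_pose(epoch: int) -> bool:
    """Pose optimization is skipped during the warm-up epochs."""
    return epoch >= pose_op_start_iter


# Warm-up epochs return False; later epochs return True.
flags = [should_refine_pose(e) for e in (0, 9, 10, 15)]
print(flags)  # [False, False, True, True]
```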
Thanks for the reply. After setting pose_op_start_iter=10, training is still very blurry. I compared the processed data and found that only the SMPL estimates differ. May I ask what method you are using to estimate SMPL? Kind regards.
Just the same as you: I directly use the preprocessing scripts from InstantAvatar. I will run the code again to check whether I missed something.
By comparing the output of the InstantAvatar preprocessing with your demo data, I found that InstantAvatar uses ROMP to estimate SMPL. ROMP does not predict hand pose (the author fills it with zeros), but your demo data does include hand pose. How do you obtain the hand pose?
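To make the zero-filling concrete, here is a small sketch of how a body-only SMPL pose is commonly padded into an SMPL-X-style pose vector with flat (zero) hands. The shapes and variable names are illustrative assumptions, not the exact format either project uses:

```python
import numpy as np

# Sketch: ROMP predicts a 24-joint SMPL body pose (24 x 3 axis-angle values)
# with no hand articulation; SMPL-X adds 15 articulated joints per hand.
# A common stopgap is to pad the missing hand poses with zeros (flat hands).
# All shapes/names here are illustrative assumptions.

body_pose = np.random.randn(24 * 3)   # SMPL pose as predicted by ROMP
left_hand = np.zeros(15 * 3)          # no hand prediction -> zero-filled
right_hand = np.zeros(15 * 3)

full_pose = np.concatenate([body_pose, left_hand, right_hand])
print(full_pose.shape)  # (162,)
```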
We use ProxyCap to estimate SMPL-X, but it is not open source yet. I recommend using PyMAF-X.
Thanks, I will try it.
I found why the results are unclear: you need to set lpips_start_iter=10-30 (this controls when the LPIPS loss kicks in). Sorry for the confusion.
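As a minimal sketch of what delaying a perceptual loss looks like in a training loop; `lpips_start_iter`, the weight, and the function name are illustrative assumptions, not the project's actual implementation:

```python
# Sketch: add the LPIPS perceptual loss only after an initial phase,
# so early training is driven by the plain reconstruction loss.
# Names and the 0.1 weight are hypothetical.

lpips_start_iter = 20  # e.g. somewhere in the suggested 10-30 range


def total_loss(epoch, rgb_loss, lpips_loss, lpips_weight=0.1):
    loss = rgb_loss
    if epoch >= lpips_start_iter:
        loss = loss + lpips_weight * lpips_loss
    return loss


print(total_loss(5, 1.0, 0.5))   # 1.0  (LPIPS not yet active)
print(total_loss(25, 1.0, 0.5))  # 1.05 (LPIPS term added)
```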
Yes, I also found this problem. I also found that the pose obtained with the ReFit project includes hand information: I compared it with the pose from the People Snapshot Dataset and the matrices are exactly the same, so if you train with the pose obtained from ReFit you might get better results.
Thanks!
Very great project!!! I used InstantAvatar's preprocessing method to get data similar to the People Snapshot Dataset, and with simple modifications it can be trained in your project, but my trained characters are very unclear and the hands are not displayed. What causes this, and when will the custom-data script be released? Regards.