Closed: xyIsHere closed this issue 11 months ago
Even I faced the same issue with my own data. I'm trying to figure out why this is the case. It may be due to the Rh or Th values of the SMPL. If you figure it out, let me know as well.
I still cannot figure it out. Do you know of another pose estimation method that might fix this issue?
Did you use the weak-perspective camera transformation for your custom videos? If not, please try that, and then train the model.
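For context, the weak-perspective camera from estimators like VIBE is usually a triple (s, tx, ty), and the common trick is to convert it into a full-perspective translation with a fixed focal length. This is only a generic sketch; the focal length and crop size below are typical defaults I'm assuming, not values from this thread:

```python
import numpy as np

def weak_perspective_to_translation(s, tx, ty, focal=5000.0, img_size=224):
    """Approximate a full-perspective camera translation from
    weak-perspective parameters (scale s, image-plane offsets tx, ty).
    The depth tz comes from the similar-triangles relation
    s ~ 2 * focal / (img_size * tz)."""
    tz = 2.0 * focal / (img_size * s + 1e-9)  # small epsilon avoids division by zero
    return np.array([tx, ty, tz], dtype=np.float32)

# e.g. a scale of 1.0 with no offset places the body straight ahead
t = weak_perspective_to_translation(1.0, 0.0, 0.0)
```

The resulting translation can then be used as a regular camera/body offset when building training data.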
Yes, I did. I used VIBE to compute the SMPL parameters and the camera parameters and got those results. I saw your discussion in https://github.com/chungyiweng/humannerf/issues/65. I think you are right that this might be caused by the Rh and Th, but the problem cannot be solved by using a weak-perspective camera. Do you think the camera pose and the SMPL parameters can be decoupled?
There's another method you can try, from EasyMocap (the repository that provides the ZJU-MoCap dataset). There you can run motion capture on your data and get accurate SMPL parameters. Just before training HumanNeRF, transform the matrix as you did here and see if it improves.
I am using that repo for both mono and multi-view SMPL. Works great.
Thanks a lot! I will give EasyMocap a try. By the way, is this the instruction (https://chingswy.github.io/easymocap-public-doc/develop/02_fitsmpl.html) that you followed to get the accurate SMPL?
Hello, have you used EasyMocap to predict SMPL parameters and successfully run it on HumanNeRF? @xyIsHere
I used EasyMocap and trained on 3 datasets: People-Snapshot, Human3.6M, and random YouTube ones.
Are the SMPL parameters of People-Snapshot estimated by EasyMocap correct?
The generated parameters are correct; one just has to be careful about the axes of such monocular videos, because during rendering the result may look like the one at the top of this issue. In any case, the axes of the original SMPL and the EasyMocap SMPL are different. Please remember that.
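An axis-convention fix like the one described here usually amounts to pre-multiplying the global rotation Rh (axis-angle) by a fixed rotation and rotating Th the same way. The 180-degree flip about the x-axis below is only an illustrative guess, not the actual EasyMocap-to-SMPL transform:

```python
import numpy as np

def axis_angle_to_matrix(rvec):
    """Rodrigues formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-8:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def matrix_to_axis_angle(R):
    """Inverse Rodrigues (assumes the angle is not too close to pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-8:
        return np.zeros(3)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * v / (2.0 * np.sin(theta))

# Hypothetical convention flip: 180 degrees about the x-axis, applied
# on the left of the global rotation; Th rotates by the same matrix.
R_fix = axis_angle_to_matrix(np.array([np.pi, 0.0, 0.0]))
Rh = np.array([0.1, 0.2, 0.3])
Th = np.array([0.0, 1.0, 2.0])
Rh_new = matrix_to_axis_angle(R_fix @ axis_angle_to_matrix(Rh))
Th_new = R_fix @ Th
```

Whichever fixed rotation the two conventions actually differ by, applying it consistently to both Rh and Th keeps the body pose itself untouched.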
Oh, thank you very much. Could you tell me which axes you used for People-Snapshot? Can you elaborate on what the axes of such monocular videos are?
If you are particularly looking to work on People-Snapshot, check this repo: https://github.com/JanaldoChen/Anim-NeRF.git. They did NeRF just on the Snapshot dataset.
OK, but I want to train HumanNeRF with the People-Snapshot dataset.
I mean to say, you can check the way they generated SMPL for Snapshot, try the same, and then use HumanNeRF to train.
I get it. Thank you!
I had an error using Anim-NeRF, so I used EasyMocap instead. Can I just use these parameters directly? How should I modify them? Also, I found another pose in another file; which file should I choose?
I have successfully used EasyMocap and trained on the dataset, thank you very much!
Always use the output-smpl-3d one. Those are the final SMPL values.
Great to hear that! Good luck with your work.
I find that the SMPL parameters predicted by EasyMocap are not completely correct, and the rendered result is problematic.
May I know the settings you used for generating the SMPL, and the front view of the render? Lastly, is this the trained result? If so, how many epochs did you run?
I just ran `python3 apps/demo/mocap.py /home/shengbo/EasyMocap-master/data/male-2-sport/ --work internet` to get the SMPL. The result is the novel view. I think maybe the human is too fat, so the generated SMPL is wrong.
The human is not that fat, since I have trained the same subject successfully. It seems the training is still incomplete, since the hands are not rendered.
For `python3 apps/demo/mocap.py /home/shengbo/EasyMocap-master/data/male-2-sport/ --work internet`, try adding `--mode` and setting it to one of the mono options.
Please make sure to process the files accurately; the mask is also really important.
Thank you. I'll give it a try and let you know.
Have a look at this. I trained it for 165k iterations.
This is very good. Is your SMPL model generated from EasyMocap better than mine?
I deleted the SMPL results since I focused on multi-view data. But is it possible for you to share the processing file you used for Snapshot? I can check and see the error, if any.
The processing file is simple. The loop body for each frame is:

```python
# Inside a loop over frame filenames `i` (e.g. "000001.png"), with
# smpl_model, all_betas and mesh_infos defined beforehand.
path_smpl = ('/home/shengbo/EasyMocap-master/data/male-2-sport/output-smpl-3d/smplfull/'
             + i[:-4] + '/' + str(1000000 + int(i[:-4]))[1:] + '.json')
if not os.path.exists(path_smpl):
    print(path_smpl)
    continue
with open(path_smpl, 'r') as f:
    data = json.load(f)

poses = np.array(data["annots"][0]['poses'][0], dtype=np.float32)
Rh = np.array(data["annots"][0]['Rh'][0], dtype=np.float32)
Th = np.array(data["annots"][0]['Th'][0], dtype=np.float32)
betas = np.array(data["annots"][0]['shapes'][0], dtype=np.float32)

# Fixed intrinsics and identity extrinsics for the monocular video
# (instead of cam_intrinsics / cam_extrinsics from the annotation).
K = np.array([[1296.0,    0.0, 540.0],
              [   0.0, 1296.0, 540.0],
              [   0.0,    0.0,   1.0]], dtype=np.float32)
E = np.eye(4, dtype=np.float32)

all_betas.append(betas)

# Transfer the global body rotation to the camera pose:
# get T-pose joints, then posed joints. The pelvis-centering and
# global-rotation removal are left disabled here.
_, tpose_joints = smpl_model(np.zeros_like(poses), betas)
# pelvis_pos = tpose_joints[0].copy()
# tpose_joints = tpose_joints - pelvis_pos[None, :]
# poses[:3] = 0  # remove global rotation from body pose
_, joints = smpl_model(poses, betas)
# joints = joints - pelvis_pos[None, :]

mesh_infos[i[:-4]] = {
    'Rh': Rh,
    'Th': Th,
    'poses': poses,
    'joints': joints,
    'tpose_joints': tpose_joints,
}
```
Is it possible that this is because I trained on so few images? I only used a few images.
Use as many images as you can and train longer; hopefully it will improve.
OK, thank you very much!
I found out that this was because I chose the wrong pose to render a new perspective. Did you choose a random pose for a frame to render? Have you tried rendering from a new perspective, rather than just a trained one?
What sort of perspective are you referring to? It would help if you could clarify a bit and possibly show the result.
If you are talking about camera perspective, then yes, I have rendered from different perspectives.
Yes, the camera angle. I chose an SMPL that renders images from different camera perspectives, but I've found that it only works well from the training perspective. New view rendered below:
I tried using more training images and it worked better.
For rendering views from a different perspective, you need to check the rendering camera values. Check Freeview.py.
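Free-viewpoint rendering generally means building a new world-to-camera extrinsic per orbit angle around the subject. The look-at construction below is only a generic sketch (the orbit center, radius, and y-up convention are assumptions, not values from Freeview.py):

```python
import numpy as np

def orbit_extrinsic(angle_deg, center, radius, height=0.0):
    """Build a 4x4 world-to-camera extrinsic that looks at `center`
    from a point on a horizontal circle of the given radius.
    Assumes y-up and that the view direction is never parallel to up."""
    a = np.deg2rad(angle_deg)
    eye = center + np.array([radius * np.cos(a), height, radius * np.sin(a)])
    forward = center - eye
    forward = forward / np.linalg.norm(forward)
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right = right / np.linalg.norm(right)
    true_up = np.cross(forward, right)
    R = np.stack([right, true_up, forward])  # rows: camera axes in world frame
    E = np.eye(4)
    E[:3, :3] = R
    E[:3, 3] = -R @ eye                      # world-to-camera translation
    return E

# e.g. 36 extrinsics orbiting the origin at radius 3
views = [orbit_extrinsic(a, np.zeros(3), 3.0) for a in range(0, 360, 10)]
```

Feeding each extrinsic to the renderer in turn produces the orbit; if the subject drifts off-center, the orbit center likely needs to be the body's root position rather than the world origin.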
The axes of the EasyMocap SMPL are different from the original SMPL's. I made a custom axis convention, but it's not the true axis convention of the EasyMocap SMPL. Could you tell me where I can find the correct axes?
Could you provide me with the rendering result? A small video would be great.
Hey, I actually asked a few other people among my contacts, and it seems to be an issue with monocular videos themselves; many face such issues. You can refer to this issue I raised there; hope it helps: https://github.com/wyysf-98/MoCo_Flow/issues/1#issue-1954072829
If it doesn't, then try estimating SMPL once with PARE or VIBE and parse it as a WILD dataset in HumanNeRF, body-centering around the pelvis joint.
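The pelvis-centering step mentioned here can be sketched as follows. The pelvis index 0 and the joint-array shapes are assumptions for illustration, not HumanNeRF's actual wild-data code:

```python
import numpy as np

def center_on_pelvis(joints, tpose_joints, pelvis_idx=0):
    """Subtract the T-pose pelvis position from both joint arrays so the
    skeleton is centered at the origin. Inputs are (J, 3) arrays of posed
    and T-pose joint positions; new arrays are returned, inputs untouched."""
    pelvis = tpose_joints[pelvis_idx].copy()
    return joints - pelvis[None, :], tpose_joints - pelvis[None, :]
```

After this shift the body rotates about its own root rather than a far-away origin, which tends to keep novel-view renders upright.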
https://github.com/chungyiweng/humannerf/assets/122679046/d4c43984-d13f-487e-8f6c-9e33ac976d53
https://github.com/chungyiweng/humannerf/assets/122679046/890afb2b-5242-4919-ab60-296ad5fc3be1
The rendering results depend on which SMPL parameters I choose; some of the SMPL parameters predicted by EasyMocap have serious errors.
What exactly do you mean by different SMPL parameters? As far as I remember, when you run mocap.py and pass --write_smpl_full, you get two folders in the output, smpl and smpl_full.
Are you talking about this? Or did you pass any new arguments to mocap.py? Can you let me know?
No, I'm not talking about smpl_full. I choose the SMPL parameters of one image (one of the frames from the video), then I render a novel view with those SMPL parameters.
I see, so it's random: some SMPL parameters are good while some are bad for the same video? That's strange.
Can you tell me the exact command you ran for mocap.py, including the entire CLI arguments?
I run:

```shell
python3 apps/preprocess/extract_keypoints.py /home/shengbo/EasyMocap-master/data/male-2-outdoor/ --mode mp-holistic
python3 apps/demo/mocap.py /home/shengbo/EasyMocap-master/data/male-2-plaza/ --work internet
```
Dear author,
I rendered a bullet-time effect for an in-the-wild video. But as shown in the videos (rendered frames) below, the human body does not stand up straight on the ground. Do you know the reason for this and how to solve it?
Thanks a lot!
https://github.com/chungyiweng/humannerf/assets/26050719/a760a07b-d6a0-4b51-8958-34925b0a31bf
https://github.com/chungyiweng/humannerf/assets/26050719/275799a7-bcae-4d0e-800c-6449c4b96765