Hello, I'm running into a similar visualization problem with SMPL. It seems there is something wrong with the camera params: when I run the SMPL example, I get results like the left image, where all the points are squeezed together. I then tried enlarging `keypoints2d`, which gives the right image.

I also checked the results from `smpl.forward()`, and all the betas are zeros. So I think the SMPL params only contain joint rotations, is that right? I would also like to know how to use these SMPL params for other characters in Mixamo. Looking forward to your reply, thanks.
@ZhangMiaoZJU First, thank you for your interest!

The SMPL parameters we provide include a scaling factor, `smpl_scaling`, so the scaling in your visualization is not correct. Please refer to here for how to visualize SMPL joints onto images.

I still have a concern. I used your code, but the results are still squeezed together. Is the smplx version or SMPL_MALE.pkl any different?
@lanchen2019 Are you using this version of smplx? The official smplx library does not support the scaling factor, which I guess is why you get squeezed keypoints.
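For reference, a minimal sketch of the intended call, assuming the forked smplx above; the model path, batch size, and the dummy scale value are placeholders, not taken from the repo:

```python
import numpy as np
import torch
from smplx import SMPL  # the forked smplx that accepts a `scaling` argument

# dummy inputs with the annotation shapes (replace with real loaded data)
nframes, njoints = 4, 24
smpl_poses = np.zeros((nframes, njoints, 3), dtype=np.float32)  # per-joint axis-angle
smpl_trans = np.zeros((nframes, 3), dtype=np.float32)           # root translation
smpl_scaling = np.array([90.0], dtype=np.float32)               # hypothetical scale value

smpl = SMPL(model_path='/path/to/SMPL_MALE.pkl', gender='MALE', batch_size=nframes)
keypoints3d = smpl.forward(
    global_orient=torch.from_numpy(smpl_poses[:, 0:1]).float(),  # [nframes, 1, 3]
    body_pose=torch.from_numpy(smpl_poses[:, 1:]).float(),       # [nframes, njoints-1, 3]
    transl=torch.from_numpy(smpl_trans).float(),
    scaling=torch.from_numpy(smpl_scaling.reshape(1, 1)).float(),
).joints.detach().numpy()
```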
I changed the code to be like this:
```python
import numpy as np
import torch

# zero out transl and scaling inside forward(), then apply them manually below
keypoints3d = smpl.forward(
    global_orient=torch.from_numpy(smpl_poses[:, :3]).float(),
    body_pose=torch.from_numpy(smpl_poses[:, 3:]).float(),
    transl=0. * torch.from_numpy(smpl_trans).float(),
    scaling=0. * torch.from_numpy(smpl_scaling.reshape(1, 1)).float(),
).joints.detach().numpy()
keypoints3d = keypoints3d * smpl_scaling + np.expand_dims(smpl_trans, axis=1)
```
and got an image like this, which should be correct.
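For anyone who wants to double-check the result against the image, one way (a sketch, not the repo's own code) is to project `keypoints3d` with OpenCV, assuming the camera annotations provide an axis-angle rotation `rvec`, a translation `tvec`, and an intrinsic matrix `K`:

```python
import cv2
import numpy as np

def project_keypoints(keypoints3d, rvec, tvec, K, dist=None):
    """Project [nframes, njoints, 3] world keypoints to pixel coordinates."""
    nframes, njoints, _ = keypoints3d.shape
    pts2d, _ = cv2.projectPoints(
        keypoints3d.reshape(-1, 3).astype(np.float64),
        np.asarray(rvec, dtype=np.float64),
        np.asarray(tvec, dtype=np.float64),
        np.asarray(K, dtype=np.float64),
        dist)
    return pts2d.reshape(nframes, njoints, 2)
```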
It is a bit confusing why the original code sets `global_orient=smpl_poses[:, 0:1]`, yet `global_orient=smpl_poses[:, :3]` gets similar results.
Hi @YingZhangDUT, I don't think that is the correct way of using `smpl_poses`. This variable has shape `[nframes, njoints, 3]`, so the global orientation is `smpl_poses[:, 0:1]`, and `smpl_poses[:, 1:]` contains the rotations of the remaining joints.
As for why it nevertheless works as usual, you may want to check the underlying SMPL code here, where these two parts are concatenated right after they are passed in.
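In smplx, the forward pass concatenates them along the joint dimension (something like `full_pose = torch.cat([global_orient, body_pose], dim=1)`), so both slicings reassemble the same full pose. A small self-contained check:

```python
import torch

nframes, njoints = 2, 24
poses = torch.randn(nframes, njoints, 3)  # stand-in for smpl_poses

# slicing on [nframes, njoints, 3], as the annotations are stored
full_a = torch.cat([poses[:, 0:1], poses[:, 1:]], dim=1)

# slicing the flattened [nframes, njoints * 3] view instead
flat = poses.reshape(nframes, -1)
full_b = torch.cat([flat[:, :3], flat[:, 3:]], dim=1)

# once flattened for the Rodrigues conversion, both give the same full pose
assert torch.equal(full_a.reshape(nframes, -1), full_b)
```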
I am trying to project the keypoints, as viewed from a camera at (0,0,0), exactly as in the image, using the `world_to_camera` method and some additional steps. For example:
In the image above, assuming the camera sits at (0,0,0) in the current 3D space, I would like to match the rotation and translation of the keypoints so they line up as shown. (The photo above was adjusted manually and is not exact; also, ignore the scattered points in any color other than the human keypoints below.)
However, when I plot the camera translations from the annotations in 3D space, they appear as follows relative to the other keypoints. You can see that their height does not match the human keypoints, and they do not surround the subject in a full 360 degrees but are clustered in one specific area.
Looking at it a bit more closely, it looks like this:
I think the coordinate scale of the camera translations differs from that of the human keypoint translations, or there is some other reason. If I am missing something, could you explain?
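One guess (not confirmed against the annotations): under the common OpenCV extrinsic convention `x_cam = R @ x_world + t`, the stored translation `t` is not the camera's world position; the camera center is `-R.T @ t`, so plotting raw `t` values next to world-space keypoints will look clustered and mismatched. A minimal sketch, assuming an axis-angle `rvec` and translation `tvec`:

```python
import cv2
import numpy as np

def camera_center_world(rvec, tvec):
    """World-space camera position under x_cam = R @ x_world + t."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    t = np.asarray(tvec, dtype=np.float64).reshape(3, 1)
    return (-R.T @ t).ravel()  # camera center C = -R^T t
```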
Thank you very much for providing the dataset.