bharat-b7 / MultiGarmentNetwork

Repo for "Multi-Garment Net: Learning to Dress 3D People from Images, ICCV'19"

How to get the SMPL pose parameters predicted by MGN ? #24

Closed Qingcsai closed 4 years ago

Qingcsai commented 4 years ago

Hi @bharat-b7, you said MGN predicts the SMPL pose parameters in https://github.com/bharat-b7/MultiGarmentNetwork/issues/6#issuecomment-557070246.
But I still can't find the right way to get them. Could you give me some more details? For example, which variable in the code represents the pose parameters predicted by MGN? Thanks for your kind reply!

Qingcsai commented 4 years ago

Besides, in the paper there is a quantitative comparison of garment mean vertex-to-surface error with GT and predicted poses. I think the predicted pose here refers to the pose predicted by MGN, but what does the 'GT (ground-truth)' pose mean, and how are the GT pose values obtained? Looking forward to your reply, thank you!

bharat-b7 commented 4 years ago

> Hi @bharat-b7, you said MGN predicts the SMPL pose parameters in #6 (comment). But I still can't find the right way to get them. Could you give me some more details? For example, which variable in the code represents the pose parameters predicted by MGN? Thanks for your kind reply!

After calling MGN, such as at https://github.com/bharat-b7/MultiGarmentNetwork/blob/fbd22e4012ee835abe1210fcf1a5ae2fae145ed5/test_network.py#L55,

`out['pose_']` would give you the predicted pose corresponding to the image.
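For concreteness, a minimal sketch of reading that prediction, assuming `out` is the dictionary returned by the forward pass in test_network.py (the per-image key names `pose_0`, `pose_1`, ... follow the convention used later in this thread; the exact call may differ in your version of the code):

```python
import numpy as np

# `out` is assumed to be the prediction dict produced around test_network.py#L55.
# The exact function name that returns it is an assumption; check the script.

pose_0 = np.asarray(out['pose_0'])   # predicted SMPL pose for the first input image
print(pose_0.shape)                  # expected: (batch, 24, 3, 3) joint rotation matrices
```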

bharat-b7 commented 4 years ago

> Besides, in the paper there is a quantitative comparison of garment mean vertex-to-surface error with GT and predicted poses. I think the predicted pose here refers to the pose predicted by MGN, but what does the 'GT (ground-truth)' pose mean, and how are the GT pose values obtained? Looking forward to your reply, thank you!

To ablate the error in shape estimation caused by incorrect pose prediction, we trained MGN to predict only the shape parts and used the GT pose for the image. The GT pose can be obtained either by registering the 3D scan or by using OpenPose + bundle adjustment if you have multi-view images.

Qingcsai commented 4 years ago

Thanks for your quick reply! I got the predicted pose corresponding to the images by following your advice (`out['pose']`), but I am now confused about how to get the final pose of a single SMPL model. Do we need to compute the mean of the pose parameters corresponding to these N images? I mean, there are N sets of pose parameters corresponding to N images, but the final output SMPL model only needs one set of pose parameters.

I hope my problem is described accurately enough. Looking forward to your reply!
Thank you!

LiuYuZzz commented 4 years ago

I got the pose parameters with shape [n, 24, 3, 3] (n is the batch size) predicted by MGN, but the input pose parameters' shape is 72 in dress_SMPL.py. How should I deal with this? Should I use the Rodrigues formula to convert it?

bharat-b7 commented 4 years ago

> Thanks for your quick reply! I got the predicted pose corresponding to the images by following your advice (`out['pose']`), but I am now confused about how to get the final pose of a single SMPL model. Do we need to compute the mean of the pose parameters corresponding to these N images? I mean, there are N sets of pose parameters corresponding to N images, but the final output SMPL model only needs one set of pose parameters.
>
> I hope my problem is described accurately enough. Looking forward to your reply! Thank you!

Since each input image can have a different pose, MGN predicts SMPL pose parameters per frame. Merging the predicted poses depends on the input and the use case.
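As a sketch only (not from the repo), one simple option is to average the per-joint rotation matrices across frames and project the mean back onto a valid rotation with an SVD (a chordal mean); whether averaging makes sense depends on how similar the per-frame poses are. The merged matrices can then be converted to axis-angle as described in the next answer.

```python
import numpy as np

def mean_rotation(Rs):
    """Chordal mean: average the rotation matrices, then project back onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(np.mean(Rs, axis=0))
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against an improper rotation (reflection)
        U[:, -1] *= -1
        R = U @ Vt
    return R

def merge_frame_poses(per_frame_poses):
    """per_frame_poses: list of (24, 3, 3) arrays,
    e.g. [out['pose_0'][0], out['pose_1'][0], ...] (key names assumed).
    Returns a single (24, 3, 3) array of per-joint mean rotations."""
    stacked = np.stack(per_frame_poses)               # (n_frames, 24, 3, 3)
    return np.stack([mean_rotation(stacked[:, j]) for j in range(24)])
```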

bharat-b7 commented 4 years ago

> I got the pose parameters with shape [n, 24, 3, 3] (n is the batch size) predicted by MGN, but the input pose parameters' shape is 72 in dress_SMPL.py. How should I deal with this? Should I use the Rodrigues formula to convert it?

The 24 x 3 x 3 array corresponds to 24 joint rotation matrices (each 3 x 3). You can convert the rotation matrices to axis-angle format for SMPL.
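For example, a small helper (not part of the repo) that does this conversion with OpenCV's `cv2.Rodrigues`, assuming the (24, 3, 3) layout described above:

```python
import numpy as np
import cv2

def rotmats_to_smpl_pose(rotmats):
    """Convert (24, 3, 3) joint rotation matrices to the (72,) axis-angle
    vector that smpl.pose in dress_SMPL.py expects."""
    pose = np.zeros(24 * 3)
    for j, R in enumerate(rotmats):
        # cv2.Rodrigues maps a 3x3 rotation matrix to its 3-vector axis-angle form
        pose[3 * j:3 * j + 3] = cv2.Rodrigues(np.asarray(R, dtype=np.float64))[0].ravel()
    return pose

# e.g. for the first batch element of the first image (indexing assumed):
# smpl.pose[:] = rotmats_to_smpl_pose(out['pose_0'][0])
```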

neonb88 commented 4 years ago

Hi @bharat-b7 , thanks as always!

Is the Euler–Rodrigues formula the right format for the rotation matrices in `out['pose_<image_number>']`?

[image: Euler–Rodrigues rotation matrix]

neonb88 commented 4 years ago

And second, @bharat-b7, like LiuYuZzz asked, is there code that automatically converts the poses in `out['pose_<image_number>']` to the 3 angles per joint? (Because `out['pose_<image_number>'].shape == (24, 3, 3)` but `smpl.pose.shape == (72,)` at line 123 of dress_SMPL.py.)

neonb88 commented 4 years ago

I've tried `angles, _ = cv2.Rodrigues(out['pose_0'][0, 0])`, but I'm not sure whether it's working. I'm fairly sure it's not the formalism from the Rodrigues rotation formula, though, since the value of `out['pose_0'][0, 0]` I saw was asymmetric.
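Note that rotation matrices are generally not symmetric, so asymmetry by itself does not mean `cv2.Rodrigues` is inapplicable. A quick sanity check (sketch only, not from the repo) is to verify orthogonality and unit determinant before converting:

```python
import numpy as np
import cv2

def is_rotation_matrix(R, tol=1e-4):
    """A proper rotation matrix satisfies R @ R.T == I and det(R) == 1."""
    R = np.asarray(R, dtype=np.float64)
    return (np.allclose(R @ R.T, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(R), 1.0, atol=tol))

# R = out['pose_0'][0, 0]              # first batch element, first joint (indexing assumed)
# if is_rotation_matrix(R):
#     axis_angle, _ = cv2.Rodrigues(np.asarray(R, dtype=np.float64))
```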

neonb88 commented 4 years ago

> I got the pose parameters with shape [n, 24, 3, 3] (n is the batch size) predicted by MGN, but the input pose parameters' shape is 72 in dress_SMPL.py. How should I deal with this? Should I use the Rodrigues formula to convert it?

@LiuYuZzz did you get some code working?