mks0601 / I2L-MeshNet_RELEASE

Official PyTorch implementation of "I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image", ECCV 2020
MIT License

Have you used a 3D loss for the SMPLify-X fitting? #84

Open zhLawliet opened 3 years ago

zhLawliet commented 3 years ago

Thanks a lot for sharing the SMPLify-X fits for H36M. Did you use a 3D loss when fitting? I found that the side view is slanted, which suggests the depth is incorrect. [screenshot of the slanted side view]

mks0601 commented 3 years ago

The fits are in the world coordinate system. You should apply the camera extrinsics to render other views.
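
For reference, a minimal sketch of what applying the extrinsics means (variable names here are placeholders, not from the repo): a camera's rotation R and translation t map world coordinates into that camera's coordinate system.

    import numpy as np

    def world_to_cam(mesh_world, R, t):
        """Map (N, 3) world-coordinate vertices into a camera's coordinate system."""
        # x_cam = R @ x_world + t, applied to every vertex
        return mesh_world @ R.T + t.reshape(1, 3)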

zhLawliet commented 3 years ago

@mks0601 Thanks, I see that you have merged the root pose and the camera rotation:

merge root pose and camera rotation

    root_pose = smpl_pose[self.root_joint_idx,:].numpy()  # axis-angle global orientation (3,)
    root_pose, _ = cv2.Rodrigues(root_pose)  # axis-angle -> 3x3 rotation matrix
    root_pose, _ = cv2.Rodrigues(np.dot(R,root_pose))  # prepend camera rotation, back to axis-angle
    smpl_pose[self.root_joint_idx] = torch.from_numpy(root_pose).view(3)  # write merged root pose back
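
For context, a hedged sketch of what this merge does (an illustration, not the repo's exact code): cv2.Rodrigues converts between the 3-vector axis-angle form and a 3x3 rotation matrix, so these lines fold the camera rotation R into SMPL's global orientation. Note that the extrinsic translation t is not handled here; it still has to be applied to the SMPL output separately.

    import cv2
    import numpy as np

    def merge_camera_rotation(root_pose_aa, R):
        """Fold a camera rotation R (3x3) into an axis-angle root pose (3,)."""
        rotmat, _ = cv2.Rodrigues(root_pose_aa)  # axis-angle -> 3x3 rotation matrix
        merged, _ = cv2.Rodrigues(R @ rotmat)    # prepend camera rotation, back to axis-angle
        return merged.reshape(3)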

I am trying to understand. Do you mean that R corresponds to the x-y view and is not suitable for the z-y view, so if I want to show the z-y view I need a different R? The code is:

    smpl_mesh_coord, smpl_joint_coord = self.smpl.layer['neutral'](smpl_pose, smpl_shape)
    smpl_mesh_coord = smpl_mesh_coord.numpy().astype(np.float32).reshape(-1,3)
    fit_mesh_coord_cam = smpl_mesh_coord[...,[2,1,0]]  # xyz->zyx
    fit_mesh_coord_cam = (fit_mesh_coord_cam + 1)/2 * 255
    vis(fit_mesh_coord_cam)

mks0601 commented 3 years ago

I can't understand your question. R is just a rotation matrix, included in the camera extrinsic parameters.

zhLawliet commented 3 years ago

Yes, the camera extrinsic parameters include R and t. I think fit_mesh_coord_cam already has the camera extrinsics applied via the "merge root pose and camera rotation" step, but the side view is still slanted.

mks0601 commented 3 years ago

How did you visualize your results?

zhLawliet commented 3 years ago

The code for the side view is:

    pose, shape, trans = smpl_param['pose'], smpl_param['shape'], smpl_param['trans']
    smpl_pose = torch.FloatTensor(pose).view(-1,3); smpl_shape = torch.FloatTensor(shape).view(1,-1)  # smpl parameters (pose: 72 dimension, shape: 10 dimension)
    R, t = np.array(cam_param['R'], dtype=np.float32).reshape(3,3), np.array(cam_param['t'], dtype=np.float32).reshape(3)  # camera rotation and translation

merge root pose and camera rotation

    root_pose = smpl_pose[self.root_joint_idx,:].numpy()
    root_pose, _ = cv2.Rodrigues(root_pose)
    root_pose, _ = cv2.Rodrigues(np.dot(R,root_pose))
    smpl_pose[self.root_joint_idx] = torch.from_numpy(root_pose).view(3)
    smpl_mesh_coord, smpl_joint_coord = self.smpl.layer['neutral'](smpl_pose, smpl_shape)  # run SMPL with the merged root pose
    smpl_mesh_coord = smpl_mesh_coord.numpy().astype(np.float32).reshape(-1,3)
    fit_mesh_coord_cam = smpl_mesh_coord[...,[2,1,0]]  # xyz->zyx axis swap
    fit_mesh_coord_cam = (fit_mesh_coord_cam + 1)/2 * 255  # rescale to image range
    fit_mesh_coord_cam = vis_mesh(img.copy(), fit_mesh_coord_cam, radius=1, color=(0,0,255), IS_cmap=False)  # draw vertices on the image

mks0601 commented 3 years ago

What is this line?

fit_mesh_coord_cam = smpl_mesh_coord[...,[2,1,0]] #xyz->zyx

And why don't you apply the extrinsic translation?

mks0601 commented 3 years ago

Could you follow my code in Human36M/Human36M.py?

zhLawliet commented 3 years ago

Yes, I followed your code in Human36M/Human36M.py, and I can get the right result for the front view, which applies the extrinsics (R, t) and the internal parameters (cam_param['focal'], cam_param['princpt']):

[screenshot of the front-view result]

The original coordinate system is x, y, z, and this line converts the coordinates (xyz->zyx) to get a side view. I think smpl_mesh_coord already has the camera extrinsics applied via the "merge root pose and camera rotation" step. There are no internal parameters for the swapped axes, so I just want to visualize the overall orientation from the side view.

mks0601 commented 3 years ago

I don't get what 'internal parameters' means. You can just apply the extrinsics, without an axis transpose like xyz->zyx.

zhLawliet commented 3 years ago

Thanks. By 'internal parameters' I mean cam_param['focal'] and cam_param['princpt']. There is just one set of extrinsics, for the front view, and now I want to visualize the overall orientation from the side view. My unclear description may have confused you. Maybe I should change the question: how can I get the correct side view?

mks0601 commented 3 years ago

The extrinsics are defined for all camera viewpoints. You can apply the extrinsics of the side viewpoint.
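
A minimal sketch of that suggestion, assuming the world-coordinate mesh and the side camera's cam_param (with the same 'R', 't', 'focal', 'princpt' keys used above) are available; this is an illustration, not the repo's rendering code:

    import numpy as np

    def project_to_view(mesh_world, cam_param):
        """Transform world-coordinate vertices (N, 3) into a camera view and project to pixels."""
        R = np.array(cam_param['R'], dtype=np.float32).reshape(3, 3)
        t = np.array(cam_param['t'], dtype=np.float32).reshape(3)
        focal, princpt = cam_param['focal'], cam_param['princpt']

        mesh_cam = mesh_world @ R.T + t  # world -> camera coordinates (units of mesh and t must match)
        x = mesh_cam[:, 0] / mesh_cam[:, 2] * focal[0] + princpt[0]
        y = mesh_cam[:, 1] / mesh_cam[:, 2] * focal[1] + princpt[1]
        return np.stack([x, y], axis=1)  # pixel coordinates, e.g. for vis_mesh

Passing the side camera's cam_param here, instead of swapping axes, should then give the correct side view.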

zhLawliet commented 3 years ago

Thanks for your patient reply, I'll try it.

zhLawliet commented 3 years ago

@mks0601 Can you provide the benchmark code for the 3DPW challenge? How can I reproduce the competition performance? [screenshot]

mks0601 commented 3 years ago

Most of the code of the winning entry of the 3DPW challenge is based on this repo. The tracking code was newly added, though.

zhLawliet commented 3 years ago

Thank you for your reply. Your I2L-MeshNet won first and second place in the 3DPW challenge on the unknown-association track, which does not allow using ground-truth data in any form, so how do you get the right person in multi-person scenes? Another question: "bbox_root_pw3d_output.json" is only for 3DPW_test.json, but the 3DPW challenge above uses the entire dataset, including its train, validation, and test splits, for evaluation. It would be great if you could release this part of the code for the ECCV 2020 3DPW challenge.

mks0601 commented 3 years ago

Q. How can you get the right person in multi-person scenes? -> I used a YOLOv5 human detector.
Q. About "bbox_root_pw3d_output.json" -> I used the param stage of I2L-MeshNet, so the RootNet output is not required.

zhLawliet commented 3 years ago

Thank you, I understand. Can you release the part of the code that submits the results for the ECCV 2020 3DPW challenge?

mks0601 commented 3 years ago

Sorry, I don't have the code for the 3DPW challenge. But there is no big change from this repo.

zhLawliet commented 3 years ago

Thanks, I'll try it.

zhLawliet commented 3 years ago

@mks0601 Can you share all of your YOLOv4 results for 3DPW that were used for the 3DPW challenge? There is only the YOLO.json for the test set: "data/PW3D/Human_detection_result/YOLO.json". I tried to get the bounding boxes with YOLOv4 myself, but they couldn't match yours effectively. Thanks.

mks0601 commented 3 years ago

Sorry, we don't have them. What problem are you having?

zhLawliet commented 3 years ago

This seems to be a tracking issue. I want to reproduce your competition performance, which won first and second place in the 3DPW challenge on the unknown-association track. YOLOv4 produces multiple candidate boxes in each frame. How do you choose the best matching box, especially for multiple people and scenes with overlapping people? For example: [screenshots]

mks0601 commented 3 years ago

Most of the code of the winning entry of the 3DPW challenge is based on this repo. The tracking code was newly added, though.

As mentioned above, we added human tracking code.
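
For what it's worth, a minimal sketch of the kind of frame-to-frame box association such a tracking step could use (greedy IoU matching; this is only an illustration, not the authors' actual tracking code):

    import numpy as np

    def iou(a, b):
        """IoU of two boxes in (x1, y1, x2, y2) format."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-8)

    def match_boxes(prev_boxes, cur_boxes, thr=0.3):
        """Greedily assign each previous-frame box to its best-overlapping current box."""
        matches, used = {}, set()
        for i, pb in enumerate(prev_boxes):
            ious = [iou(pb, cb) if j not in used else -1.0 for j, cb in enumerate(cur_boxes)]
            j = int(np.argmax(ious)) if ious else -1
            if j >= 0 and ious[j] > thr:
                matches[i] = j  # track i continues with detection j
                used.add(j)
        return matches  # track id -> index of detection in the current frame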

zhLawliet commented 3 years ago

ok, thanks