Nicholasli1995 / EvoSkeleton

Official project website for the CVPR 2020 paper (Oral Presentation) "Cascaded deep monocular 3D human pose estimation with evolutionary training data"
https://arxiv.org/abs/2006.07778
MIT License

projection from 3d pose to 2d #31

Closed. NoLookDefense closed this issue 3 years ago.

NoLookDefense commented 3 years ago

Hello. Thanks for your excellent work. I wonder, after you obtain the 3D keypoints during evolution, do you randomly assume a camera location and project the 3D keypoints onto the target cameras? Does your project have a demo example?

Nicholasli1995 commented 3 years ago

Hi, the camera parameters can be set manually. The user can control the position of the camera relative to the subject. By default, the synthetic 3D skeletons are projected with the H36M camera parameters, which means the synthetic skeletons are placed in front of the 4 cameras provided by the H36M dataset.

For the projection process, see: https://github.com/Nicholasli1995/EvoSkeleton/blob/f31e7c2e453cfb01cbb343c71f3d94dcf98efc4f/libs/dataset/h36m/data_utils.py#L683
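
To make the geometry concrete, here is a minimal, hedged sketch of projecting a synthetic 3D skeleton with H36M-style camera parameters. The function name project_pinhole and all numeric values are illustrative assumptions, not the repository's actual API; it also ignores lens distortion, which the linked projection code does handle.

```python
# Minimal sketch: pinhole projection of a 3D skeleton with H36M-style
# camera parameters (R, T, f, c). All names and values here are illustrative
# assumptions, not the repository's actual API or calibration.
import numpy as np

def project_pinhole(P, R, T, f, c):
    """Project N x 3 world-space points to N x 2 pixel coordinates.

    R: 3x3 world-to-camera rotation, T: 3x1 camera centre in world coordinates,
    f: 2x1 focal lengths (pixels), c: 2x1 principal point (pixels).
    """
    X = R.dot(P.T - T)      # 3 x N points in camera coordinates
    xy = X[:2] / X[2]       # perspective division -> normalised coordinates
    return (f * xy + c).T   # N x 2 pixel coordinates

# Example: a random 17-joint skeleton roughly 5 m in front of the camera (mm).
skeleton = np.random.randn(17, 3) * 200 + np.array([0.0, 0.0, 5000.0])
uv = project_pinhole(skeleton,
                     R=np.eye(3),
                     T=np.zeros((3, 1)),
                     f=np.array([[1145.0], [1143.0]]),
                     c=np.array([[512.0], [515.0]]))
print(uv.shape)  # (17, 2)
```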

NoLookDefense commented 3 years ago

Very happy to get your reply! I saw that the inputs of this function are dictionaries of 3D pose positions and cameras. What is the data format of these dictionaries? Or are there any examples that use the function directly? I can debug it myself. Thank you so much.

Nicholasli1995 commented 3 years ago

The camera parameters are specified with R (3D rotation), T (3D translation), f (focal length), c (image plane center), k (radial distortion), p (tangential distortion), and a name: https://github.com/Nicholasli1995/EvoSkeleton/blob/f31e7c2e453cfb01cbb343c71f3d94dcf98efc4f/libs/dataset/h36m/data_utils.py#L100

See the following function for more details: https://github.com/Nicholasli1995/EvoSkeleton/blob/f31e7c2e453cfb01cbb343c71f3d94dcf98efc4f/libs/dataset/h36m/cameras.py#L8

This deprecated function can visualize the relative positions of cameras and subjects. You may take a look: https://github.com/Nicholasli1995/EvoSkeleton/blob/f31e7c2e453cfb01cbb343c71f3d94dcf98efc4f/libs/dataset/h36m/data_utils.py#L95
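
As a concrete (but hypothetical) illustration of the parameter list above, one camera entry could be organised as a Python dict like the one below. The values are placeholders rather than the actual H36M calibration, and the distortion step follows the standard radial/tangential model used in the 3d-pose-baseline convention.

```python
# Hedged sketch of one H36M-style camera entry; the values are placeholders,
# not the dataset's real calibration.
import numpy as np

camera = {
    "R": np.eye(3),                          # 3x3 world-to-camera rotation
    "T": np.zeros((3, 1)),                   # 3x1 camera centre in world coordinates
    "f": np.array([[1145.0], [1143.0]]),     # focal lengths in pixels
    "c": np.array([[512.0], [515.0]]),       # principal point in pixels
    "k": np.array([[-0.2], [0.24], [0.0]]),  # radial distortion coefficients
    "p": np.array([[0.0], [0.0]]),           # tangential distortion coefficients
    "name": "54138969",                      # camera id string
}

def distort(xy, k, p):
    """Apply radial/tangential distortion to 2 x N normalised image coordinates."""
    r2 = np.sum(xy ** 2, axis=0)                              # squared radius per point
    radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3  # radial term
    tangential = p[0] * xy[1] + p[1] * xy[0]                  # tangential term
    return xy * (radial + tangential) + np.outer(p[::-1].flatten(), r2)
```

Pixel coordinates would then be camera["f"] * distorted + camera["c"], analogous to the undistorted sketch earlier in this thread.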

NoLookDefense commented 3 years ago

Thanks for your reply! I will study it.

NoLookDefense commented 3 years ago

Sorry to bother you again. I still have some questions about your project. I tried to run "python evolve.py -generate True" to generate some 2D-3D pairs, and the code ran successfully. But I saw that "initial_population" has a size of (389983, 96), meaning that each sample has 96 coordinates (maybe 32 joints?), which is larger than the annotation size of Human3.6M. What is the meaning of the remaining coordinates? And what is the parent-child relationship among these joints?

Nicholasli1995 commented 3 years ago

Only 17 joints out of 32 are used during training and inference. You can remove the unused ones without any negative effect. The remaining joints are kept just for compatibility with previous work: https://github.com/una-dinosauria/3d-pose-baseline.

https://github.com/Nicholasli1995/EvoSkeleton/blob/f31e7c2e453cfb01cbb343c71f3d94dcf98efc4f/libs/dataset/h36m/data_utils.py#L23
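
If it helps, here is a hedged sketch of keeping only the 17 named joints from the (N, 96) array. The 32-entry name list follows the 3d-pose-baseline convention linked above; the exact ordering and indices should be checked against data_utils.py before relying on them.

```python
# Hedged sketch: select the 17 used joints from a (N, 96) array of 32 joints
# stored as consecutive (x, y, z) triples. Name list assumed from the
# 3d-pose-baseline convention; verify against data_utils.py.
import numpy as np

H36M_NAMES = [''] * 32
H36M_NAMES[0]  = 'Hip'
H36M_NAMES[1]  = 'RHip'
H36M_NAMES[2]  = 'RKnee'
H36M_NAMES[3]  = 'RFoot'
H36M_NAMES[6]  = 'LHip'
H36M_NAMES[7]  = 'LKnee'
H36M_NAMES[8]  = 'LFoot'
H36M_NAMES[12] = 'Spine'
H36M_NAMES[13] = 'Thorax'
H36M_NAMES[14] = 'Neck/Nose'
H36M_NAMES[15] = 'Head'
H36M_NAMES[17] = 'LShoulder'
H36M_NAMES[18] = 'LElbow'
H36M_NAMES[19] = 'LWrist'
H36M_NAMES[25] = 'RShoulder'
H36M_NAMES[26] = 'RElbow'
H36M_NAMES[27] = 'RWrist'

population = np.random.randn(1000, 96)                  # stand-in for initial_population
joints_used = np.where(np.array(H36M_NAMES) != '')[0]   # indices of the 17 named joints
dims_used = np.sort(np.hstack([joints_used * 3 + i for i in range(3)]))
skeletons_17 = population[:, dims_used].reshape(-1, 17, 3)
print(skeletons_17.shape)  # (1000, 17, 3)
```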

NoLookDefense commented 3 years ago

Thank you. I am clear now. BTW, I saw that the joint-angle constraints for the enhanced poses are loaded from "jointAngleModel_v2.npy", and you said that there would be some official documents. https://github.com/Nicholasli1995/EvoSkeleton/blob/f31e7c2e453cfb01cbb343c71f3d94dcf98efc4f/libs/skeleton/anglelimits.py#L15 Where can I find the related documents?

Nicholasli1995 commented 3 years ago

Hi, you can download the documents at http://poseprior.is.tue.mpg.de/overview. The documentation was written for the old MATLAB implementation, but the usage is similar in this Python implementation.
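
For a quick look at what the file contains, a hedged sketch using plain NumPy is shown below; it assumes jointAngleModel_v2.npy stores a pickled Python object (e.g. a dict of per-joint limits), which is why allow_pickle=True is needed.

```python
# Hedged sketch: inspect the contents of jointAngleModel_v2.npy.
# Assumes the file holds a pickled Python object saved with np.save.
import numpy as np

model = np.load("jointAngleModel_v2.npy", allow_pickle=True)
obj = model.item() if model.shape == () else model   # unwrap a 0-d object array
print(type(obj))
if isinstance(obj, dict):
    for key, value in obj.items():
        print(key, getattr(value, "shape", type(value)))
```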

NoLookDefense commented 3 years ago

Thank you!