DenisTome / Lifting-from-the-Deep-release

Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"
https://denistome.github.io/papers/lifting-from-the-deep
GNU General Public License v3.0

Where in the code are the 2D belief-map lifting, projected 2D pose, and fusion layer? #31

Closed: HanlunAI closed this issue 5 years ago

HanlunAI commented 5 years ago

In my understanding, an input image goes through PersonNet (to locate each person), PoseNet (to take the most confident pixel of each belief map as the location of each landmark), and Prob3dPose (to compute the 3D pose).
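
To make sure I am reading the pipeline correctly, here is a minimal sketch of how I call it (I am assuming the `PoseEstimator` wrapper from `applications/demo.py` bundles the three stages; the checkpoint paths below are placeholders):

```python
import cv2
from lifting import PoseEstimator

# Placeholder paths: point these at the downloaded checkpoint files.
SESSION_PATH = 'data/saved_sessions/init_session/init'
PROB_MODEL_PATH = 'data/saved_sessions/prob_model/prob_model_params.mat'

image = cv2.imread('test.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# My assumption: PoseEstimator wraps PersonNet + PoseNet + Prob3dPose.
estimator = PoseEstimator(image.shape, SESSION_PATH, PROB_MODEL_PATH)
estimator.initialise()

# pose_2d: landmark pixel coordinates, visibility: per-joint flags,
# pose_3d: lifted 3D joint positions.
pose_2d, visibility, pose_3d = estimator.estimate(image)
estimator.close()
```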

For Prob3dPose, the paper introduces steps for lifting the 2D belief maps into 3D, projecting the result back to new 2D pose belief maps, and learning weights for the 2D fusion. Where can we find the code for this part?
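
To be explicit about which step I mean, here is a rough, purely illustrative sketch (not the paper's code; the orthographic projection, Gaussian width, and fixed fusion weight are my own simplifications of what I understand the layers to do):

```python
import numpy as np

def project_and_render(pose_3d, H=46, W=46, sigma=1.0, scale=8.0):
    """Conceptual only: orthographically project a 3D pose (3 x J) back to
    the belief-map grid and render one Gaussian belief map per joint."""
    # Hypothetical projection: drop depth, rescale to the map resolution.
    xy = pose_3d[:2] / scale + np.array([[W / 2.0], [H / 2.0]])
    ys, xs = np.mgrid[0:H, 0:W]
    maps = np.stack([
        np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        for x, y in zip(xy[0], xy[1])
    ])
    return maps  # (J, H, W)

def fuse(cpm_maps, projected_maps, w=0.5):
    """Hypothetical fusion: the paper learns the weighting; here it is
    shown as a fixed convex combination for illustration only."""
    return w * cpm_maps + (1.0 - w) * projected_maps
```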

Please help. Thank you.

HanlunAI commented 5 years ago

You also mentioned that these novel layers were implemented as an extension of the Convolutional Pose Machines. I guess we can find the code there.

DenisTome commented 5 years ago

That part is not available in the public repository. The repository contains only the final operation of taking the heat maps and lifting them to 3D.
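
In other words, the public code only reduces each heat map to a 2D joint location plus a confidence and feeds those to the probabilistic 3D model, along the lines of this simplified sketch (the actual code in the repository is organised differently):

```python
import numpy as np

def heat_maps_to_joints(heat_maps):
    """Reduce per-joint heat maps (J, H, W) to pixel coordinates and
    confidences; these are what the probabilistic model lifts to 3D."""
    J, H, W = heat_maps.shape
    flat = heat_maps.reshape(J, -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), (H, W))
    confidences = flat.max(axis=1)
    joints_2d = np.stack([xs, ys], axis=1)  # (J, 2) in (x, y) order
    return joints_2d, confidences
```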

HanlunAI commented 5 years ago

Thank you!