facebookresearch / DensePose

A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body
http://densepose.org

I have one question about densepose result! #74

Closed yagee97 closed 6 years ago

yagee97 commented 6 years ago

I succeeded in running the DensePose test on a video by referring to https://github.com/trrahul/densepose-video. While testing, I came up with two questions:

**1. When I detect an object using DensePose, where are the keypoints' coordinates stored (in which variable)? For example, get_keypoints() in keypoints.py?

2. What information can I use from the DensePose data?**

Thank you! I would appreciate your detailed opinion.

lushihan commented 6 years ago

Same question here

fire17 commented 6 years ago

me 3

yagee97 commented 6 years ago

@lushihan Where...? Sorry, I don't understand.

lushihan commented 6 years ago

@yagee97 Oh I mean I have the same questions as yours

yagee97 commented 6 years ago

@lushihan Oh... I get it now. I still want to solve this question.

lushihan commented 6 years ago

Still no response from the authors :(

vkhalidov commented 6 years ago

@yagee97 To get keypoints, you need to train the model with the keypoints head. Please have a look at #34, #39, #48

yagee97 commented 6 years ago

Thank you for your reply! I have one more question.

What information can I use from the DensePose data? I want to get coordinates such as All_Coords in vis.py, because I'm going to use people's 2D/3D coordinates from the DensePose result.

How do I get the 2D/3D coordinates, and where?

Thank you! :) Have a good day!

vkhalidov commented 6 years ago

The output of the DensePose head is generated here. You can see that for every detected person bounding box of size (H, W), you get an output of size (3, H, W). The first channel contains the part index; the other 2 channels contain the regressed inner coordinate values U and V for the corresponding part. Thus 2D image coordinates are obtained from the bounding box coordinates plus the pixel offset within the bounding box.

To understand how to map the estimated IUV values to the 3D SMPL model, please have a look at the DensePose-COCO-on-SMPL.ipynb notebook.
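As a rough illustration of the mapping described above, here is a minimal numpy sketch (the function and variable names are illustrative, not identifiers from the DensePose codebase), assuming iuv is the (3, H, W) output for one detection and box its (x1, y1, x2, y2) bounding box:

```python
import numpy as np

# Minimal sketch; `iuv_to_image_coords`, `iuv` and `box` are illustrative
# names, not identifiers from the DensePose codebase.
def iuv_to_image_coords(iuv, box):
    """iuv: (3, H, W) head output for one detection; box: (x1, y1, x2, y2)."""
    x1, y1 = int(round(box[0])), int(round(box[1]))
    parts = iuv[0]                   # channel 0: body part index (0 = background)
    u, v = iuv[1], iuv[2]            # channels 1-2: inner coordinates per part
    ys, xs = np.nonzero(parts > 0)   # pixels assigned to some body part
    # 2D image coordinates = bounding box origin + pixel offset within the box
    coords_2d = np.stack([x1 + xs, y1 + ys], axis=1)
    return coords_2d, parts[ys, xs], u[ys, xs], v[ys, xs]
```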

yagee97 commented 6 years ago

Oh, thank you so much! I've got that problem solved. So let me ask you one last question: I want to store the im_detect_body_uv result, so I tried to print its output, but the printed result is [0, 0, 0, 0, ...]!

How can I print or store the results separately? And when visualizing the DensePose output, are there rules for how objects are colored?

Thank you very much for your kindness! :)

vkhalidov commented 6 years ago

There are many ways to store the results, for example pickle, numpy, or json.

For visualization, I suggest you check the visualization and texture transfer notebooks.
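For instance, a minimal sketch of the numpy and pickle options, assuming iuv holds the (3, H, W) output for one detection (the variable and file names here are illustrative):

```python
import pickle
import numpy as np

# Stand-in for a real detection's (3, H, W) output; illustrative only.
iuv = np.zeros((3, 256, 256), dtype=np.float32)

np.save('densepose_iuv.npy', iuv)             # numpy binary format
with open('densepose_iuv.pkl', 'wb') as f:    # or pickle
    pickle.dump(iuv, f)

iuv_restored = np.load('densepose_iuv.npy')   # load it back later
```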

anweiwei commented 6 years ago

I want to use my own image to generate I, U, V and visualize them on the SMPL model. I got the IUV output of shape [3, H, W], where output[0], output[1], and output[2] correspond to I, U, and V respectively. However, in the DensePose-COCO-on-SMPL.ipynb notebook, the I, U, V in demo_dp_single_ann.pkl are vectors (length 125). So my question is: how can I use the [3, H, W] output to do the visualization? Or can you provide the code that generates demo_dp_single_ann.pkl?

roboticlemon commented 6 years ago

Same question as anweiwei: how do we generate that demo_dp_single_ann.pkl file, or do the visualisation, given the IUV output for an arbitrary image?

jaggernaut007 commented 5 years ago

@ingramator @anweiwei Did you manage to get the XYZ coordinates from the IUV output? Do share your code; maybe we can collaborate and find a fix. I have the IUV output from the model but can't make sense of it.

roboticlemon commented 5 years ago

@jaggernaut007 I have this working now. Are you still interested in seeing it?

kalyo-zjl commented 5 years ago

@ingramator Hi, how did you get it working? My IUV output from infer_simple.py doesn't seem to fit well when I map it to the SMPL model. Could you please share the script?

roboticlemon commented 5 years ago

@kalyo-zjl @jaggernaut007 Check pull request #99; it provides an excellent sample notebook that shows how it's done! At this stage I am trying to work backwards: for instance, how do I map a specific vertex on the SMPL model back to the RGB input image? Does anyone have any ideas?

kalyo-zjl commented 5 years ago

@ingramator Thank you!

vkhalidov commented 5 years ago

@ingramator this is not straightforward. What you're after is 3D reconstruction based on 2D manifold coordinates. This can be done through reprojection error minimization for the visible parts. You can try looking into bundle adjustment; ceres from Google can be a good starting point.
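To give a flavor of the reprojection-error idea, here is a minimal sketch (not a full bundle adjustment), assuming you already have 2D-3D correspondences, e.g. SMPL vertices matched to image pixels via their IUV values, and a simple pinhole camera; all names and the focal length are assumptions, not DensePose code:

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch of reprojection error minimization. `points_3d` (N, 3) are
# SMPL vertices matched via IUV to `points_2d` (N, 2) image pixels; `f` is an
# assumed pinhole focal length. Parameters: axis-angle rotation + translation.
def residuals(params, points_3d, points_2d, f):
    rvec, tvec = params[:3], params[3:]
    theta = np.linalg.norm(rvec) + 1e-12
    k = rvec / theta
    # Rodrigues' formula: rotate each point about axis k by angle theta
    rotated = (points_3d * np.cos(theta)
               + np.cross(k, points_3d) * np.sin(theta)
               + k * (points_3d @ k)[:, None] * (1 - np.cos(theta)))
    cam = rotated + tvec                  # points in the camera frame
    proj = f * cam[:, :2] / cam[:, 2:3]   # pinhole projection to pixels
    return (proj - points_2d).ravel()     # residuals to minimize

# Usage: start with the model a few units in front of the camera.
# x0 = np.concatenate([np.zeros(3), [0.0, 0.0, 5.0]])
# fit = least_squares(residuals, x0, args=(points_3d, points_2d, 1000.0))
```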

kekedan commented 5 years ago

> Does anyone have any ideas

I have the same problem; do you have any ideas?

wine3603 commented 5 years ago

Hi guys, I am following the notebook from https://github.com/facebookresearch/DensePose/pull/99, but I cannot show the points on the SMPL model: the points array for the picked person always has shape (0, 3) with pick_idx=1. Does it work well in your cases?