Closed yagee97 closed 6 years ago
Same question here
me 3
@lushihan Where..? Sorry..
@yagee97 Oh, I mean I have the same question as you.
@lushihan Oh... I get it now. I'd like to solve this question too.
Still no authors response :(
@yagee97 To get keypoints, you need to train the model with the keypoints head. Please have a look at #34, #39, #48
Thank you for your reply! I have one more question.
What information can I use from the DensePose output? I want to get coordinates such as All_Coords in vis.py,
because I'm going to use the person's 2D/3D coordinates from the DensePose result.
How do I get the 2D/3D coordinates, and where?
Thank you! :) Have a good day!
The output of the DensePose head is generated here. You can see that for every detected person bounding box of size (H, W), you get an output of size (3, H, W). The first channel contains the part index; the other 2 channels contain the regressed inner coordinate values U and V for the corresponding part. Thus 2D image coordinates are obtained from the bounding box coordinates plus the pixel offset within the bounding box. To understand how to map the estimated IUV values to the 3D SMPL model, please have a look at the DensePose-COCO-on-SMPL.ipynb notebook
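The bbox-plus-offset step above can be sketched in a few lines of numpy; the function name and the `(x0, y0, w, h)` bbox layout here are illustrative assumptions, not the actual DensePose API:

```python
# Hypothetical sketch: converting a per-box DensePose IUV output of shape
# (3, H, W) into global 2D image coordinates plus per-pixel I, U, V values.
import numpy as np

def iuv_to_image_coords(iuv, bbox):
    """Return (x, y, i, u, v) arrays for all foreground pixels.

    iuv  : array of shape (3, H, W); channel 0 is the part index I
           (0 = background), channels 1-2 are the U and V coordinates.
    bbox : (x0, y0, w, h) of the detection in the full image (assumed layout).
    """
    x0, y0, _, _ = bbox
    part = iuv[0]                      # part index per pixel
    ys, xs = np.nonzero(part)          # foreground pixels inside the box
    return (xs + x0,                   # global image x
            ys + y0,                   # global image y
            part[ys, xs],              # I
            iuv[1, ys, xs],            # U
            iuv[2, ys, xs])            # V
```

This keeps the pixel grid implicit: each returned element describes one foreground pixel, which is also the flat per-point form used by the visualization notebooks.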
Oh, really, thank you! I've got it now. So let me ask you one last question.
I want to store the im_detect_body_uv result,
so I tried to print its output, but the printed result is [0, 0, 0, 0, ...]!
How can I print or store the values separately? And when visualizing DensePose, are there rules for how the objects are colored?
Thank you very much for your kindness! :)
There are many possibilities for storing results, for example pickle, numpy, or json.
For visualization, I suggest you check the visualization and texture transfer notebooks
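For instance, a result array could be saved and reloaded like this; the `iuv` array and file names below are placeholders, not the actual `im_detect_body_uv` return value:

```python
# One way (of many) to persist per-image DensePose results.
import pickle
import numpy as np

iuv = np.zeros((3, 4, 5), dtype=np.float32)   # placeholder result array

np.save("result_iuv.npy", iuv)                # raw array, fast to reload
with open("result.pkl", "wb") as f:           # arbitrary Python objects
    pickle.dump({"iuv": iuv, "bbox": (10, 20, 5, 4)}, f)

# reload
loaded = np.load("result_iuv.npy")
with open("result.pkl", "rb") as f:
    meta = pickle.load(f)
```

For json you would first convert arrays with `iuv.tolist()`, since json cannot serialize numpy arrays directly.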
I want to use my own image to generate I, U, V and visualize it on the SMPL model. I got the IUV output of shape (3, H, W), where output[0], output[1], and output[2] correspond to I, U, and V respectively. However, when I look at DensePose-COCO-on-SMPL.ipynb, in demo_dp_single_ann.pkl the I, U, V fields are vectors (length 125). So my question is: how can I use the (3, H, W) output to do the visualization, or can you provide the code to generate demo_dp_single_ann.pkl?
Same question as anweiwei: how do we generate that demo_dp_single_ann.pkl file or do the visualization given the IUV output for an arbitrary image?
@ingramator @anweiwei Did you manage to get the xyz from iuv output? do share your code. maybe we can collab and find a fix? I have the IUV output from the model. Cant make sense of it.
@jaggernaut007 I have this working now; are you still interested in seeing it?
@ingramator Hi, how did you get it working? My IUV output from infer_simple.py doesn't seem to fit well when mapped to the SMPL model. Could you please share the script?
@kalyo-zjl @jaggernaut007 Check pull request #99; it provides an excellent sample notebook that shows how it's done! At this stage I am trying to work backwards: how do I map a specific vertex on the SMPL model back to the RGB input image? Does anyone have any ideas?
@ingramator Thank you!
@ingramator This is not straightforward. What you're attempting is 3D reconstruction based on 2D manifold coordinates. This can be done through reprojection error minimization for the visible parts. Try looking into bundle adjustment; Ceres from Google can be a good starting point
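As a toy illustration of the reprojection-error idea (not Ceres and not DensePose code): recover an unknown 3D translation so that pinhole-projected model points match 2D observations, here minimized by numeric-gradient descent. All names and the simple pinhole model are assumptions for the sketch; a real pipeline would use a proper solver such as Ceres.

```python
# Toy reprojection-error minimization: fit a translation t so that the
# projections of known 3D points match observed 2D points.
import numpy as np

def project(points, t, f=1.0):
    """Pinhole projection of 3D points shifted by translation t."""
    p = points + t
    return f * p[:, :2] / p[:, 2:3]

def fit_translation(points3d, obs2d, steps=5000, lr=0.5):
    """Minimize the summed squared reprojection error over t by
    gradient descent with central-difference numeric gradients."""
    t = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros(3)
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = 1e-6
            e_plus = np.sum((project(points3d, t + dt) - obs2d) ** 2)
            e_minus = np.sum((project(points3d, t - dt) - obs2d) ** 2)
            grad[k] = (e_plus - e_minus) / 2e-6
        t -= lr * grad
    return t

# Demo: synthesize observations from a known translation, then recover it.
points3d = np.array([[1.0, 0.0, 5.0], [-1.0, 0.5, 4.0], [0.0, -1.0, 6.0],
                     [0.5, 1.0, 5.0], [-0.5, -0.5, 4.5], [1.0, 1.0, 5.5]])
t_true = np.array([0.1, -0.2, 0.3])
obs2d = project(points3d, t_true)
t_est = fit_translation(points3d, obs2d)
```

Note that the depth component converges much more slowly than the in-plane ones, which is exactly why real bundle adjustment uses second-order solvers rather than plain gradient descent.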
Does anyone have any ideas?
Same problem here; do you have any ideas?
Hi guys, I am following the notebook from https://github.com/facebookresearch/DensePose/pull/99 but I cannot show the points on the SMPL model; the points on the picked person are always of shape (0, 3), with pick_idx=1. Does it work in your cases?
I succeeded in running the DensePose test on video with reference to https://github.com/trrahul/densepose-video. While testing, a question came up.
1. When I detect an object using DensePose, where are the keypoints' coordinates stored (in which variable)? For example, get_keypoints() in keypoints.py?
Thank you! I'd appreciate your detailed opinion.