facebookresearch / DensePose

A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body
http://densepose.org

Can I use IUV and INDS outputs to build a 2D semantic part segmentation? #252

Open lyrgwlr opened 5 years ago

lyrgwlr commented 5 years ago

I want to get a segmentation that shows the different human parts, like left arm/right arm... The IUV output of DensePose seems to contain the 24-part labeling of the body, and the INDS output marks which pixels belong to each detected person. So, can I combine these two outputs to get a semantic part segmentation? Can this idea work? Please give me some hints. Thanks a lot.
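For reference, this is roughly how I am reading the two outputs (just a sketch; the `demo_im_*` file names are whatever `tools/infer_simple.py` wrote for my test image and may differ for you):

```python
import cv2
import numpy as np

# Assumed file names: infer_simple.py writes <image>_IUV.png and <image>_INDS.png.
iuv = cv2.imread('demo_im_IUV.png')        # H x W x 3: part-index channel plus U/V coordinates
inds = cv2.imread('demo_im_INDS.png', 0)   # H x W: per-pixel person/instance index (0 = background)

print('part labels present:', np.unique(iuv[:, :, 0]))   # expect values in 0-24
print('person instances present:', np.unique(inds))
```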

SriRamGovardhanam commented 5 years ago

Hey, did you try it? I have the same question and just want to know whether it worked or not!

lyrgwlr commented 5 years ago

@SriRamGovardhanam I made it work. Here is a simple hint. The first channel of the IUV output image holds the part index, ranging from 1 to 24 for the 24 body parts (0 is background), so you can get the pixel locations of each part (you can plot it to check). Then you just need to create a new array of shape [h, w, 3], pick a color for each part, and write that color into the array at the corresponding pixels. The result looks like a human part segmentation output.
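A minimal sketch of that recipe (assuming the IUV/INDS files from `infer_simple.py`, that the part index sits in channel 0 of the IUV image as read by OpenCV, and an arbitrary color table):

```python
import cv2
import numpy as np

# Hypothetical output file names from a DensePose infer_simple.py run.
iuv = cv2.imread('demo_im_IUV.png')          # H x W x 3; channel 0 holds the part index
inds = cv2.imread('demo_im_INDS.png', 0)     # H x W; per-pixel person/instance index

parts = iuv[:, :, 0]                          # 0 = background, 1-24 = body parts
h, w = parts.shape

# One arbitrary color per part label; index 0 stays black for the background.
rng = np.random.RandomState(0)
palette = np.zeros((25, 3), dtype=np.uint8)
palette[1:] = rng.randint(0, 256, size=(24, 3), dtype=np.uint8)

# Paint each part's pixels with its color.
seg = np.zeros((h, w, 3), dtype=np.uint8)
for label in range(1, 25):
    seg[parts == label] = palette[label]

# Optional: keep only one person by masking with the INDS instance map, e.g.:
# seg[inds != 1] = 0

cv2.imwrite('part_segmentation.png', seg)
```

If you need a separate part map per person, repeat the masking step for each instance index that appears in INDS.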