facebookresearch / DensePose

A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body
http://densepose.org

Generating the unwrapped textures #68

Open robbiebarrat opened 6 years ago

robbiebarrat commented 6 years ago

The Jupyter notebook for texture transfer talks about how to map the textures from the dataset onto models, but is there any way to generate these unwrapped textures from the images?

Also, when I say "textures" I mean these - I would love to be able to generate them (or something similar) from the models.

penincillin commented 6 years ago

I have the same question. Given a 2D image and the estimated UV coordinates for it, how do you get the unwrapped textures of the humans in that image? Is there any code in this repo, or any tools/open-source code you would recommend, that could achieve this? Thanks!

robbiebarrat commented 6 years ago

@vkhalidov @ralpguler please - any pointers?

kinsou commented 6 years ago

@robbiebarrat Hello, have you solved this problem?

robbiebarrat commented 6 years ago

@kinsou unfortunately not

mbaradad commented 5 years ago

Adapting the code from the texture-transfer notebook (https://github.com/facebookresearch/DensePose/blob/master/notebooks/DensePose-RCNN-Texture-Transfer.ipynb) to do the inverse works:

    # `image` and `IUV` are assumed to be channel-first (3, H, W) arrays here,
    # with IUV channels ordered I (part index), U, V.
    import numpy as np

    U = IUV[1, :, :]
    V = IUV[2, :, :]
    parts = list()
    for PartInd in range(1, 25):    ## Set to range(1, 23) to ignore the face parts.
        actual_part = np.zeros((3, 200, 200))
        x, y = np.where(IUV[0, :, :] == PartInd)
        if len(x) == 0:
            parts.append(actual_part)
            continue
        u_current_points = U[x, y]   # Pixels that belong to this specific part.
        v_current_points = V[x, y]
        # U/V are stored in 0..255; map them onto a 200x200 texture tile.
        tex_map_coords = ((255 - v_current_points) * 199. / 255.).astype(int), (u_current_points * 199. / 255.).astype(int)
        for c in range(3):
            actual_part[c, tex_map_coords[0], tex_map_coords[1]] = image[c, x, y]
        parts.append(actual_part)
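
A self-contained way to drive this end to end (a sketch, not code from the repo: it assumes the image and its DensePose result are on disk, e.g. demo_im.jpg plus the demo_im_IUV.png written by tools/infer_simple.py, and it tiles the 24 parts into the standard 6x4 atlas):

import cv2
import numpy as np

def extract_texture_atlas(im_path, iuv_path, tile=200):
    # Load both images channel-first: (3, H, W). IUV channels are I, U, V.
    im = cv2.imread(im_path).transpose(2, 0, 1)
    IUV = cv2.imread(iuv_path).transpose(2, 0, 1)
    U, V = IUV[1, :, :], IUV[2, :, :]
    parts = []
    for part_ind in range(1, 25):                 # the 24 DensePose body parts
        actual_part = np.zeros((3, tile, tile))
        x, y = np.where(IUV[0, :, :] == part_ind)
        if len(x) > 0:
            # U/V are stored in 0..255; map them into the tile's pixel grid.
            rows = ((255 - V[x, y]) * (tile - 1.) / 255.).astype(int)
            cols = (U[x, y] * (tile - 1.) / 255.).astype(int)
            for c in range(3):
                actual_part[c, rows, cols] = im[c, x, y]
        parts.append(actual_part)
    # Tile the parts into a 6x4 atlas, part 1 in the top-left corner.
    atlas = np.zeros((3, tile * 6, tile * 4))
    for i in range(4):
        for j in range(6):
            atlas[:, tile * j:tile * (j + 1), tile * i:tile * (i + 1)] = parts[i * 6 + j]
    return atlas

# atlas = extract_texture_atlas('demo_im.jpg', 'demo_im_IUV.png')
# cv2.imwrite('texture_atlas.png', atlas.transpose(1, 2, 0).astype(np.uint8))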
ghost commented 5 years ago

You are a genius! @mbaradad

robbiebarrat commented 5 years ago

@mbaradad @viperit I'm afraid I'm not as much of a genius. Which lines of that notebook do I replace with this code chunk? I've tried a few different things, but I haven't gotten it to work... If anyone has a gist of the full modified notebook, I'd be really grateful. Thanks!!

ghost commented 5 years ago

import numpy as np
import matplotlib.pyplot as plt

def get_texture(im, IUV, solution=32):
    #
    # inputs:
    #   solution is the tile size of each part in the generated texture; in the
    #   notebook provided by facebookresearch the solution is 200.
    #   A larger solution gives a sparser texture; a smaller solution gives a denser one.
    #   im is the original image
    #   IUV is the DensePose result for im
    # output:
    #   TextureIm, the 24-part texture of im according to IUV
    solution_float = float(solution) - 1

    U = IUV[:, :, 1]
    V = IUV[:, :, 2]
    parts = list()
    for PartInd in range(1, 25):    ## Set to range(1, 23) to ignore the face parts.
        actual_part = np.zeros((solution, solution, 3))
        x, y = np.where(IUV[:, :, 0] == PartInd)
        if len(x) == 0:
            parts.append(actual_part)
            continue

        u_current_points = U[x, y]   # Pixels that belong to this specific part.
        v_current_points = V[x, y]
        tex_map_coords = ((255 - v_current_points) * solution_float / 255.).astype(int), (u_current_points * solution_float / 255.).astype(int)
        for c in range(3):
            actual_part[tex_map_coords[0], tex_map_coords[1], c] = im[x, y, c]
        parts.append(actual_part)

    # Tile the 24 parts into a 6x4 atlas (this block belongs outside the loop).
    TextureIm = np.zeros([solution * 6, solution * 4, 3])
    for i in range(4):
        for j in range(6):
            TextureIm[(solution * j):(solution * j + solution), (solution * i):(solution * i + solution), :] = parts[i * 6 + j]

    plt.figure(figsize=(25, 25))
    plt.imshow(TextureIm.transpose([1, 0, 2])[:, :, ::-1] / 255)
    return TextureIm
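
For example (a sketch; the file names are placeholders, and the IUV image must have the same height and width as the input image):

import cv2
import numpy as np

im = cv2.imread('demo_im.jpg')           # H x W x 3 (BGR)
IUV = cv2.imread('demo_im_IUV.png')      # H x W x 3, channels I, U, V
tex = get_texture(im, IUV, solution=32)  # (6*32) x (4*32) x 3 atlas
cv2.imwrite('texture_atlas.png', tex.astype(np.uint8))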
kekedan commented 5 years ago

Hi @robbiebarrat, I have some code for this task. It works for me. Maybe it will help you.

def get_texture(im, IUV, solution=32):
    #
    # inputs:
    #   solution is the tile size of each part in the generated texture; in the
    #   notebook provided by facebookresearch the solution is 200.
    #   A larger solution gives a sparser texture; a smaller solution gives a denser one.
    #   im is the original image
    #   IUV is the DensePose result for im
    # output:
    #   TextureIm, the 24-part texture of im according to IUV
    solution_float = float(solution) - 1

    U = IUV[:, :, 1]
    V = IUV[:, :, 2]
    parts = list()
    for PartInd in range(1, 25):    ## Set to range(1, 23) to ignore the face parts.
        actual_part = np.zeros((solution, solution, 3))
        x, y = np.where(IUV[:, :, 0] == PartInd)
        if len(x) == 0:
            parts.append(actual_part)
            continue

        u_current_points = U[x, y]   # Pixels that belong to this specific part.
        v_current_points = V[x, y]
        tex_map_coords = ((255 - v_current_points) * solution_float / 255.).astype(int), (u_current_points * solution_float / 255.).astype(int)
        for c in range(3):
            actual_part[tex_map_coords[0], tex_map_coords[1], c] = im[x, y, c]
        parts.append(actual_part)

    # Tile the 24 parts into a 6x4 atlas (this block belongs outside the loop).
    TextureIm = np.zeros([solution * 6, solution * 4, 3])
    for i in range(4):
        for j in range(6):
            TextureIm[(solution * j):(solution * j + solution), (solution * i):(solution * i + solution), :] = parts[i * 6 + j]

    plt.figure(figsize=(25, 25))
    plt.imshow(TextureIm.transpose([1, 0, 2])[:, :, ::-1] / 255)
    return TextureIm

Hi, thank you for your code, but I have an issue with it: I find that the body parts' positions in the texture image are different from the demo. Maybe I am using it wrong?

ghost commented 5 years ago

@kekedan Maybe the detection is wrong. You should show the IUV and segmentation results to be sure. In my experience, the pre-trained DensePose model can produce faulty results on blurry images.

kekedan commented 5 years ago

@kekedan Maybe the detection is wrong. You should show the IUV and segmentation results to be sure. In my experience, the pre-trained DensePose model can produce faulty results on blurry images.

Thanks. The results look like this - input: 001, IUV: 002, result: 003. You can see the result is wrong. Any advice? Thanks!

mbaradad commented 5 years ago

@kekedan you need to interpolate the zero values in the convex hull for each part (or just upsample the original image), for example using cv2.inpaint as follows:

# This replaces `parts.append(actual_part)` inside the per-part loop above;
# `actual_part` here is the channel-first (3, H, W) array from that snippet.
valid_mask = np.array((actual_part.sum(0) != 0)*1, dtype='uint8')
radius_increase = 10
kernel = np.ones((radius_increase, radius_increase), np.uint8)
dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
region_to_fill = dilated_mask - valid_mask
invalid_region = 1 - valid_mask
actual_part_max = actual_part.max()
actual_part_min = actual_part.min()
actual_part_uint = np.array((actual_part - actual_part_min)/(actual_part_max - actual_part_min)*255, dtype='uint8')
actual_part_uint = cv2.inpaint(actual_part_uint.transpose((1,2,0)), invalid_region, 1, cv2.INPAINT_TELEA).transpose((2,0,1))
actual_part = (actual_part_uint/255.0)*(actual_part_max - actual_part_min) + actual_part_min
#only use dilated part
actual_part = actual_part * dilated_mask
parts.append(actual_part)
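
Wrapped into a helper, the same idea looks like this (a sketch under the same channel-first assumption; it also guards against completely empty parts, where max == min would divide by zero):

import cv2
import numpy as np

def inpaint_part(actual_part, radius_increase=10):
    # Mask of texels that actually received a pixel (any nonzero channel).
    valid_mask = np.array((actual_part.sum(0) != 0) * 1, dtype='uint8')
    kernel = np.ones((radius_increase, radius_increase), np.uint8)
    dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
    invalid_region = 1 - valid_mask
    pmin, pmax = actual_part.min(), actual_part.max()
    if pmax == pmin:                     # empty part: nothing to inpaint
        return actual_part
    # Normalize to uint8 for cv2.inpaint, fill the holes, then undo the scaling.
    part_uint = np.array((actual_part - pmin) / (pmax - pmin) * 255, dtype='uint8')
    part_uint = cv2.inpaint(part_uint.transpose((1, 2, 0)), invalid_region, 1,
                            cv2.INPAINT_TELEA).transpose((2, 0, 1))
    filled = (part_uint / 255.0) * (pmax - pmin) + pmin
    # Keep only the slightly dilated region around the original texels.
    return filled * dilated_mask

# Inside the per-part loop, instead of parts.append(actual_part):
#     parts.append(inpaint_part(actual_part))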
ghost commented 5 years ago

@kekedan Hi, your IUV map looks right, but the extracted texture is wrong. Did you use the function I provided directly, or did you make some changes?

kekedan commented 5 years ago

@kekedan Hi, your IUV map looks right, but the extracted texture is wrong. Did you use the function I provided directly, or did you make some changes?

I made some changes, because the result looked wrong. The following image is from running your function directly: tim 20190106102419

ghost commented 5 years ago

@kekedan Hi, good to see you again! I think my function's result is right. If you want a clearer result, just try a smaller solution.

kekedan commented 5 years ago

@kekedan you need to interpolate the zero values in the convex hull for each part (or just upsample the original image), for example using cv2.inpaint as follows:

valid_mask = np.array((actual_part.sum(0) != 0)*1, dtype='uint8')
radius_increase = 10
kernel = np.ones((radius_increase, radius_increase), np.uint8)
dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
region_to_fill = dilated_mask - valid_mask
invalid_region = 1 - valid_mask
actual_part_max = actual_part.max()
actual_part_min = actual_part.min()
actual_part_uint = np.array((actual_part - actual_part_min)/(actual_part_max - actual_part_min)*255, dtype='uint8')
actual_part_uint = cv2.inpaint(actual_part_uint.transpose((1,2,0)), invalid_region, 1, cv2.INPAINT_TELEA).transpose((2,0,1))
actual_part = (actual_part_uint/255.0)*(actual_part_max - actual_part_min) + actual_part_min
#only use dilated part
actual_part = actual_part * dilated_mask
parts.append(actual_part)

Great, you are right! Thanks!

ghost commented 5 years ago

@kekedan I found my problem! The block's indentation was wrong! Btw, I tested your example by downloading the image and the IUV image, and I found that the IUV image is bigger than the image. Is that right?

kekedan commented 5 years ago

@kekedan I found my problem! The block's indentation was wrong! Btw, I tested your example by downloading the image and the IUV image, and I found that the IUV image is bigger than the image. Is that right?

I corrected the indentation; the uploaded image was cut off. So your code is right, but the resulting image is too blurred, and I found that @mbaradad's method can solve that. Thank you ~

BostonLobster commented 5 years ago

@kekedan you need to interpolate the zero values in the convex hull for each part (or just upsample the original image), for example using cv2.inpaint as follows:

valid_mask = np.array((actual_part.sum(0) != 0)*1, dtype='uint8')
radius_increase = 10
kernel = np.ones((radius_increase, radius_increase), np.uint8)
dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
region_to_fill = dilated_mask - valid_mask
invalid_region = 1 - valid_mask
actual_part_max = actual_part.max()
actual_part_min = actual_part.min()
actual_part_uint = np.array((actual_part - actual_part_min)/(actual_part_max - actual_part_min)*255, dtype='uint8')
actual_part_uint = cv2.inpaint(actual_part_uint.transpose((1,2,0)), invalid_region, 1, cv2.INPAINT_TELEA).transpose((2,0,1))
actual_part = (actual_part_uint/255.0)*(actual_part_max - actual_part_min) + actual_part_min
#only use dilated part
actual_part = actual_part * dilated_mask
parts.append(actual_part)

@mbaradad Hi, thanks for your great code. However, when I use this chunk of code, I get the following error, which is related to the cv2.inpaint() function:

error: OpenCV(3.4.4) /io/opencv/modules/photo/src/inpaint.cpp:759: error: (-210:Unsupported format or combination of formats) 8-bit, 16-bit unsigned or 32-bit float 1-channel and 8-bit 3-channel input/output images are supported in function 'cvInpaint'

It says the data formats are not supported; however, when I print them out, the input image and the mask are both 'uint8'. I don't know which part is wrong. Any ideas?
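
One possible cause, if the inpainting snippet is combined with the get_texture function above: get_texture builds actual_part as H x W x 3 (channel-last), while the snippet assumes 3 x H x W (channel-first), so actual_part.sum(0) and the transposes hand cv2.inpaint an array shape it does not support, regardless of the dtype. A channel-last variant would look like this (a sketch):

import cv2
import numpy as np

def inpaint_part_hwc(actual_part, radius_increase=10):
    # actual_part: H x W x 3 float array, as built by get_texture above.
    valid_mask = np.array((actual_part.sum(2) != 0) * 1, dtype='uint8')  # sum over channels
    kernel = np.ones((radius_increase, radius_increase), np.uint8)
    dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
    invalid_region = 1 - valid_mask
    pmin, pmax = actual_part.min(), actual_part.max()
    if pmax == pmin:
        return actual_part
    part_uint = np.array((actual_part - pmin) / (pmax - pmin) * 255, dtype='uint8')
    # Already channel-last, so no transposes are needed around cv2.inpaint.
    part_uint = cv2.inpaint(part_uint, invalid_region, 1, cv2.INPAINT_TELEA)
    filled = (part_uint / 255.0) * (pmax - pmin) + pmin
    return filled * dilated_mask[:, :, None]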

shen113 commented 5 years ago

@kekedan @viperit Can anyone provide me with the complete notebook, please? I really have no idea how to modify the code. Thanks!!

AndroXD commented 5 years ago

Can some kind developer please share a working notebook for this? It would instantly cure the headache I've developed after days of fighting another problem out of pure brute stubbornness. That fight drained me quite a lot, and I just want to see the end of the tunnel; any help to lessen this burden would be much appreciated. Thanks!

garimss commented 5 years ago

@kekedan Maybe the detection is wrong. You should show the IUV and segmentation results to be sure. In my experience, the pre-trained DensePose model can produce faulty results on blurry images.

Thanks. The results look like this:

IUV:

result:

You can see the result is wrong. Any advice? Thanks!

@kekedan Can you please show the full code for this, and how to use it with https://github.com/facebookresearch/DensePose/blob/master/notebooks/DensePose-RCNN-Texture-Transfer.ipynb?

I want to use my own texture. How can I apply your code with my own image?
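
For the forward direction (painting a texture atlas onto a person in an image), the texture-transfer notebook is the reference; a minimal sketch of the idea, assuming an atlas laid out exactly as get_texture above produces it (6 rows x 4 columns of tile x tile tiles) and an IUV result for the target image:

import cv2
import numpy as np

def apply_texture(im, IUV, atlas, tile=200):
    # im: H x W x 3 target image; IUV: H x W x 3 DensePose result for im;
    # atlas: (6*tile) x (4*tile) x 3 texture in the get_texture layout.
    out = im.copy()
    for part_ind in range(1, 25):
        # Locate this part's tile in the atlas (inverse of the tiling loop).
        i, j = (part_ind - 1) // 6, (part_ind - 1) % 6
        part_tex = atlas[tile * j:tile * (j + 1), tile * i:tile * (i + 1)]
        x, y = np.where(IUV[:, :, 0] == part_ind)
        if len(x) == 0:
            continue
        rows = ((255 - IUV[x, y, 2]) * (tile - 1.) / 255.).astype(int)
        cols = (IUV[x, y, 1] * (tile - 1.) / 255.).astype(int)
        out[x, y] = part_tex[rows, cols]
    return out

# out = apply_texture(cv2.imread('my_im.jpg'), cv2.imread('my_im_IUV.png'),
#                     cv2.imread('my_atlas.png'))
# cv2.imwrite('textured.png', out)

This is only a nearest-texel lookup; the notebook's own transfer code is the authoritative version.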

hysoka79 commented 5 years ago

@garimss I have combined the above code with DensePose: https://colab.research.google.com/drive/1KJ0VucKXD9-nwWPL8oHrl-Zk-iOMdOLt Hope it helps. \ouo/

kuboshizuma commented 4 years ago

I created a Python library ( https://github.com/kuboshizuma/UVTextureConverter ) to make the conversion easy, and the notebook in it should help resolve this issue. I hope it is useful for anyone still struggling with this.