Open robbiebarrat opened 6 years ago

In the Jupyter notebook for texture transfer, it talks about how to map the textures from the dataset onto models, but is there any way to generate these unwrapped textures from the images?

Also, when I say "textures" I mean these [image] - would love to be able to generate these from the models, or something similar to them.
I have the same question. Given a 2D image and the estimated UV coordinates for it, how can I get the unwrapped textures of the humans in this 2D image? Is there any code in this repo, or any tools/open-source code you'd recommend, that could achieve this? Thanks!
@vkhalidov @ralpguler please - any pointers?
@robbiebarrat Hello, is the problem solved?
@kinsou unfortunately not
Adapting the code from here to do the inverse (https://github.com/facebookresearch/DensePose/blob/master/notebooks/DensePose-RCNN-Texture-Transfer.ipynb) works:
U = IUV[1, :, :]
V = IUV[2, :, :]
parts = list()
for PartInd in range(1, 25):  ## Set to range(1, 23) to ignore the face parts.
    actual_part = np.zeros((3, 200, 200))
    x, y = np.where(IUV[0, :, :] == PartInd)
    if len(x) == 0:
        parts.append(actual_part)
        continue
    u_current_points = U[x, y]  # Pixels that belong to this specific part.
    v_current_points = V[x, y]
    ##
    tex_map_coords = ((255 - v_current_points) * 199. / 255.).astype(int), (u_current_points * 199. / 255.).astype(int)
    for c in range(3):
        actual_part[c, tex_map_coords[0], tex_map_coords[1]] = image[c, x, y]
    parts.append(actual_part)
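For anyone adapting this: a minimal loading sketch (not from the original comment; the file names are hypothetical). The snippet above indexes channel-first (3, H, W) arrays, so images read with cv2 need a transpose first:

import cv2
import numpy as np

# Hypothetical file names; tools/infer_simple.py writes the IUV map alongside the input image.
image = cv2.imread('demo_im.jpg').transpose(2, 0, 1)    # (3, H, W)
IUV = cv2.imread('demo_im_IUV.png').transpose(2, 0, 1)  # channel 0: part index, 1: U, 2: V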
You are a genius! @mbaradad
@mbaradad @viperit I'm afraid I'm not as much of a genius - which lines of that notebook do I replace with this code chunk? I've tried a few different things but I haven't gotten it to work... If anyone has a gist of the full modified notebook I'd be really grateful. Thanks!!
Hi @robbiebarrat, I have some code for this task. It works for me; maybe it will help you.

def get_texture(im, IUV, solution=32):
    # inputs:
    #   im       - the original image
    #   IUV      - the DensePose result for im (H x W x 3: part index, U, V)
    #   solution - the size of the generated texture; the notebook provided by
    #              facebookresearch uses 200. A larger solution gives a sparser
    #              texture, a smaller one a denser texture.
    # output:
    #   TextureIm - the 24-part texture atlas of im according to IUV
    solution_float = float(solution) - 1
    U = IUV[:, :, 1]
    V = IUV[:, :, 2]
    parts = list()
    for PartInd in range(1, 25):  ## Set to range(1, 23) to ignore the face parts.
        actual_part = np.zeros((solution, solution, 3))
        x, y = np.where(IUV[:, :, 0] == PartInd)
        if len(x) == 0:
            parts.append(actual_part)
            continue
        u_current_points = U[x, y]  # Pixels that belong to this specific part.
        v_current_points = V[x, y]
        ##
        tex_map_coords = ((255 - v_current_points) * solution_float / 255.).astype(int), (u_current_points * solution_float / 255.).astype(int)
        for c in range(3):
            actual_part[tex_map_coords[0], tex_map_coords[1], c] = im[x, y, c]
        parts.append(actual_part)
    TextureIm = np.zeros([solution * 6, solution * 4, 3])
    for i in range(4):
        for j in range(6):
            TextureIm[(solution * j):(solution * j + solution), (solution * i):(solution * i + solution), :] = parts[i * 6 + j]
    plt.figure(figsize=(25, 25))
    plt.imshow(TextureIm.transpose([1, 0, 2])[:, :, ::-1] / 255)
    return TextureIm
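A minimal usage sketch (not from the original comment; file names are hypothetical), assuming im and the IUV map are read with cv2 as channel-last (H, W, 3) arrays:

import cv2
import matplotlib.pyplot as plt
import numpy as np

im = cv2.imread('demo_im.jpg')                  # (H, W, 3) original image
IUV = cv2.imread('demo_im_IUV.png')             # (H, W, 3) DensePose IUV output
TextureIm = get_texture(im, IUV, solution=200)  # 200 matches the official notebook
plt.show()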
Hi, thank you for your code, but I have an issue: I find the body parts' positions in the texture image differ from the demo. Maybe I'm using it wrong?
@kekedan Maybe the detection is wrong. You should show the IUV and segmentation results to make it clear. In my practice, I found that the pre-trained DensePose model can fail on blurry images.
Thanks. The result looks like this:
IUV: [image]
Result: [image]
As you can see, the result is wrong. Any advice? Thanks
@kekedan you need to interpolate the zero values in the convex hull of each part (or just upsample the original image), for example using cv2.inpaint as below (this goes inside the part loop, replacing the final parts.append(actual_part); requires import cv2):

valid_mask = np.array((actual_part.sum(0) != 0) * 1, dtype='uint8')  # texels that received a color
radius_increase = 10
kernel = np.ones((radius_increase, radius_increase), np.uint8)
dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
region_to_fill = dilated_mask - valid_mask
invalid_region = 1 - valid_mask
actual_part_max = actual_part.max()
actual_part_min = actual_part.min()
# Normalize to uint8 for cv2.inpaint, fill the holes, then undo the normalization.
actual_part_uint = np.array((actual_part - actual_part_min) / (actual_part_max - actual_part_min) * 255, dtype='uint8')
actual_part_uint = cv2.inpaint(actual_part_uint.transpose((1, 2, 0)), invalid_region, 1, cv2.INPAINT_TELEA).transpose((2, 0, 1))
actual_part = (actual_part_uint / 255.0) * (actual_part_max - actual_part_min) + actual_part_min
# Only keep the dilated part.
actual_part = actual_part * dilated_mask
parts.append(actual_part)
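A sketch of the same idea wrapped as a standalone helper (the function name is mine, not from the thread); it assumes the channel-first (3, S, S) part layout used above and guards against empty parts, which would otherwise make the min-max normalization divide by zero:

import cv2
import numpy as np

def inpaint_part(actual_part, radius_increase=10):
    # Mask of texels that received at least one pixel from the image.
    valid_mask = np.array((actual_part.sum(0) != 0) * 1, dtype='uint8')
    if valid_mask.max() == 0:
        return actual_part  # empty part: nothing to inpaint
    kernel = np.ones((radius_increase, radius_increase), np.uint8)
    dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
    invalid_region = 1 - valid_mask
    lo, hi = actual_part.min(), actual_part.max()
    # cv2.inpaint wants an 8-bit channel-last image, so normalize and transpose.
    part_uint = np.array((actual_part - lo) / (hi - lo + 1e-8) * 255, dtype='uint8')
    part_uint = cv2.inpaint(part_uint.transpose((1, 2, 0)), invalid_region,
                            1, cv2.INPAINT_TELEA).transpose((2, 0, 1))
    filled = (part_uint / 255.0) * (hi - lo) + lo
    return filled * dilated_mask  # only keep the dilated region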
@kekedan Hi, your IUV map looks right, but the extracted texture is wrong. Did you use the function I provided directly, or did you make some changes?
I made some changes because I found the result looked wrong. The following image is from running your function directly:
@kekedan Hi, good to see you again! I think my function's result is right. If you want a clearer result, just try a smaller solution.
Great, you are right! Thanks
@kekedan I found my problem! The block should be indented but it wasn't! Btw, I have tested your example by downloading the image and the IUV image, and I found that the IUV image is larger than the image. Is that right?
I corrected the indentation; the image I uploaded was cropped, so your code is right. But the result image is too blurred, and I found that @mbaradad's method can solve that. Thank you ~
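Side note, not from the thread: if the IUV map and the input image genuinely differ in size, resizing the image to the IUV map's size before extracting the texture avoids indexing errors. A minimal sketch:

import cv2

# cv2.resize takes (width, height), hence the swapped shape indices.
im = cv2.resize(im, (IUV.shape[1], IUV.shape[0]))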
@mbaradad Hi, thanks for your great code. However, when I use this chunk of code, I get the following error, related to the cv2.inpaint() function:
error: OpenCV(3.4.4) /io/opencv/modules/photo/src/inpaint.cpp:759: error: (-210:Unsupported format or combination of formats) 8-bit, 16-bit unsigned or 32-bit float 1-channel and 8-bit 3-channel input/output images are supported in function 'cvInpaint'
It says the data formats are not supported; however, what I print out shows that the input image and mask are both 'uint8'. I don't know which part is wrong. Any ideas?
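One likely cause (an assumption on my part, not confirmed in the thread): the inpainting snippet above indexes channel-first (3, S, S) parts, while get_texture() builds channel-last (solution, solution, 3) parts. Mixing the two makes actual_part.sum(0) collapse rows instead of channels, and the transposes then hand cv2.inpaint an image with S channels, which triggers exactly this error. A channel-last adaptation:

valid_mask = np.array((actual_part.sum(2) != 0) * 1, dtype='uint8')  # sum over channels, not rows
kernel = np.ones((10, 10), np.uint8)
dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
invalid_region = 1 - valid_mask  # (S, S) uint8 mask, as cv2.inpaint expects
lo, hi = actual_part.min(), actual_part.max()
part_uint = np.array((actual_part - lo) / (hi - lo + 1e-8) * 255, dtype='uint8')
part_uint = cv2.inpaint(part_uint, invalid_region, 1, cv2.INPAINT_TELEA)  # already (S, S, 3): no transposes
actual_part = ((part_uint / 255.0) * (hi - lo) + lo) * dilated_mask[:, :, None]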
@kekedan @viperit Or can someone provide me with the complete notebook, please? I really have no idea how to modify the code. Thanks!!
Can some kind developer please share a working notebook for this? It would instantly cure the headache I've developed after days of fighting another problem out of pure brute stubbornness. It has drained me quite a lot, and I just wish to see the end of the tunnel; any help to lessen this burden would be much appreciated. Thanks!
@kekedan Can you please show the full code for this? How do I use it with https://github.com/facebookresearch/DensePose/blob/master/notebooks/DensePose-RCNN-Texture-Transfer.ipynb?
I want to use my own texture. How can I apply your code to my own image?
@garimss I have combined the above code with DensePose: https://colab.research.google.com/drive/1KJ0VucKXD9-nwWPL8oHrl-Zk-iOMdOLt Hope it helps. \ouo/
I created a Python library ( https://github.com/kuboshizuma/UVTextureConverter ) to make the conversion easy, and the notebook in it should help resolve this issue. I hope it will be useful for those who are still struggling.