Open purple7seven opened 5 years ago
@purple7seven For the damaged images, how do you provide the mask? Did you create a mask for the damaged image?
Yes. I input the damaged image and the corresponding mask image into the edge generator. The edge generator generates the masked edge but not the repaired edge. For example, if I input a damaged image with a round masked region and the corresponding mask image, the output of the edge generator is an edge image with a round edge in the masked region.
@purple7seven Can you post your damaged image and the mask here?
My damaged image is:
My mask image is:
The output I obtain from the edge generator is:
But when I input this image:
I obtain this generated edge information from the same edge generator:
@purple7seven Ok, now make sure the damaged area is also colored white, because this was how the training dataset was provided. But there are two problems with your mask: it appears to have been saved as a .jpg file and then converted to .png, and it is not strictly binary. Please make sure your mask is constructed only with zeros and ones (binary). You can read a similar issue here: https://github.com/knazeri/edge-connect/issues/38
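A minimal sketch of how such a mask can be cleaned up before feeding it to the model (the array here is synthetic and the 127 threshold is an assumption; a real mask saved through JPEG picks up similar intermediate grey values from compression):

```python
import numpy as np

# Simulate a mask that picked up JPEG compression noise: values that
# should be exactly 0 or 255 have drifted to nearby grey levels.
noisy_mask = np.array([[3, 250, 128],
                       [0, 255, 130]], dtype=np.uint8)

# Threshold back to a strictly binary mask
# (0 = known pixels, 255 = missing region to inpaint).
binary_mask = np.where(noisy_mask > 127, 255, 0).astype(np.uint8)

print(np.unique(binary_mask))  # only two values remain: [  0 255]
```

The same mask can then be used to whiten the damaged area of the input image, e.g. `image[binary_mask == 255] = 255`, and the result should be saved as a lossless .png so it stays binary.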
Thanks for your code! There is a problem: when I test the model with damaged images, I can't get the generated edge information through the edge generator. Likewise, I can't get an inpainted image through the image inpainting network. But when I input an undamaged image and a mask image, the picture is inpainted correctly. If I want to input a damaged image directly to test the network, what should I do?