KumapowerLIU / Rethinking-Inpainting-MEDFE

Rethinking Image Inpainting via a Mutual Encoder Decoder with Feature Equalizations. ECCV 2020 Oral

very bad prediction. #20

Open shoutOutYangJie opened 3 years ago

shoutOutYangJie commented 3 years ago

[image: exam1]

Hi, could you please run a prediction on this image? The mask can be obtained with the following code:

import cv2
import numpy as np
from PIL import Image

def get_mask(path):
    # Read the mask image (H, W, 3) and start from an all-black mask of the same shape.
    m = cv2.imread(path)
    new_mask = np.zeros(shape=m.shape, dtype=np.uint8)
    # Mark as masked only the pixels that are pure white (channel mean exactly 255).
    m = np.mean(m, axis=2)
    y, x = np.where(m == 255)
    new_mask[y, x] = 255
    new_mask = Image.fromarray(new_mask)
    return new_mask
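
For example (the file names here are hypothetical):

mask = get_mask('mask.png')      # hypothetical path to the mask image
mask.save('mask_for_model.png')  # hypothetical output path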

I get a very bad result with the pre-trained model for the "place2" dataset: [image: output]

KumapowerLIU commented 3 years ago

Very interesting! I think the mask type may be wrong: the mask does not cover the white regions in your image. I simply dilated the mask so that its boundary closely follows the boundary of the white regions, and the results look more reasonable (see the sketch after this comment). In the images below, from left to right: the image you provided, the actual input to the model, the output, and the mask. [images: input, input2, Places365_val_00000475, mask]

I also tested another mask that I have: [images: input, input2, Places365_val_00000475, 04003]

Your original image: [images: input, input2, Places365_val_00000475, mask]
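
A minimal sketch of the dilation step described above, assuming OpenCV's cv2.dilate; the kernel size and iteration count are illustrative guesses, not the values the author actually used:

import cv2
import numpy as np

def dilate_mask(mask, kernel_size=15, iterations=1):
    # Grow the white (masked) regions so the mask boundary reaches past the
    # white regions of the corrupted image. kernel_size and iterations are
    # assumptions; tune them until the hole is fully covered.
    kernel = np.ones((kernel_size, kernel_size), dtype=np.uint8)
    return cv2.dilate(mask, kernel, iterations=iterations)

# e.g. dilated = dilate_mask(np.array(get_mask('mask.png')))  # 'mask.png' is hypothetical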

shoutOutYangJie commented 3 years ago

@KumapowerLIU Yes, your result is better. Thank you for helping me. The reason is that the mask doesn't cover the white area. By the way, I am curious which tool you used to create the mask you showed.

[image]

codinglin commented 3 years ago

Hello author, I found that using a 128×128 mask in the center of the CelebA images does not give particularly good results, whereas a 120×120 mask achieves better results. Why is that?
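
A minimal sketch of how such a square center mask can be generated, assuming 256×256 CelebA crops (the image size is an assumption; adjust it to your setup):

import numpy as np
from PIL import Image

def center_mask(img_size=256, hole_size=128):
    # White square hole of hole_size x hole_size in the middle of a black mask.
    mask = np.zeros((img_size, img_size, 3), dtype=np.uint8)
    start = (img_size - hole_size) // 2
    mask[start:start + hole_size, start:start + hole_size] = 255
    return Image.fromarray(mask)

# Compare e.g. center_mask(hole_size=128) against center_mask(hole_size=120).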

fxcdl commented 3 years ago

Hello, could you tell me how you ran the test? Why is my test result exactly the same as the original image?