Zhaoyi-Yan / Shift-Net_pytorch

Pytorch implementation of Shift-Net: Image Inpainting via Deep Feature Rearrangement (ECCV, 2018)
http://openaccess.thecvf.com/content_ECCV_2018/papers/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.pdf
MIT License

Inference with images not 256x256 #92

Closed ThJOD closed 5 years ago

ThJOD commented 5 years ago

Hey, thank you for publishing the code for your project.

I am trying to run inference on images that are not 256x256 pixels, but of arbitrary size. It seems that if I do, the program automatically crops out a 256x256 region and uses that.

Is it possible by tweaking some code to run inference for images of arbitrary size?

Thank you and best regards ThJOD

Zhaoyi-Yan commented 5 years ago

Hi ThJOD, in fact the U-Net can handle images of arbitrary size. However, the best practice is to train the model by feeding it images of different sizes; otherwise, a pretrained model will not give you good results. Here are some tips on how to load the images without cropping and resizing: change the code at https://github.com/Zhaoyi-Yan/Shift-Net_pytorch/blob/master/data/aligned_dataset.py#L33-L64

    # delete the random cropping and resizing code above
    if (not self.opt.no_flip) and random.random() < 0.5:
        # flip A horizontally: gather width indices size(2)-1, size(2)-2, ..., 0
        idx = torch.LongTensor(list(range(A.size(2) - 1, -1, -1)))
        A = A.index_select(2, idx)

    # let B directly equal A
    B = A.clone()

    # just zeroing the mask is fine if not offline_loading_mask
    mask = A.clone().zero_()
    if self.opt.offline_loading_mask:
        mask = Image.open(self.mask_paths[random.randint(0, len(self.mask_paths) - 1)])
        # mask = mask.resize((self.opt.fineSize, self.opt.fineSize), Image.NEAREST)
        mask = transforms.ToTensor()(mask)
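For reference, the manual `index_select` flip in the snippet above can also be written with `torch.flip`; a quick sketch checking the two are equivalent on a toy (C, H, W) tensor:

```python
import torch

# A toy (C, H, W) tensor standing in for image A in the snippet above
A = torch.arange(24, dtype=torch.float32).view(2, 3, 4)

# Manual flip from the snippet: gather width indices in reverse order
idx = torch.LongTensor(list(range(A.size(2) - 1, -1, -1)))
manual = A.index_select(2, idx)

# Equivalent one-liner in modern PyTorch
flipped = torch.flip(A, dims=[2])

assert torch.equal(manual, flipped)
```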
ThJOD commented 5 years ago

Hey,

thank you very much! Inference worked like a charm with the proposed changes. Training had trouble with odd dimensions, e.g. an image of size (1028, 539, 3), so I opted for resizing them to the closest lower power of 2 for now.

Best regards ThJOD

Zhaoyi-Yan commented 5 years ago

No need for the closest lower power of 2; you only need to ensure that h and w are odd numbers and not less than 256. Then it will work.
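As the later replies in this thread clarify, the actual constraint is that H and W be even (divisible by 2) and at least 256. A minimal sketch of a hypothetical helper that rounds dimensions up accordingly (my own illustration, not code from the repo):

```python
def round_up_dims(h, w, minimum=256):
    # Hypothetical helper: make H and W even and at least `minimum`.
    # Assumption from this thread: even dimensions >= 256 suffice for
    # this U-Net; deeper nets may need multiples of 2^depth instead.
    h = max(h + (h % 2), minimum)
    w = max(w + (w % 2), minimum)
    return h, w

print(round_up_dims(1028, 539))  # the (1028, 539) image from above -> (1028, 540)
```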

Zhaoyi-Yan commented 5 years ago

Besides, when training your own model, you need to slightly change the code at https://github.com/Zhaoyi-Yan/Shift-Net_pytorch/blob/master/models/shift_net/shiftnet_model.py, as it is only for 256x256. You'd better generate your own masks for your own dataset, as the sizes are not the usual 256x256. You can generate your masks offline and test the images with them.
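For the offline mask generation mentioned above, here is a minimal sketch (my own illustration, not the repo's mask code) that builds one binary mask with a single random rectangular hole; such masks can then be saved to disk and loaded through the `opt.offline_loading_mask` branch shown earlier:

```python
import random

def random_rect_mask(h, w, hole_frac=0.25, seed=None):
    # Hypothetical helper: binary mask (1 = hole, 0 = keep) with one
    # random rectangle whose sides cover hole_frac of each dimension.
    rng = random.Random(seed)
    hh, hw = int(h * hole_frac), int(w * hole_frac)
    top = rng.randint(0, h - hh)
    left = rng.randint(0, w - hw)
    mask = [[0] * w for _ in range(h)]
    for r in range(top, top + hh):
        for c in range(left, left + hw):
            mask[r][c] = 1
    return mask

m = random_rect_mask(64, 64, seed=0)
print(sum(sum(row) for row in m))  # 16x16 hole -> 256 hole pixels
```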

ThJOD commented 5 years ago

> No need for the closest lower power of 2; you only need to ensure that h and w are odd numbers and not less than 256. Then it will work.

Do you mean even numbers? So they should be divisible by 2? OK, I will try that out tomorrow!

> Besides, when training your own model, you need to slightly change the code at https://github.com/Zhaoyi-Yan/Shift-Net_pytorch/blob/master/models/shift_net/shiftnet_model.py, as it is only for 256x256. You'd better generate your own masks for your own dataset, as the sizes are not the usual 256x256. You can generate your masks offline and test the images with them.

I actually already changed that code to train on my own masks before I submitted this issue, but yes, that is also necessary.

Thanks again!

Zhaoyi-Yan commented 5 years ago

Yes, even number...