Open walbermr opened 4 years ago
Wow, it is truly a bug. However, I am afraid such small masks deserve no shift at all. In fact, the information flow of convolution propagates from the outside to the inside little by little. When the mask is very big, it is hard for the network to directly predict the missing content; in that case, the shift operation lets the inpainting adopt not only local context but also global context. However, when the mask is very small, the shift is not necessary at all, so you can simply skip the shift operation whenever the mask vanishes in the reduction.
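The suggested fix could be sketched as a guard around the shift call. This is only an illustrative sketch, not the repo's actual code: `maybe_shift` and `shift_op` are hypothetical names, and the mask is assumed to be a boolean array where `True` marks missing pixels.

```python
import numpy as np

def maybe_shift(features, mask, shift_op):
    """Apply shift_op only when the (possibly downsampled) mask still
    contains masked pixels; otherwise pass features through unchanged.

    maybe_shift and shift_op are illustrative names, not the repo's API.
    """
    if mask is None or not np.any(mask):
        # Empty mask: the shift would have nothing to match against,
        # so skip it entirely instead of crashing.
        return features
    return shift_op(features, mask)
```

With this guard, a fully-known image (or a mask that disappeared during downsampling) simply bypasses the shift instead of searching for a nonexistent masked region.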
Indeed. So, I will code the fix and re-run some tests to make a pull request later. Thanks for the fast answer!
Hello, I was studying your work and during some tests I found that the current approach has issues with the size of the masks: if there is no mask at all, the shift will still try to find one in the latent space, and since the no-mask case is not handled there, the code crashes. The same problem appears when the mask is small enough to be erased by downsampling: at some reduction during the compression phase the mask disappears entirely, and the code crashes again.
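The vanishing-mask case is easy to reproduce. A minimal sketch, assuming each encoder layer halves the spatial resolution (here mimicked with nearest-neighbour strided slicing; `downsample_mask` is a hypothetical helper, not the repo's code):

```python
import numpy as np

def downsample_mask(mask, factor=2):
    # Nearest-neighbour downsampling: keep every `factor`-th row/column,
    # a stand-in for the resolution halving done by each encoder layer.
    return mask[::factor, ::factor]

# A tiny 1-pixel hole in an 8x8 mask.
mask = np.zeros((8, 8), dtype=bool)
mask[3, 3] = True

# One reduction keeps only even-indexed rows/columns, so the pixel at
# (3, 3) is dropped and the mask becomes empty in the latent space.
reduced = downsample_mask(mask)
```

After this reduction the latent mask contains no masked pixels at all, which is exactly the state that makes the shift operation crash.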
What modifications can be done to remove that issue? Currently I am inserting an 8x8-pixel mask in an irrelevant part of the image, but that is not optimal. As I have not yet thought of a better solution, I am asking you for a better approach. I am open to developing it and making a pull request in your repo.