clelouch opened this issue 4 years ago
Since this was written a long time ago, I don't remember the exact details of the decisions that led me to this architecture. From what I know about Spatial Transformers, the localization net can be made as deep as you like, since all it does is downsample the input image. You just need to be careful about how much you downsample (e.g. one MaxPool2d is sufficient for CIFAR-10; more may be needed for ImageNet).
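For concreteness, here is a minimal sketch of the kind of localization net I mean for 32x32 CIFAR-10 inputs. The layer widths and depth are arbitrary choices for illustration, not what this repo uses; the only real constraint is the amount of downsampling:

```python
import torch
import torch.nn as nn

class LocNet(nn.Module):
    """Hypothetical localization net for 3x32x32 (CIFAR-10-sized) inputs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(True),
            nn.MaxPool2d(2),  # 32x32 -> 16x16: the only downsampling step
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(True),
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * 16 * 16, 32),
            nn.ReLU(True),
            nn.Linear(32, 6),  # the 6 parameters of a 2x3 affine matrix
        )
        # Initialize the regressor to the identity transform, as is
        # standard practice for STNs.
        self.fc[-1].weight.data.zero_()
        self.fc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc(self.features(x).flatten(1))
        return theta.view(-1, 2, 3)
```

For larger inputs like 224x224 ImageNet crops you would stack more pooling stages so the flattened feature vector feeding the regressor stays a manageable size.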
I agree with you that the module requires a bit more cleaning up. If you're interested, submit a PR and I'll merge it once we've discussed it there.
Thanks for your reply.
I don't know what PR means... can you explain it?
A PR is a pull request: just clone the repository, write your code on a separate branch, and submit it back to the repo.
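In case it helps, the usual GitHub flow looks something like this (the fork URL and branch name are placeholders):

```
git clone https://github.com/<your-fork>/<repo>.git
cd <repo>
git checkout -b fix-stn-module   # do your work on a separate branch
# ...edit files, git add, git commit...
git push origin fix-stn-module   # then open the pull request on GitHub
```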
Thanks for your reply. I just read the code in https://github.com/oarriaga/STN.keras. In https://github.com/oarriaga/STN.keras/issues/11, the author points out that the localization network architecture can be arbitrary, so I guess your code doesn't need modification.
Hi, thanks for your code. I want to plug the STN into a CNN, and after comparing your code with the code in the PyTorch tutorial, I found some differences. The tutorial code uses only two convolutional layers, while you use four. As far as I know, one 7x7 layer can be replaced with three 3x3 layers, yet you use two. The channel numbers are also different. I'm curious why you chose such an architecture. What's more, in STNModule.py lines 20-22, I guess there might be a bug?
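For reference, the localization branch from the PyTorch tutorial I compared against looks roughly like this (reproduced from memory for 1x28x28 MNIST inputs, so treat the exact channel counts as approximate):

```python
import torch.nn as nn

# Sketch of the PyTorch STN tutorial's localization branch: two conv
# layers, each followed by max-pooling, then a small regressor that
# outputs the 6 parameters of a 2x3 affine matrix.
localization = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=7),
    nn.MaxPool2d(2, stride=2),
    nn.ReLU(True),
    nn.Conv2d(8, 10, kernel_size=5),
    nn.MaxPool2d(2, stride=2),
    nn.ReLU(True),
)
fc_loc = nn.Sequential(
    nn.Linear(10 * 3 * 3, 32),  # 3x3 spatial size assumes 28x28 input
    nn.ReLU(True),
    nn.Linear(32, 3 * 2),
)
```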