peter943 opened this issue 5 years ago
Hi @peter943, that's correct! The patches do overlap by 1 pixel (the stride is 8). For 17 x 17 and 33 x 33 patches the same is true: the RFs are always shifted by 8 pixels, i.e. the overlap of the patches is even larger there. This reduces the amount of shift-invariance the network needs to learn and enables it to really take into account all features at a certain length scale.
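A minimal sketch (not from the BagNet code, just the arithmetic described above) that makes the overlap explicit: with neighbouring receptive fields shifted by the stride of 8 pixels, adjacent q x q patches share q - 8 pixels along each axis.

```python
# Overlap between neighbouring receptive fields when the network's
# total stride is 8 pixels (as in BagNet-9/17/33).

def patch_overlap(patch_size, stride=8):
    """Pixels shared by two neighbouring receptive fields along one axis."""
    return max(patch_size - stride, 0)

for q in (9, 17, 33):  # BagNet-9 / BagNet-17 / BagNet-33
    print(f"BagNet-{q}: neighbouring patches overlap by {patch_overlap(q)} px")

# BagNet-9:  neighbouring patches overlap by 1 px
# BagNet-17: neighbouring patches overlap by 9 px
# BagNet-33: neighbouring patches overlap by 25 px
```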
@wielandbrendel Thanks for your prompt reply. I now have a deeper understanding of BagNets.
Hi! Thank you for your reply to my question about the input of BagNets. I went through your code today and calculated the receptive field of BagNet-9 with the Fomoro AI calculator. The results are as follows: the input size is 224×224, and I ignored many layers (stride = 1 and kernel size = 1, since they do not change the receptive field). The receptive field is 9×9, but the output size is 27×27. This means the receptive fields of different neurons overlap, because 27×9 = 243 > 224. I think there is something wrong with my results, but I cannot find the problem. I would appreciate any advice on how to resolve this.
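For reference, here is a minimal sketch of the standard receptive-field recurrence that calculators such as Fomoro AI apply, using a hypothetical layer list (not the actual BagNet-9 definition) that reproduces the numbers quoted above (receptive field 9, total stride 8), plus the coverage check that resolves the 27×9 = 243 concern: neighbouring patches are shifted by the stride, not by the full patch size.

```python
# Receptive-field recurrence: each layer widens the RF by (k - 1) * jump,
# where jump is the distance (in input pixels) between adjacent neurons
# of the current feature map; strides multiply the jump.

def receptive_field(layers):
    """layers: list of (kernel_size, stride); returns (rf, total_stride)."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf, jump

# Hypothetical layer stack with the same totals the question reports.
layers = [(3, 1), (3, 2), (3, 2), (1, 2)]
rf, stride = receptive_field(layers)
print(rf, stride)  # 9 8

# Coverage check for a 224 x 224 input and a 27 x 27 output:
n_out = 27
coverage = stride * (n_out - 1) + rf
print(coverage)  # 8 * 26 + 9 = 217 <= 224: the patches fit, they just overlap.
```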