I went through more of your code under the cifar directory and found that you have implemented your own variant of GoogLeNet, in which the 5x5 convolutions are replaced by two 3x3 convolutions and there is an extra inception module, inception_3c.
It would have been nice to mention these deviations from the original paper. Leaving this note here, together with a small sketch below, for any other user confused when comparing this code against the original paper.
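For reference, here is a minimal sketch of what the substitution amounts to (written in PyTorch with made-up channel counts; this is not the repo's actual code): the standard 5x5 branch of an inception module next to the same branch expressed as two stacked 3x3 convolutions, which cover the same 5x5 receptive field.

```python
import torch
import torch.nn as nn

def five_by_five_branch(in_ch, reduce_ch, out_ch):
    """Original GoogLeNet-style branch: 1x1 reduce followed by a 5x5 conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, reduce_ch, kernel_size=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(reduce_ch, out_ch, kernel_size=5, padding=2),
        nn.ReLU(inplace=True),
    )

def double_three_by_three_branch(in_ch, reduce_ch, out_ch):
    """Variant used in this repo: the 5x5 conv replaced by two stacked 3x3 convs."""
    return nn.Sequential(
        nn.Conv2d(in_ch, reduce_ch, kernel_size=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(reduce_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

# Both branches map (N, in_ch, H, W) to (N, out_ch, H, W); channel sizes here
# are illustrative only.
x = torch.randn(1, 192, 28, 28)
print(five_by_five_branch(192, 16, 32)(x).shape)          # torch.Size([1, 32, 28, 28])
print(double_three_by_three_branch(192, 16, 32)(x).shape) # torch.Size([1, 32, 28, 28])
```

Replacing a 5x5 with two stacked 3x3s keeps the output spatial size unchanged while using fewer weights per branch (roughly 18·c² vs. 25·c² for c channels in and out) and adding one extra nonlinearity.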
I found that your paper reports GoogLeNet's inception module as containing one 3x3 convolution followed by two consecutive 3x3 convolution layers, whereas I could not find those two consecutive 3x3 convolution layers in GoogLeNet's original paper.
Do you have a reason for that, or are you referring to some other variant of the GoogLeNet architecture?