Hey @Daming-TF, thanks for the issue. We are opening a bug and it will be fixed soon.
Hi @Daming-TF, there are some Squeeze-and-Excite variants that use a Conv op instead of Linear.
Conv and Linear are equivalent when applied to an input tensor with only one neuron per channel, which is exactly the output of the global average pooling operator (BxCx1x1, or simply BxC).
In both cases the layer's MAC count is Cin x Cout.
Some hardware accelerators do indeed favor the convolution over the fully connected layer, so in some cases the convolution version might run faster.
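A minimal sketch of that equivalence in PyTorch (the layer sizes here are illustrative assumptions, not values from this repo): a 1x1 Conv with the Linear layer's weights produces the same output on a BxCx1x1 tensor, and both cost Cin x Cout MACs per sample.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, just for illustration.
B, C_in, C_out = 4, 64, 16

linear = nn.Linear(C_in, C_out)

# 1x1 Conv with weights copied from the Linear layer.
conv = nn.Conv2d(C_in, C_out, kernel_size=1)
with torch.no_grad():
    conv.weight.copy_(linear.weight.view(C_out, C_in, 1, 1))
    conv.bias.copy_(linear.bias)

# Output of global average pooling: B x C_in x 1 x 1.
x = torch.randn(B, C_in, 1, 1)

out_conv = conv(x)                  # B x C_out x 1 x 1
out_linear = linear(x.flatten(1))   # B x C_out

# Same values, only the shape differs.
print(torch.allclose(out_conv.flatten(1), out_linear, atol=1e-6))  # True
```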
Hello @lkdci, oh that's true 😄, thanks, I got it.
Hello, the network structure of MobileNetV3 seems to differ from the original author's. The SE module in V3 is not the same one used in SENet; the author changed nn.Linear to nn.Conv.
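For reference, a minimal sketch of a Conv-based Squeeze-and-Excite block in the MobileNetV3 style (an assumption for illustration only, not the exact code of this repo or of the original paper):

```python
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """SE block using 1x1 Convs instead of nn.Linear (MobileNetV3 style)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        squeezed = max(1, channels // reduction)
        self.pool = nn.AdaptiveAvgPool2d(1)                 # B x C x 1 x 1
        self.fc1 = nn.Conv2d(channels, squeezed, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(squeezed, channels, kernel_size=1)
        self.gate = nn.Hardsigmoid()                        # MobileNetV3 uses hard-sigmoid

    def forward(self, x):
        scale = self.gate(self.fc2(self.act(self.fc1(self.pool(x)))))
        return x * scale
```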