iamhankai / ghostnet.pytorch

[CVPR2020] GhostNet: More Features from Cheap Operations
https://arxiv.org/abs/1911.11907

Why is the k*k depthwise conv only used for downsampling? #11

Closed LightToYang closed 4 years ago

LightToYang commented 4 years ago

MobileNetV3 applies a k*k depthwise conv in each bottleneck, but GhostNet does not. Does the d*d cheap operation work like the k*k depthwise conv for extending the receptive field?

iamhankai commented 4 years ago

You're right. The d*d cheap operation not only generates the ghost feature maps but also extends the receptive field, working like the k*k depthwise conv in a MobileNetV3 bottleneck.
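
For readers following the thread, here is a minimal PyTorch sketch of the idea being discussed. The class and parameter names (`GhostModuleSketch`, `ratio`, `dw_size`) are illustrative assumptions, not necessarily the repo's exact implementation: a 1x1 primary conv produces a few intrinsic maps, and a d*d depthwise conv (the cheap operation) generates the remaining "ghost" maps while also adding spatial receptive field.

```python
import math
import torch
import torch.nn as nn


class GhostModuleSketch(nn.Module):
    """Illustrative sketch of a Ghost module (not the repo's exact code).

    The d*d depthwise conv below plays two roles: it cheaply generates
    ghost feature maps from the intrinsic ones, and, being a spatial
    conv, it enlarges the receptive field much like the k*k depthwise
    conv in a MobileNetV3 bottleneck.
    """

    def __init__(self, inp, oup, ratio=2, dw_size=3, stride=1):
        super().__init__()
        init_channels = math.ceil(oup / ratio)      # intrinsic maps
        new_channels = init_channels * (ratio - 1)  # ghost maps
        self.oup = oup

        # primary pointwise conv: the "expensive" part, few channels
        self.primary_conv = nn.Sequential(
            nn.Conv2d(inp, init_channels, 1, stride, 0, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # cheap operation: d*d depthwise conv over the intrinsic maps
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1,
                      dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
        return out[:, :self.oup, :, :]


# Example usage (shapes chosen arbitrarily):
# m = GhostModuleSketch(16, 32)
# y = m(torch.randn(1, 16, 56, 56))   # -> (1, 32, 56, 56)
```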