d-li14 / involution

[CVPR 2021] Involution: Inverting the Inherence of Convolution for Visual Recognition, a brand new neural operator
https://arxiv.org/abs/2103.06255
MIT License
1.31k stars · 177 forks

The calculation of the FLOPs #14

Closed · peterzpy closed this 3 years ago

peterzpy commented 3 years ago

This is interesting work, but I wonder why the FLOPs of RedNet are lower than those of ResNet at the same number of layers. It seems that, compared with ResNet, RedNet must additionally generate the involution kernels and then slide them across the feature map, just like a convolution operation.

d-li14 commented 3 years ago

It is because the involution kernels are shared across channels and generated by 1x1 convolutions (i.e., linear transformations), so the generation cost is small. In any case, you can practically calculate the FLOPs of our provided network architectures yourself.
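To make the comparison concrete, here is a rough counting sketch. It assumes the paper's kernel-generation design (two stacked 1x1 convolutions with a channel-reduction ratio `r`, and kernels shared within groups of channels); the function names, and the default `r` and group count `g`, are illustrative choices, not the repo's API.

```python
def conv_flops(h, w, c_in, c_out, k):
    # Multiply-accumulates of a standard KxK convolution
    # (stride 1, "same" padding): one C_in * K * K dot product
    # per output channel per spatial position.
    return h * w * c_in * c_out * k * k

def involution_flops(h, w, c, k, r=4, g=4):
    # Kernel generation at each position: a 1x1 conv reducing
    # C -> C/r, then a 1x1 conv producing K*K weights per group.
    gen = h * w * (c * (c // r) + (c // r) * k * k * g)
    # Aggregation: each of the C channels is weighted over its
    # KxK neighborhood with the (group-shared) generated kernel.
    agg = h * w * c * k * k
    return gen + agg

# Example at a ResNet stage-1 resolution: even a 7x7 involution
# costs fewer multiply-accumulates than a 3x3 convolution here,
# because the K*K spatial extent is not multiplied by C_out.
print(conv_flops(56, 56, 64, 64, 3))       # ~115.6M
print(involution_flops(56, 56, 64, 7))     # ~23M
```

The key point the example surfaces: a convolution pays `C_in * C_out * K^2` per position, while involution pays roughly `C^2 / r` for kernel generation plus `C * K^2` for aggregation, so the expensive channel-mixing term no longer scales with the spatial kernel size.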