huawei-noah / Efficient-AI-Backbones

Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

Wave-MLP looks like it uses depth-wise conv (continued) #193

Open Phuoc-Hoan-Le opened 1 year ago

Phuoc-Hoan-Le commented 1 year ago

Hi,

Following up on https://github.com/huawei-noah/Efficient-AI-Backbones/issues/191, I am still questioning how a 1xK/Kx1 depth-wise convolution can be directly translated into a pure matrix multiplication, or how Wave-MLP qualifies as an MLP model.

I understand that the window size has to be limited to handle dense prediction tasks with varying input image sizes, but I am still wondering how a 1xK/Kx1 depth-wise convolution can be directly translated into a pure matrix multiplication. From what I know, MLP models such as MLP-Mixer, ResMLP, etc., don't share weights among pixels/patches, but they do share weights among channels.

In other words, for MLP-based models and even Swin Transformers, each pixel/patch has its own filter weights, but those weights are shared across the channel dimension.
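
To make the contrast I am describing concrete, here is a minimal PyTorch sketch (shapes `B`, `C`, `N`, `K` and both layers are my own illustrative choices, not code from this repo): an MLP-Mixer/ResMLP-style token-mixing layer shares one weight matrix across channels while giving each token position its own weights, whereas a 1xK depth-wise convolution gives each channel its own kernel and shares it across positions.

```python
import torch
import torch.nn as nn

# Illustrative shapes (assumptions): B batches, C channels, N tokens (pixels/patches).
B, C, N = 2, 64, 196
x = torch.randn(B, C, N)

# MLP-style token mixing (as in MLP-Mixer / ResMLP): a single N x N weight matrix
# applied along the token dimension. Every token position has its own row of weights,
# and the same matrix is reused for every channel.
token_mix = nn.Linear(N, N, bias=False)
y_mlp = token_mix(x)            # (B, C, N); weights shared across channels only

# 1xK depth-wise convolution: each channel has its own K-tap kernel, and that kernel
# is slid over all token positions, i.e. weights are shared across positions.
K = 7
dw_conv = nn.Conv1d(C, C, kernel_size=K, padding=K // 2, groups=C, bias=False)
y_dw = dw_conv(x)               # (B, C, N); weights shared across positions, not channels

print(y_mlp.shape, y_dw.shape)  # both torch.Size([2, 64, 196])
```

My question is essentially whether the second pattern can still be called a pure matrix multiplication in the MLP sense, given that its weight sharing is the opposite of the first.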