jiequancui / ResLT

ResLT: Residual Learning for Long-tailed Recognition (TPAMI 2022)
https://arxiv.org/pdf/2101.10633.pdf
MIT License

Why 1x1 Conv can be replaced with BN layer? #10

Closed: machengcheng2016 closed this issue 11 months ago

machengcheng2016 commented 11 months ago

Greetings! Really nice work, but I have a small question. You mention in a code comment that `# 1x1 conv can be replaced with more light-weight bn layer`. Why would the two be equivalent? As far as I can tell, a conv and a BN layer are completely different operations.

jiequancui commented 11 months ago

Hi,

Thanks for your interest in our work.

A 1x1 conv is not equivalent to a BN layer. The comment is only meant to stress that the branch network can be made much more lightweight.

Best, Jiequan
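(For anyone else reading this thread: the point about lightness can be made concrete by counting parameters. The sketch below is not code from the ResLT repo; it just computes the parameter counts of a square 1x1 conv versus a BatchNorm layer on the same number of channels.)

```python
# Hypothetical sketch (not from the ResLT codebase): parameter counts of a
# 1x1 conv vs. a BatchNorm layer on a feature map with C channels.

def conv1x1_params(channels, bias=False):
    # A 1x1 conv with C input and C output channels is a full
    # channel-mixing linear map: C x C weights (plus C biases if used).
    return channels * channels + (channels if bias else 0)

def bn_params(channels):
    # BatchNorm applies a per-channel affine transform: one learnable
    # scale (gamma) and one shift (beta) per channel.
    return 2 * channels

print(conv1x1_params(256))  # 65536
print(bn_params(256))       # 512
```

So for a typical 256-channel branch, BN carries roughly 128x fewer parameters, which is why it makes the branch "much more lightweight" even though the two operations are mathematically different (BN is per-channel, a 1x1 conv mixes channels).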

machengcheng2016 commented 11 months ago

Thanks for your quick reply! Now I see :) Have you tried using a 1x1 conv instead of BN in your experiments? Does BN yield better performance, or just similar?

jiequancui commented 11 months ago

Hi,

A 1x1 conv and a BN layer show similar results.

machengcheng2016 commented 11 months ago

OK, thank you 😄