sail-sg / metaformer

MetaFormer Baselines for Vision (TPAMI 2024)
https://arxiv.org/abs/2210.13452
Apache License 2.0

metaformer has no positional encoding? #14

Open bio-mlhui opened 4 months ago

bio-mlhui commented 4 months ago

I notice that MetaFormer has no positional encoding (PE), either in the attention layers or at the model input. Does this affect the performance? Is positional encoding not necessary? What if MetaFormer were equipped with a 2D sin-cos or learned PE?

yuweihao commented 4 months ago

@bio-mlhui, thanks for your attention.

For ConvFormer, a pure CNN model, positional encoding is not necessary.

For CAFormer, its first two stages are conv, so each patch "knows" which patches are nearby. I remember that adding positional encoding after the first two stages and before the third stage did not influence the performance on ImageNet. For simplicity, I did not add positional encoding in my implementation.
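For anyone who wants to run the experiment the question proposes, a fixed 2D sin-cos embedding (in the style popularized by ViT/MAE) could be added to the token features entering stage 3. The sketch below is illustrative and not part of the metaformer repo; function names and the choice of NumPy are assumptions, and in practice the result would be converted to a tensor and added to the stage-3 input.

```python
import numpy as np

def sincos_1d(embed_dim, positions):
    # 1D sin-cos embedding: for each position, embed_dim//2 sine
    # channels and embed_dim//2 cosine channels at log-spaced frequencies.
    omega = np.arange(embed_dim // 2, dtype=np.float64)
    omega = 1.0 / 10000 ** (omega / (embed_dim / 2))
    out = np.outer(positions, omega)                  # (N, embed_dim//2)
    return np.concatenate([np.sin(out), np.cos(out)], axis=1)

def sincos_2d(embed_dim, h, w):
    # 2D version: half the channels encode the row index,
    # the other half encode the column index.
    grid_h = np.repeat(np.arange(h), w)               # row index per token
    grid_w = np.tile(np.arange(w), h)                 # column index per token
    emb_h = sincos_1d(embed_dim // 2, grid_h)
    emb_w = sincos_1d(embed_dim // 2, grid_w)
    return np.concatenate([emb_h, emb_w], axis=1)     # (h*w, embed_dim)

# e.g. a 14x14 token grid with 64-dim features:
pe = sincos_2d(64, 14, 14)                            # shape (196, 64)
# the embedding would then be added to the tokens: x = x + pe
```

Since the embedding is deterministic, it adds no parameters; a learned PE variant would instead allocate an `(h*w, embed_dim)` parameter table initialized randomly.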