Open bio-mlhui opened 4 months ago
I notice that MetaFormer has no positional encoding (PE), either in the attention layers or at the model input. Does this affect performance? Is positional encoding unnecessary? What if MetaFormer were equipped with a 2D sin-cos or learned PE?

@bio-mlhui, thanks for your attention.

For ConvFormer, a pure CNN model, positional encoding is not necessary.

For CAFormer, the first two stages are convolutional, so each patch already "knows" which patches are nearby. As I remember, adding positional encoding after the first two stages and before the third stage did not influence the performance on ImageNet. For simplicity, I did not add positional encoding in my implementation.
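For anyone who wants to reproduce the experiment described above, here is a minimal sketch of adding a fixed 2D sin-cos PE before the attention stages. It assumes the stage input is a channels-last `(B, H, W, C)` feature map as in the official MetaFormer implementation; the helper names `build_2d_sincos_pos_embed` and `AddPosEmbed` are hypothetical, not part of the repo.

```python
# Hypothetical sketch -- not part of the MetaFormer repo.
import torch
import torch.nn as nn


def build_2d_sincos_pos_embed(dim, h, w, temperature=10000.0):
    """Fixed 2D sin-cos positional embedding, shape (1, h * w, dim).

    Each spatial axis gets dim // 4 sin and dim // 4 cos channels.
    """
    assert dim % 4 == 0, "dim must be divisible by 4 for 2D sin-cos PE"
    grid_y, grid_x = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    pos_dim = dim // 4
    omega = 1.0 / temperature ** (torch.arange(pos_dim, dtype=torch.float32) / pos_dim)
    out_x = grid_x.flatten()[:, None] * omega[None, :]  # (h * w, pos_dim)
    out_y = grid_y.flatten()[:, None] * omega[None, :]
    pe = torch.cat(
        [torch.sin(out_x), torch.cos(out_x), torch.sin(out_y), torch.cos(out_y)],
        dim=1,
    )
    return pe.unsqueeze(0)


class AddPosEmbed(nn.Module):
    """Adds a fixed 2D sin-cos PE to a channels-last (B, H, W, C) feature map."""

    def __init__(self, dim, h, w):
        super().__init__()
        # Buffer, not a Parameter: the embedding is fixed, not learned.
        self.register_buffer("pos_embed", build_2d_sincos_pos_embed(dim, h, w))

    def forward(self, x):
        b, h, w, c = x.shape  # assumes h, w match the values given at init
        return x + self.pos_embed.reshape(1, h, w, c)


# E.g. CAFormer-S18 at 224x224 input: stage 3 runs on a 14x14 grid with dim 320,
# so the PE would be inserted right before the first attention block.
pe = AddPosEmbed(dim=320, h=14, w=14)
x = torch.randn(2, 14, 14, 320)  # channels-last stage-3 input
x = pe(x)
```

Note that the fixed-size buffer ties the model to one input resolution; for other resolutions the embedding would need to be rebuilt or interpolated.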