Amshaker / SwiftFormer

[ICCV'23] Official repository of paper SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications

question about the input of EfficientAdditiveAttnetion #1


Berry-Wu commented 1 year ago

Hi, thanks for your great work. In your code, I find that the input of EfficientAdditiveAttention is reshaped from (B, C, H, W) to (B, H*W, C):

```python
self.attn(x.permute(0, 2, 3, 1).reshape(B, H * W, C))
```
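For concreteness, here is a minimal sketch of what that line does, with hypothetical shapes chosen only for illustration (batch 2, C=64, 14x14 feature map):

```python
import torch

B, C, H, W = 2, 64, 14, 14           # hypothetical shapes for illustration
x = torch.randn(B, C, H, W)

# Per-pixel tokenization: every spatial location becomes one token,
# so num_tokens = H*W and dim_token = C.
tokens = x.permute(0, 2, 3, 1).reshape(B, H * W, C)
print(tokens.shape)                  # torch.Size([2, 196, 64])
```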

My understanding is that num_tokens is H*W and dim_token is C, is that right? Does this give better results? Is it more efficient? What is the difference between your approach and splitting the feature map into patches, like below:

```python
from einops import rearrange

# from (B, C, H, W) --> (B, N, D)   N: num_tokens, D: dim_token
rearrange(feature_map, 'b c (w s1) (h s2) -> b (w h) (c s1 s2)',
          s1=patch_size, s2=patch_size)
```
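For comparison, this is what that rearrange produces with the same hypothetical feature map as above and a patch size of 2 (fewer tokens, larger token dimension):

```python
import torch
from einops import rearrange

B, C, H, W, patch_size = 2, 64, 14, 14, 2    # hypothetical shapes
feature_map = torch.randn(B, C, H, W)

# Patch tokenization: each s1 x s2 patch becomes one token,
# so N = (H/s1)*(W/s2) and D = C*s1*s2.
patches = rearrange(feature_map, 'b c (w s1) (h s2) -> b (w h) (c s1 s2)',
                    s1=patch_size, s2=patch_size)
print(patches.shape)                          # torch.Size([2, 49, 256])
```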

Looking forward to your reply! :)

Amshaker commented 1 year ago

Hi @Berry-Wu,

Thank you for your interest in our work.

Yes, in our code the number of tokens is H*W and the token dimension is C.

Splitting the features into patches is also doable and is another common approach; it increases the token dimension C (by a factor of s1*s2) and reduces the number of tokens N accordingly. We haven't tested dividing the feature maps into patches, but it would be interesting to try in terms of complexity, inference speed, and accuracy. A rough sketch of why this trade-off matters is below.
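For reference, here is a rough, hypothetical sketch of efficient additive attention (paraphrasing the paper's formulation with made-up names, not the exact repository code), showing why its cost is linear in the number of tokens N rather than quadratic:

```python
import torch
import torch.nn as nn

class AdditiveAttentionSketch(nn.Module):
    """Paraphrased sketch of efficient additive attention (hypothetical
    names, not the repository implementation). The attention itself is
    O(N * D): one score per token, a single pooled global query, and a
    broadcast query-key interaction -- no N x N attention matrix."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_a = nn.Parameter(torch.randn(dim, 1))  # learned scoring vector
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                             # x: (B, N, D)
        q = self.to_q(x)                              # (B, N, D)
        k = self.to_k(x)                              # (B, N, D)
        scores = (q @ self.w_a) * self.scale          # (B, N, 1), one score per token
        weights = scores.softmax(dim=1)
        global_q = (weights * q).sum(1, keepdim=True) # (B, 1, D) pooled global query
        return self.proj(global_q * k) + q            # broadcast interaction, linear in N

tokens = torch.randn(2, 196, 64)   # per-pixel tokens: N = 14*14, D = C = 64
print(AdditiveAttentionSketch(64)(tokens).shape)      # torch.Size([2, 196, 64])
```

Since the attention cost scales with N*D, patching shrinks N by s1*s2 while growing D by the same factor, so it mainly shifts cost from the attention part into the linear projections; which choice is faster or more accurate in practice would need the measurements mentioned above.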

Best regards, Abdelrahman

Berry-Wu commented 1 year ago

@Amshaker Thanks! I got it. Thank you again for your reply! :)