Berry-Wu opened this issue 1 year ago

Hi, thanks for your great work. In your code, I see that the input to EfficientAdditiveAttention is reshaped from (B, C, H, W) to (B, H*W, C).

My understanding is that num_tokens is H*W and dim_token is C. Is that right? Does it give better results, or is it more efficient? And what is the difference between your approach and splitting the features into patches, as sketched below?
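(Roughly along these lines; this is an illustrative sketch using einops and an example patch size, not your actual code:)

```python
import torch
from einops import rearrange

B, C, H, W = 2, 64, 28, 28             # example shapes only
p = 4                                  # illustrative patch size

x = torch.randn(B, C, H, W)

# Patch tokenization: each p x p patch becomes one token,
# so num_tokens = (H/p)*(W/p) and dim_token = C*p*p
patches = rearrange(x, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=p, p2=p)
print(patches.shape)                   # torch.Size([2, 49, 1024])
```

Looking forward to your reply! :)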
Hi @Berry-Wu,
Thank you for your interest in our work.
Yes, in our code the number of tokens is H*W and the token dimension is C.
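Concretely, the reshape amounts to this (a minimal sketch with example shapes, not the actual model configs):

```python
import torch

B, C, H, W = 2, 64, 28, 28             # example shapes only
x = torch.randn(B, C, H, W)            # feature map from the preceding stage

# Per-pixel tokenization: every spatial location becomes one token,
# so num_tokens = H*W and dim_token = C
tokens = x.flatten(2).transpose(1, 2)  # (B, C, H, W) -> (B, H*W, C)
print(tokens.shape)                    # torch.Size([2, 784, 64])
```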
Splitting the features into patches is also doable and is another common approach; for patch size p it reduces the number of tokens N from H*W to H*W/p^2 while increasing the token dimension from C to C*p^2. We haven't tested dividing the feature maps into patches, but it would be interesting to compare the two in terms of complexity, inference speed, and accuracy.
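For intuition on the complexity side (a back-of-envelope sketch, not numbers from our code; it assumes the usual linear token projections, whose cost scales as N*d^2 on top of the linear-attention term N*d):

```python
H, W, C, p = 28, 28, 64, 4                     # example shapes and patch size

n_pix, d_pix = H * W, C                        # per-pixel tokens: N = H*W, d = C
n_pat, d_pat = (H // p) * (W // p), C * p * p  # patch tokens

print(n_pix * d_pix, n_pat * d_pat)            # N*d:   50176 vs 50176 (identical)
print(n_pix * d_pix**2, n_pat * d_pat**2)      # N*d^2: 3211264 vs 51380224
```

The N*d term is unchanged (both equal H*W*C), but the projection term grows with the patch size, so the trade-off really needs measuring.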
Best regards, Abdelrahman
@Amshaker Thanks! I got it. Thank you again for your reply! :)