LeapLabTHU / FLatten-Transformer

Official repository of FLatten Transformer (ICCV 2023)

typical implementation form? #11

Closed Rosal-1998 closed 10 months ago

Rosal-1998 commented 10 months ago

I am attempting to reproduce your method on a general attention mechanism, specifically by replacing softmax attention with FLatten. However, I am having difficulty understanding the modifications made to Swin Transformer (SwinT) and Pyramid Vision Transformer (PVT). Could you provide a common implementation form? Thanks a lot!

tian-qing001 commented 10 months ago

Hi @Rosal-1998, just replace the Attention module in your model with FocusedLinearAttention.
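
For reference, here is a minimal sketch of what such a drop-in module could look like, based on the paper's description of focused linear attention (focusing function on Q/K plus a depth-wise convolution on V). This is not the repo's exact code: the default `focusing_factor=3`, the `kernel_size=3`, and the use of a 1D depth-wise conv over tokens (the paper applies a 2D DWC on the spatial feature map) are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class FocusedLinearAttention(nn.Module):
    """Sketch of a drop-in replacement for softmax attention on (B, N, C) tokens.

    Assumptions: focusing_factor, kernel_size, and the 1D depth-wise conv
    are illustrative defaults, not the official configuration.
    """

    def __init__(self, dim, num_heads=8, focusing_factor=3, qkv_bias=True):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.focusing_factor = focusing_factor
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.proj = nn.Linear(dim, dim)
        # Depth-wise conv on V restores feature diversity lost by the
        # low-rank linear attention map (per the paper).
        self.dwc = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def _focused_map(self, x):
        # Focusing function: ReLU, raise to power p, then rescale so the
        # feature norm is preserved (sharpens the attention distribution).
        x = torch.relu(x) + 1e-6
        norm = x.norm(dim=-1, keepdim=True)
        x = x ** self.focusing_factor
        return x / x.norm(dim=-1, keepdim=True) * norm

    def forward(self, x):
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)              # each (B, N, C)
        q, k = self._focused_map(q), self._focused_map(k)

        # Split heads -> (B, heads, N, head_dim)
        q, k, v = (t.reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))

        # Linear attention: compute K^T V first, O(N) in sequence length.
        kv = k.transpose(-2, -1) @ v                        # (B, h, d, d)
        k_sum = k.sum(dim=-2, keepdim=True)                 # (B, h, 1, d)
        z = 1.0 / ((q * k_sum).sum(dim=-1, keepdim=True) + 1e-6)
        out = (q @ kv) * z                                  # (B, h, N, d)

        # Merge heads and add the depth-wise conv branch on V.
        out = out.transpose(1, 2).reshape(B, N, C)
        v = v.transpose(1, 2).reshape(B, N, C)
        out = out + self.dwc(v.transpose(1, 2)).transpose(1, 2)
        return self.proj(out)
```

Usage on a generic token sequence, e.g. in place of a standard multi-head `Attention` block:

```python
attn = FocusedLinearAttention(dim=96, num_heads=3)
x = torch.randn(2, 196, 96)      # e.g. 14x14 tokens, 96 channels
print(attn(x).shape)             # torch.Size([2, 196, 96])
```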