Amshaker / SwiftFormer
[ICCV'23] Official repository of paper SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications
257 stars · 28 forks
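The repository's central technique, efficient additive attention, replaces the quadratic query–key dot-product of standard self-attention with a learned global query vector, giving cost linear in the number of tokens. A minimal NumPy sketch of that idea follows; the weight names, shapes, and the single-head layout are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def efficient_additive_attention(x, Wq, Wk, wa, Wout, bout):
    """Sketch of additive attention with a pooled global query.

    x: (n, d) token features; Wq, Wk, Wout: (d, d); wa: (d,); bout: (d,).
    All parameter names are hypothetical placeholders.
    """
    Q = x @ Wq                                      # (n, d) queries
    K = x @ Wk                                      # (n, d) keys
    # Per-token scalar scores: O(n*d), not the O(n^2) of dot-product attention.
    alpha = softmax(Q @ wa / np.sqrt(Q.shape[1]))   # (n,)
    # Pool queries into one global query vector.
    q_global = (alpha[:, None] * Q).sum(axis=0)     # (d,)
    # Element-wise global-query/key interaction, a linear layer, and a residual.
    return Q + (K * q_global) @ Wout + bout         # (n, d)
```

Because the only interaction across tokens is the weighted sum producing `q_global`, the whole computation stays linear in sequence length, which is what makes this attention variant attractive for real-time mobile inference.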
Issues (sorted: newest first)
- #17 Is in_dims unnecessary in the EfficientAdditiveAttnetion module? — plutolove233, opened 3 months ago · 0 comments
- #16 Inquiry About Lightweight Feature Extraction with Your Attention Mechanism — Zhangyuhaoo, opened 4 months ago · 1 comment
- #15 Update README.md — escorciav, closed 10 months ago · 0 comments
- #14 SwiftFormer meets Android — escorciav, opened 10 months ago · 11 comments
- #13 Confirm dependencies for latency performance — escorciav, closed 10 months ago · 4 comments
- #12 Fix this bug when setting distillation-type to none — ThomasCai, closed 12 months ago · 0 comments
- #11 When distillation-type is set to none, an error occurs — ThomasCai, closed 12 months ago · 2 comments
- #10 Request for SwiftFormer Segmentation Code on ADE20K Dataset — FelixEdwards, opened 1 year ago · 0 comments
- #9 iPhone 12 vs iPhone 14 — Feynman1999, closed 1 year ago · 1 comment
- #8 EfficientAdditiveAttnetion from a QKV-interaction perspective — lartpang, opened 1 year ago · 1 comment
- #7 What's the complexity of additive attention? — imMid-Star, closed 1 year ago · 2 comments
- #6 Code organization — Amshaker, closed 1 year ago · 0 comments
- #5 Question about distillation — yaozengwei, opened 1 year ago · 2 comments
- #4 Question about the softmax in EfficientAdditiveAttnetion — yaozengwei, closed 1 year ago · 2 comments
- #3 Cannot reproduce the fast mlmodel — YingkunZhou, closed 1 year ago · 2 comments
- #2 Some problems about the code — 123456789asdfjkl, closed 1 year ago · 4 comments
- #1 Question about the input of EfficientAdditiveAttnetion — Berry-Wu, opened 1 year ago · 2 comments