NVlabs / A-ViT
Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022)
Apache License 2.0 · 138 stars · 12 forks
Issues
#14 About inference stage — opened by YuanYeshang 6 months ago · 0 comments
#13 Non-zero outputs from discarded tokens — opened by bartwojcik 9 months ago · 0 comments
#12 FLOPs Reduction & Calculation — opened by oryany12 9 months ago · 1 comment
#11 A software supply-chain vulnerability detected — opened by ashishbijlani 1 year ago · 0 comments
#10 About inference stage — opened by SillyGao 1 year ago · 0 comments
#9 Cannot Reproduce Reported Accuracy — opened by johnheo 1 year ago · 0 comments
#8 Train with the provided hyperparameters and cannot get a small model with 78.8% acc1 and 3.6G FLOPs — opened by ShenZhang-Shin 1 year ago · 6 comments
#7 token nums — opened by sutiankang 1 year ago · 2 comments
#6 The accuracy of the DeiT-S pretrained model reported in the paper seems to be wrong — opened by YTianZHU 1 year ago · 1 comment
#5 Unable to Reproduce top-1 Accuracy — opened by mehtadushy 1 year ago · 2 comments
#4 A question about the halting score distribution code — opened by DYZhang09 1 year ago · 1 comment
#3 Training accuracy — opened by Mandy-77 1 year ago · 2 comments
#2 The inference time of A-ViT is the same as DeiT's — by dk-liang, closed 1 year ago · 2 comments
#1 A question about inference — by DYZhang09, closed 1 year ago · 1 comment