SHI-Labs / Compact-Transformers
Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
https://arxiv.org/abs/2104.05704
Apache License 2.0 · 495 stars · 77 forks
Issues
#84 please close this. (yuedajiong, closed 2 weeks ago, 3 comments)
#83 Where is the cct source code (Vzoooong, closed 1 month ago, 1 comment)
#82 Trouble with model function call in examples/main.py for CIFAR10 (BKJackson, opened 7 months ago, 4 comments)
#81 Is the accuracy on the test set in the figure 41.8% or 100%? (jjb202, closed 11 months ago, 2 comments)
#80 Need help (xhlho, opened 1 year ago, 1 comment)
#79 can you share more NLP-related scripts? (TjFish, opened 1 year ago, 3 comments)
#78 Flax implementation of Vit Lite? (dhyani15, opened 1 year ago, 1 comment)
#77 Unable to Replicate Text Classification Results (SethPoulsen, opened 1 year ago, 3 comments)
#76 How to test my trained model? (ws0352, opened 1 year ago, 0 comments)
#75 yml file settings (xhlho, opened 1 year ago, 0 comments)
#74 x += self.positional_emb mismatch (githubjqh, closed 1 year ago, 2 comments)
#73 Information about Text Classifier (SethPoulsen, closed 1 year ago, 7 comments)
#72 About Mask Autoencoder (ZK-Zhou, opened 1 year ago, 2 comments)
#71 Fix argument inconsistency in ViT-Lite (alihassanijr, closed 1 year ago, 0 comments)
#70 something wrong with vit-lite (EatonL, closed 1 year ago, 2 comments)
#69 NLP Results and CCT size (markNZed, opened 1 year ago, 2 comments)
#68 Minor fix (alihassanijr, closed 1 year ago, 0 comments)
#67 Training and evaluation scripts in examples folder (rmldj, closed 1 year ago, 1 comment)
#66 Update checkpoint links (alihassanijr, closed 1 year ago, 0 comments)
#65 Thank you for your nice work | Question on Flowers dataset (JosephKJ, closed 2 years ago, 10 comments)
#64 Question about the batch size (imhgchoi, closed 1 year ago, 2 comments)
#63 Output of the CCT classifier (enrico310786, closed 1 year ago, 2 comments)
#62 Dino self-supervised vision transformer (seekingdeep, closed 2 years ago, 1 comment)
#61 Image scaling and normalization (enrico310786, closed 2 years ago, 1 comment)
#60 Fixed text tokenizer mask shape (HosseinZaredar, opened 2 years ago, 0 comments)
#59 validation set (ireneb612, closed 2 years ago, 1 comment)
#58 Fine Tuning (ireneb612, closed 2 years ago, 1 comment)
#57 The question about Vit-lite model (TIEHua, closed 2 years ago, 2 comments)
#56 Sinusoidal PE fix (alihassanijr, closed 2 years ago, 0 comments)
#55 Fix checkpoint loading issues (alihassanijr, closed 2 years ago, 0 comments)
#54 Test (xuritian317, closed 2 years ago, 1 comment)
#53 AttributeError: 'TransformerClassifier' object has no attribute 'num_tokens' (XiaominLi1997, closed 2 years ago, 7 comments)
#52 change TextTokenizer 2DConvolution to 1D (simonlevine, opened 2 years ago, 1 comment)
#51 Mask forward fix (alihassanijr, closed 2 years ago, 0 comments)
#50 Text Masked Attention: TextTokenizer.forward() should return "new_mask" (simonlevine, closed 2 years ago, 2 comments)
#49 Fixes FC key mismatch for fine-tuning (alihassanijr, closed 2 years ago, 0 comments)
#48 Config for training Flowers SOTA (wsgharvey, closed 2 years ago, 6 comments)
#47 pretrrain (cenchaojun, closed 2 years ago, 2 comments)
#46 Training cct_7_7x2_224 on imagenet (iliasprc, closed 2 years ago, 2 comments)
#45 Question about reproducing CIFAR-10 results (wsgharvey, closed 2 years ago, 5 comments)
#44 CIFAR10-100 image dataset (iliasprc, closed 2 years ago, 1 comment)
#43 Calculation of No. of trainable parameters in ViT (zhoutianyu16tue, closed 2 years ago, 1 comment)
#42 Question: Why is sequence pooling more effective than a class token? (eware-godaddy, closed 2 years ago, 2 comments)
#41 interpolation of imagenet (JingyangXiang, closed 2 years ago, 4 comments)
#40 Creating custom model (iliasprc, closed 2 years ago, 4 comments)
#39 CIFAR-100 dataset split (hellcodes, closed 2 years ago, 1 comment)
#38 In the model_urls that you gave the pretrained model's paths, I don't understand some keys like the 't' in 'cct14t-7x2' or '_sine'. What does it mean? (xuritian317, closed 2 years ago, 1 comment)
#37 HyperParameters of cifar (JingyangXiang, closed 2 years ago, 5 comments)
#36 Adding evaluation feature (alibalapour, closed 2 years ago, 0 comments)
#35 Adding evaluation feature (alibalapour, closed 2 years ago, 0 comments)