Closed — githubjqh closed this issue 1 year ago
For the Flowers102 dataset, you should use models with image size 224, such as `cct_7_7x2_224`.
Did you set the `img_size` parameter for the models? `src/cct.py` contains all the pre-defined models and shows how you can create your own. Our registered models follow the format `cct_{num_layers}_{kernel_size}x{num_convs}_{img_size}`. A full list can be viewed here.
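The naming scheme above can be decoded mechanically. As an illustration only (the regex and helper below are mine, not part of the repository):

```python
import re

def parse_cct_name(name: str):
    """Split a registered CCT model name into its components.

    Format described above: cct_{num_layers}_{kernel_size}x{num_convs}_{img_size}.
    """
    m = re.fullmatch(r"cct_(\d+)_(\d+)x(\d+)_(\d+)", name)
    if m is None:
        raise ValueError(f"not a recognised cct model name: {name!r}")
    num_layers, kernel_size, num_convs, img_size = map(int, m.groups())
    return num_layers, kernel_size, num_convs, img_size

# e.g. cct_7_7x2_224 -> 7 layers, 7x7 kernel, 2 conv layers, 224px input
print(parse_cct_name("cct_7_7x2_224"))  # (7, 7, 2, 224)
```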
When I run the model `vit_2_4_32` on the Flowers102 dataset, this error occurs:
```
File "C:\pyprj\Compact-Transformers\src\vit.py", line 62, in forward
    return self.classifier(x)
File "C:\Users\asp\.conda\envs\detpy\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
File "C:\pyprj\Compact-Transformers\src\utils\transformers.py", line 199, in forward
    x += self.positional_emb
RuntimeError: The size of tensor a (3137) must match the size of tensor b (65) at non-singleton dimension 1
```
The size of `x` is (2, 3137, 65); the size of `positional_emb` is (1, 65, 65). When I switch to other models, this error still occurs. I use Python 3.8 and PyTorch 1.10.1. Thank you!
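The mismatch is consistent with the positional embedding being sized for 32×32 inputs while 224×224 Flowers102 images are being fed in. Assuming a stride-4 patch tokenizer plus one class token (inferred from the numbers in the traceback, not verified against the repo), the token counts work out exactly:

```python
def num_tokens(img_size: int, patch_size: int = 4, class_token: bool = True) -> int:
    """Tokens a ViT-style tokenizer produces for a square image of side img_size."""
    patches = (img_size // patch_size) ** 2
    return patches + (1 if class_token else 0)

# vit_2_4_32 builds its positional embedding for 32x32 inputs:
print(num_tokens(32))   # (32 // 4)**2 + 1 = 65, matching tensor b in the error
# Feeding 224x224 images instead produces:
print(num_tokens(224))  # (224 // 4)**2 + 1 = 3137, matching tensor a in the error
```

So either resize the dataset images down to the model's `img_size` (32), or pick a 224-variant model as suggested above.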